vdpa/mlx5: fix polling threads scheduling

Message ID 1612776481-151396-1-git-send-email-matan@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series: vdpa/mlx5: fix polling threads scheduling

Checks

Context Check Description
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-testing warning Testing issues
ci/travis-robot warning Travis build: failed
ci/checkpatch success coding style OK

Commit Message

Matan Azrad Feb. 8, 2021, 9:28 a.m. UTC
When the event mode uses a fixed delay of 0, the polling thread never
gives up the CPU.

So, when multiple polling threads are active, the context switches
between them are left to the system scheduler, which may affect latency
depending on the time slice the system chooses.

To fix the scheduling of the polling threads of multiple devices, this
patch forces a reschedule on each CQ poll iteration.

Also move the polling threads to the SCHED_RR policy with maximum
priority to complete the fairness between them.

Fixes: 6956a48cabbb ("vdpa/mlx5: set polling mode default delay to zero")

Signed-off-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
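
For readers unfamiliar with the pattern, here is a minimal sketch of the
zero-delay polling loop described above. poll_cqs_once() and delay_us are
placeholder names for this sketch only, not driver symbols; sched_yield()
is the standard equivalent of the pthread_yield() call used in the patch.

    #include <sched.h>
    #include <unistd.h>

    /* Placeholder for the driver's CQ polling work (hypothetical stub). */
    static void poll_cqs_once(void)
    {
        /* ... poll the completion queues of this device ... */
    }

    /* One polling iteration: sleep when a fixed delay is configured,
     * otherwise yield so sibling polling threads get a chance to run. */
    static void polling_iteration(unsigned int delay_us)
    {
        poll_cqs_once();
        if (delay_us)
            usleep(delay_us);
        else
            sched_yield();
    }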
  

Comments

Maxime Coquelin Feb. 8, 2021, 11:17 a.m. UTC | #1
On 2/8/21 10:28 AM, Matan Azrad wrote:
> When the event mode uses a fixed delay of 0, the polling thread never
> gives up the CPU.
> 
> So, when multiple polling threads are active, the context switches
> between them are left to the system scheduler, which may affect latency
> depending on the time slice the system chooses.
> 
> To fix the scheduling of the polling threads of multiple devices, this
> patch forces a reschedule on each CQ poll iteration.
> 
> Also move the polling threads to the SCHED_RR policy with maximum
> priority to complete the fairness between them.
> 
> Fixes: 6956a48cabbb ("vdpa/mlx5: set polling mode default delay to zero")
> 
> Signed-off-by: Matan Azrad <matan@nvidia.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa_event.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> index 0f635ff..86adc86 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> @@ -232,6 +232,9 @@
>  	}
>  	if (priv->timer_delay_us)
>  		usleep(priv->timer_delay_us);
> +	else
> +		/* Give-up CPU to improve polling threads scheduling. */
> +		pthread_yield();
>  }
>  
>  static void *
> @@ -500,6 +503,9 @@
>  	rte_cpuset_t cpuset;
>  	pthread_attr_t attr;
>  	char name[16];
> +	const struct sched_param sp = {
> +		.sched_priority = sched_get_priority_max(SCHED_RR),
> +	};
>  
>  	if (!priv->eventc)
>  		/* All virtqs are in poll mode. */
> @@ -520,6 +526,16 @@
>  			DRV_LOG(ERR, "Failed to set thread affinity.");
>  			return -1;
>  		}
> +		ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
> +		if (ret) {
> +			DRV_LOG(ERR, "Failed to set thread sched policy = RR.");
> +			return -1;
> +		}
> +		ret = pthread_attr_setschedparam(&attr, &sp);
> +		if (ret) {
> +			DRV_LOG(ERR, "Failed to set thread priority.");
> +			return -1;
> +		}
>  		ret = pthread_create(&priv->timer_tid, &attr,
>  				     mlx5_vdpa_poll_handle, (void *)priv);
>  		if (ret) {
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
  
Xueming Li Feb. 9, 2021, 3:15 a.m. UTC | #2
>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Maxime Coquelin
>Sent: Monday, February 8, 2021 7:17 PM
>To: Matan Azrad <matan@nvidia.com>; dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH] vdpa/mlx5: fix polling threads scheduling
>
>
>
>On 2/8/21 10:28 AM, Matan Azrad wrote:
>> When the event mode uses a fixed delay of 0, the polling thread never
>> gives up the CPU.
>>
>> So, when multiple polling threads are active, the context switches
>> between them are left to the system scheduler, which may affect
>> latency depending on the time slice the system chooses.
>>
>> To fix the scheduling of the polling threads of multiple devices, this
>> patch forces a reschedule on each CQ poll iteration.
>>
>> Also move the polling threads to the SCHED_RR policy with maximum
>> priority to complete the fairness between them.
>>
>> Fixes: 6956a48cabbb ("vdpa/mlx5: set polling mode default delay to
>> zero")
>>
>> Signed-off-by: Matan Azrad <matan@nvidia.com>
>> ---
>>  drivers/vdpa/mlx5/mlx5_vdpa_event.c | 16 ++++++++++++++++
>>  1 file changed, 16 insertions(+)
>>
>> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
>> b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
>> index 0f635ff..86adc86 100644
>> --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
>> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
>> @@ -232,6 +232,9 @@
>>  	}
>>  	if (priv->timer_delay_us)
>>  		usleep(priv->timer_delay_us);
>> +	else
>> +		/* Give-up CPU to improve polling threads scheduling. */
>> +		pthread_yield();
>>  }
>>
>>  static void *
>> @@ -500,6 +503,9 @@
>>  	rte_cpuset_t cpuset;
>>  	pthread_attr_t attr;
>>  	char name[16];
>> +	const struct sched_param sp = {
>> +		.sched_priority = sched_get_priority_max(SCHED_RR),
>> +	};
>>
>>  	if (!priv->eventc)
>>  		/* All virtqs are in poll mode. */
>> @@ -520,6 +526,16 @@
>>  			DRV_LOG(ERR, "Failed to set thread affinity.");
>>  			return -1;
>>  		}
>> +		ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
>> +		if (ret) {
>> +			DRV_LOG(ERR, "Failed to set thread sched policy = RR.");
>> +			return -1;
>> +		}
>> +		ret = pthread_attr_setschedparam(&attr, &sp);
>> +		if (ret) {
>> +			DRV_LOG(ERR, "Failed to set thread priority.");
>> +			return -1;
>> +		}
>>  		ret = pthread_create(&priv->timer_tid, &attr,
>>  				     mlx5_vdpa_poll_handle, (void *)priv);
>>  		if (ret) {
>>
>
>Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Acked-by: Xueming Li <xuemingl@mellanox.com>
  
Thomas Monjalon Feb. 10, 2021, 9:17 p.m. UTC | #3
> >> When the event mode uses a fixed delay of 0, the polling thread
> >> never gives up the CPU.
> >>
> >> So, when multiple polling threads are active, the context switches
> >> between them are left to the system scheduler, which may affect
> >> latency depending on the time slice the system chooses.
> >>
> >> To fix the scheduling of the polling threads of multiple devices,
> >> this patch forces a reschedule on each CQ poll iteration.
> >>
> >> Also move the polling threads to the SCHED_RR policy with maximum
> >> priority to complete the fairness between them.
> >>
> >> Fixes: 6956a48cabbb ("vdpa/mlx5: set polling mode default delay to
> >> zero")
> >>
> >> Signed-off-by: Matan Azrad <matan@nvidia.com>
> >
> > Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> Acked-by: Xueming Li <xuemingl@mellanox.com>
Email address converted to nvidia.com.

Applied, thanks
  

Patch

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 0f635ff..86adc86 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -232,6 +232,9 @@ 
 	}
 	if (priv->timer_delay_us)
 		usleep(priv->timer_delay_us);
+	else
+		/* Give-up CPU to improve polling threads scheduling. */
+		pthread_yield();
 }
 
 static void *
@@ -500,6 +503,9 @@ 
 	rte_cpuset_t cpuset;
 	pthread_attr_t attr;
 	char name[16];
+	const struct sched_param sp = {
+		.sched_priority = sched_get_priority_max(SCHED_RR),
+	};
 
 	if (!priv->eventc)
 		/* All virtqs are in poll mode. */
@@ -520,6 +526,16 @@ 
 			DRV_LOG(ERR, "Failed to set thread affinity.");
 			return -1;
 		}
+		ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to set thread sched policy = RR.");
+			return -1;
+		}
+		ret = pthread_attr_setschedparam(&attr, &sp);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to set thread priority.");
+			return -1;
+		}
 		ret = pthread_create(&priv->timer_tid, &attr,
 				     mlx5_vdpa_poll_handle, (void *)priv);
 		if (ret) {
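
For reference, a self-contained sketch of creating a thread under SCHED_RR
with maximum priority, as the last two hunks do. The
pthread_attr_setinheritsched() call and the names poll_handle() and
create_rr_poll_thread() are additions for this sketch, not part of the
patch; on Linux, scheduling settings placed in the attribute object are
ignored unless explicit scheduling is requested, and a real-time policy at
high priority typically requires CAP_SYS_NICE (otherwise pthread_create()
fails with EPERM).

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Stand-in for the driver's polling handler (hypothetical). */
    static void *poll_handle(void *arg)
    {
        (void)arg;
        return NULL;
    }

    static int create_rr_poll_thread(pthread_t *tid)
    {
        pthread_attr_t attr;
        const struct sched_param sp = {
            .sched_priority = sched_get_priority_max(SCHED_RR),
        };
        int ret;

        if (pthread_attr_init(&attr))
            return -1;
        /* Not in the patch: make the attribute policy/priority effective. */
        ret = pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        if (!ret)
            ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
        if (!ret)
            ret = pthread_attr_setschedparam(&attr, &sp);
        if (!ret)
            ret = pthread_create(tid, &attr, poll_handle, NULL);
        pthread_attr_destroy(&attr);
        if (ret) {
            fprintf(stderr, "Failed to create SCHED_RR polling thread.\n");
            return -1;
        }
        return 0;
    }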