From patchwork Wed Dec 2 23:36:42 2020
X-Patchwork-Submitter: Xueming Li <xuemingl@nvidia.com>
X-Patchwork-Id: 84725
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li <xuemingl@nvidia.com>
To: Matan Azrad, Viacheslav Ovsiienko, Maxime Coquelin
Cc: dev@dpdk.org, xuemingl@nvidia.com, Asaf Penso
Date: Wed, 2 Dec 2020 23:36:42 +0000
Message-Id: <1606952203-23310-3-git-send-email-xuemingl@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1606952203-23310-1-git-send-email-xuemingl@nvidia.com>
References: <1606952203-23310-1-git-send-email-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 3/4] vdpa/mlx5: add cpu core parameter to bind polling thread

This patch adds a new device argument to specify the CPU core affinity of
the event polling thread, for better latency and throughput. The thread can
also be located by its name, "vDPA-mlx5-<vid>".

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/vdpadevs/mlx5.rst        |  5 +++++
 drivers/vdpa/mlx5/mlx5_vdpa.c       |  7 +++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 23 ++++++++++++++++++++++-
 4 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index 903fdb0e60..20254257c9 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -134,6 +134,11 @@ Driver options
   interrupts are configured to the device in order to notify traffic for the
   driver. Default value is 2s.
 
+- ``event_core`` parameter [int]
+
+  CPU core number to set polling thread affinity to; defaults to the control
+  plane CPU.
+
 Error handling
 ^^^^^^^^^^^^^^
 
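(Editorial note, not part of the patch.) Like the other driver options
documented above, ``event_core`` is a device argument, so it is passed
together with ``class=vdpa`` in the device's devargs string. A hypothetical
EAL invocation (the application name and PCI address are placeholders):

  <dpdk-app> -a 0000:01:00.2,class=vdpa,event_core=2

If ``event_core`` is not given, the polling thread keeps the control-plane
CPU set, i.e. the main lcore's, as the driver code below shows.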
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 5020a99fae..1f92c529c9 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -612,6 +612,7 @@ mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque)
 {
 	struct mlx5_vdpa_priv *priv = opaque;
 	unsigned long tmp;
+	int n_cores = sysconf(_SC_NPROCESSORS_ONLN);
 
 	if (strcmp(key, "class") == 0)
 		return 0;
@@ -630,6 +631,11 @@ mlx5_vdpa_args_check_handler(const char *key, const char *val, void *opaque)
 		priv->event_us = (uint32_t)tmp;
 	} else if (strcmp(key, "no_traffic_time") == 0) {
 		priv->no_traffic_time_s = (uint32_t)tmp;
+	} else if (strcmp(key, "event_core") == 0) {
+		if (tmp >= (unsigned long)n_cores)
+			DRV_LOG(WARNING, "Invalid event_core %s.", val);
+		else
+			priv->event_core = tmp;
 	} else {
 		DRV_LOG(WARNING, "Invalid key %s.", key);
 	}
@@ -643,6 +649,7 @@ mlx5_vdpa_config_get(struct rte_devargs *devargs, struct mlx5_vdpa_priv *priv)
 
 	priv->event_mode = MLX5_VDPA_EVENT_MODE_DYNAMIC_TIMER;
 	priv->event_us = 0;
+	priv->event_core = -1;
 	priv->no_traffic_time_s = MLX5_VDPA_DEFAULT_NO_TRAFFIC_TIME_S;
 	if (devargs == NULL)
 		return;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 08e04a86c4..b4dd3834aa 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -131,6 +131,7 @@ struct mlx5_vdpa_priv {
 	pthread_cond_t timer_cond;
 	volatile uint8_t timer_on;
 	int event_mode;
+	int event_core; /* Event thread cpu affinity core. */
 	uint32_t event_us;
 	uint32_t timer_delay_us;
 	uint32_t no_traffic_time_s;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 5366937e03..f731c80004 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -532,6 +532,9 @@ int
 mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv)
 {
 	int ret;
+	rte_cpuset_t cpuset;
+	pthread_attr_t attr;
+	char name[16];
 
 	if (!priv->eventc)
 		/* All virtqs are in poll mode. */
@@ -540,12 +543,30 @@ mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv)
 		pthread_mutex_init(&priv->timer_lock, NULL);
 		pthread_cond_init(&priv->timer_cond, NULL);
 		priv->timer_on = 0;
-		ret = pthread_create(&priv->timer_tid, NULL,
+		pthread_attr_init(&attr);
+		CPU_ZERO(&cpuset);
+		if (priv->event_core != -1)
+			CPU_SET(priv->event_core, &cpuset);
+		else
+			cpuset = rte_lcore_cpuset(rte_get_main_lcore());
+		ret = pthread_attr_setaffinity_np(&attr, sizeof(cpuset),
+						  &cpuset);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to set thread affinity.");
+			return -1;
+		}
+		ret = pthread_create(&priv->timer_tid, &attr,
 				     mlx5_vdpa_poll_handle, (void *)priv);
 		if (ret) {
 			DRV_LOG(ERR, "Failed to create timer thread.");
 			return -1;
 		}
+		snprintf(name, sizeof(name), "vDPA-mlx5-%d", priv->vid);
+		ret = pthread_setname_np(priv->timer_tid, name);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to set timer thread name.");
+			return -1;
+		}
 	}
 	priv->intr_handle.fd = priv->eventc->fd;
 	priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
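
(Editorial note, not part of the patch.) For readers unfamiliar with the
pthread calls above, here is a minimal, self-contained sketch of the same
technique on Linux/glibc: pin a thread to a core by setting the affinity on
the creation attribute, then give it a searchable name. The core number and
the name suffix are hypothetical stand-ins for the event_core devarg and
priv->vid.

/*
 * Editorial sketch, not driver code. Requires _GNU_SOURCE for
 * pthread_attr_setaffinity_np() and pthread_setname_np().
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void *
poll_handle(void *arg)
{
	(void)arg;
	sleep(1); /* Stand-in for the driver's polling loop. */
	return NULL;
}

int
main(void)
{
	pthread_t tid;
	pthread_attr_t attr;
	cpu_set_t cpuset;
	long core = 1;	/* Hypothetical event_core value. */
	char name[16];	/* Linux limits thread names to 15 chars + NUL. */

	/* Same bounds check as the devargs handler above. */
	if (core >= sysconf(_SC_NPROCESSORS_ONLN)) {
		fprintf(stderr, "Invalid core %ld.\n", core);
		return 1;
	}
	pthread_attr_init(&attr);
	CPU_ZERO(&cpuset);
	CPU_SET(core, &cpuset);
	/* Affinity set on the attribute applies before the thread runs. */
	if (pthread_attr_setaffinity_np(&attr, sizeof(cpuset), &cpuset) ||
	    pthread_create(&tid, &attr, poll_handle, NULL)) {
		fprintf(stderr, "Failed to create pinned thread.\n");
		return 1;
	}
	snprintf(name, sizeof(name), "vDPA-mlx5-%d", 0); /* vid stand-in */
	pthread_setname_np(tid, name); /* Visible in ps/top/gdb. */
	pthread_join(tid, NULL);
	pthread_attr_destroy(&attr);
	return 0;
}

Setting the affinity on the attribute, rather than calling
pthread_setaffinity_np() after pthread_create(), guarantees the polling
thread never executes on an unintended core; the short name is what lets
the thread be "located by name" as the commit message says, e.g. in
top -H, ps, or gdb.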