From patchwork Wed Oct 28 10:44:38 2020
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 82629
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: Matan Azrad, Viacheslav Ovsiienko, Maxime Coquelin
Cc: dev@dpdk.org, xuemingl@nvidia.com, Asaf Penso, stable@dpdk.org
Date: Wed, 28 Oct 2020 10:44:38 +0000
Message-Id: <1603881879-19275-1-git-send-email-xuemingl@nvidia.com>
In-Reply-To: <1603710656-32187-1-git-send-email-xuemingl@nvidia.com>
References: <1603710656-32187-1-git-send-email-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v1 1/2] common/mlx5: get number of ports that can be bonded

Get HCA capability: number of physical ports that can be bonded.
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
Acked-by: Matan Azrad
Reviewed-by: Maxime Coquelin
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 5 +++--
 drivers/common/mlx5/mlx5_devx_cmds.h | 1 +
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 8aee12d527..e748d034d0 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -711,6 +711,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 	attr->non_wire_sq = MLX5_GET(cmd_hca_cap, hcattr, non_wire_sq);
 	attr->log_max_static_sq_wq = MLX5_GET(cmd_hca_cap, hcattr,
 					      log_max_static_sq_wq);
+	attr->num_lag_ports = MLX5_GET(cmd_hca_cap, hcattr, num_lag_ports);
 	attr->dev_freq_khz = MLX5_GET(cmd_hca_cap, hcattr,
 				      device_frequency_khz);
 	attr->scatter_fcs_w_decap_disable =
@@ -1429,8 +1430,8 @@ mlx5_devx_cmd_create_tis(void *ctx,
 	tis_ctx = MLX5_ADDR_OF(create_tis_in, in, ctx);
 	MLX5_SET(tisc, tis_ctx, strict_lag_tx_port_affinity,
 		 tis_attr->strict_lag_tx_port_affinity);
-	MLX5_SET(tisc, tis_ctx, strict_lag_tx_port_affinity,
-		 tis_attr->strict_lag_tx_port_affinity);
+	MLX5_SET(tisc, tis_ctx, lag_tx_port_affinity,
+		 tis_attr->lag_tx_port_affinity);
 	MLX5_SET(tisc, tis_ctx, prio, tis_attr->prio);
 	MLX5_SET(tisc, tis_ctx, transport_domain,
 		 tis_attr->transport_domain);
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index abbea67784..3781fedd9e 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -99,6 +99,7 @@ struct mlx5_hca_attr {
 	uint32_t cross_channel:1;
 	uint32_t non_wire_sq:1; /* SQ with non-wire ops is supported. */
 	uint32_t log_max_static_sq_wq:5; /* Static WQE size SQ. */
+	uint32_t num_lag_ports:4; /* Number of ports can be bonded. */
 	uint32_t dev_freq_khz; /* Timestamp counter frequency, kHz. */
 	uint32_t scatter_fcs_w_decap_disable:1;
 	uint32_t regex:1;

From patchwork Wed Oct 28 10:44:39 2020
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 82630
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
To: Matan Azrad, Viacheslav Ovsiienko, Maxime Coquelin
Cc: dev@dpdk.org, xuemingl@nvidia.com, Asaf Penso, matan@mellanox.com, stable@dpdk.org
Date: Wed, 28 Oct 2020 10:44:39 +0000
Message-Id: <1603881879-19275-2-git-send-email-xuemingl@nvidia.com>
In-Reply-To: <1603881879-19275-1-git-send-email-xuemingl@nvidia.com>
References: <1603881879-19275-1-git-send-email-xuemingl@nvidia.com>
In-Reply-To: <1603710656-32187-1-git-send-email-xuemingl@nvidia.com>
References: <1603710656-32187-1-git-send-email-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v1 2/2] vdpa/mlx5: specify lag port affinity

If the TIS LAG port affinity is left as auto (0), firmware assigns the
port affinity at each TIS creation in round-robin order. With two PFs,
if a virtq is created, destroyed and created again, every virtq then
gets the same port affinity. To work around this firmware limitation,
this patch creates one TIS per PF, each with an explicitly specified
affinity.

Fixes: bff735011078 ("vdpa/mlx5: prepare virtio queues")
Cc: matan@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
Acked-by: Matan Azrad
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       |  3 +++
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  3 ++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 23 ++++++++++++++---------
 3 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 2d88633bfd..43e84f034e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -730,6 +730,9 @@ mlx5_vdpa_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	}
 	priv->caps = attr.vdpa;
 	priv->log_max_rqt_size = attr.log_max_rqt_size;
+	priv->num_lag_ports = attr.num_lag_ports;
+	if (attr.num_lag_ports == 0)
+		priv->num_lag_ports = 1;
 	priv->ctx = ctx;
 	priv->pci_dev = pci_dev;
 	priv->var = mlx5_glue->dv_alloc_var(ctx, 0);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index fcbc12ab0c..c8c1adfde4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -146,8 +146,9 @@ struct mlx5_vdpa_priv {
 	struct mlx5dv_devx_uar *uar;
 	struct rte_intr_handle intr_handle;
 	struct mlx5_devx_obj *td;
-	struct mlx5_devx_obj *tis;
+	struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
 	uint16_t nr_virtqs;
+	uint8_t num_lag_ports;
 	uint64_t features; /* Negotiated features. */
 	uint16_t log_max_rqt_size;
 	struct mlx5_vdpa_steer steer;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 17e71cf4f4..4724baca4e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -103,12 +103,13 @@ void
 mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 {
 	int i;
-
 	for (i = 0; i < priv->nr_virtqs; i++)
 		mlx5_vdpa_virtq_unset(&priv->virtqs[i]);
-	if (priv->tis) {
-		claim_zero(mlx5_devx_cmd_destroy(priv->tis));
-		priv->tis = NULL;
+	for (i = 0; i < priv->num_lag_ports; i++) {
+		if (priv->tiss[i]) {
+			claim_zero(mlx5_devx_cmd_destroy(priv->tiss[i]));
+			priv->tiss[i] = NULL;
+		}
 	}
 	if (priv->td) {
 		claim_zero(mlx5_devx_cmd_destroy(priv->td));
@@ -302,7 +303,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	attr.hw_used_index = last_used_idx;
 	attr.q_size = vq.size;
 	attr.mkey = priv->gpa_mkey_index;
-	attr.tis_id = priv->tis->id;
+	attr.tis_id = priv->tiss[(index / 2) % priv->num_lag_ports]->id;
 	attr.queue_index = index;
 	attr.pd = priv->pdn;
 	virtq->virtq = mlx5_devx_cmd_create_virtq(priv->ctx, &attr);
@@ -432,10 +433,14 @@ mlx5_vdpa_virtqs_prepare(struct mlx5_vdpa_priv *priv)
 		return -rte_errno;
 	}
 	tis_attr.transport_domain = priv->td->id;
-	priv->tis = mlx5_devx_cmd_create_tis(priv->ctx, &tis_attr);
-	if (!priv->tis) {
-		DRV_LOG(ERR, "Failed to create TIS.");
-		goto error;
+	for (i = 0; i < priv->num_lag_ports; i++) {
+		/* 0 is auto affinity, non-zero value to propose port. */
+		tis_attr.lag_tx_port_affinity = i + 1;
+		priv->tiss[i] = mlx5_devx_cmd_create_tis(priv->ctx, &tis_attr);
+		if (!priv->tiss[i]) {
+			DRV_LOG(ERR, "Failed to create TIS %u.", i);
+			goto error;
+		}
	}
 	priv->nr_virtqs = nr_vring;
 	for (i = 0; i < nr_vring; i++)