From patchwork Thu Oct  8 14:16:59 2020
X-Patchwork-Submitter: Bing Zhao
X-Patchwork-Id: 80060
X-Patchwork-Delegate: rasland@nvidia.com
From: Bing Zhao
To: viacheslavo@mellanox.com, matan@mellanox.com
Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com
Date: Thu, 8 Oct 2020 22:16:59 +0800
Message-Id: <1602166620-46303-4-git-send-email-bingz@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1602166620-46303-1-git-send-email-bingz@nvidia.com>
References: <1602166620-46303-1-git-send-email-bingz@nvidia.com>
Subject: [dpdk-dev] [PATCH 3/4] net/mlx5: conditional hairpin auto bind

In single-port hairpin mode, once the queues have been configured
during startup, the binding process runs automatically in the port
start phase and the default control flow for egress is created. When
switching to two-port hairpin mode, the auto-bind step should be
skipped if no TX queue has its peer RX queue on the same device, and
it should also be skipped if the queues are configured with the
manual-bind attribute.
If the explicit TX flow rule mode is configured, or the hairpin is
between two ports, the default control flows for the TX queues should
not be created.

Signed-off-by: Bing Zhao
---
 drivers/net/mlx5/mlx5_trigger.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index f326b57..77d84dd 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -214,6 +214,8 @@
 	struct mlx5_devx_obj *rq;
 	unsigned int i;
 	int ret = 0;
+	bool need_auto = false;
+	uint16_t self_port = dev->data->port_id;
 
 	for (i = 0; i != priv->txqs_n; ++i) {
 		txq_ctrl = mlx5_txq_get(dev, i);
@@ -223,6 +225,25 @@
 			mlx5_txq_release(dev, i);
 			continue;
 		}
+		if (txq_ctrl->hairpin_conf.peers[0].port != self_port)
+			continue;
+		if (txq_ctrl->hairpin_conf.manual_bind) {
+			mlx5_txq_release(dev, i);
+			return 0;
+		}
+		need_auto = true;
+		mlx5_txq_release(dev, i);
+	}
+	if (!need_auto)
+		return 0;
+	for (i = 0; i != priv->txqs_n; ++i) {
+		txq_ctrl = mlx5_txq_get(dev, i);
+		if (!txq_ctrl)
+			continue;
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+			mlx5_txq_release(dev, i);
+			continue;
+		}
 		if (!txq_ctrl->obj) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no txq object found: %d",
@@ -798,9 +819,13 @@
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
+	/*
+	 * This step is skipped if there is no hairpin TX queue configured
+	 * with an RX peer queue on the same device.
+	 */
 	ret = mlx5_hairpin_auto_bind(dev);
 	if (ret) {
-		DRV_LOG(ERR, "port %u hairpin binding failed: %s",
+		DRV_LOG(ERR, "port %u hairpin auto binding failed: %s",
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
@@ -949,7 +974,10 @@
 		struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i);
 
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) {
+		if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN &&
+		    !txq_ctrl->hairpin_conf.manual_bind &&
+		    txq_ctrl->hairpin_conf.peers[0].port ==
+		    priv->dev_data->port_id) {
 			ret = mlx5_ctrl_flow_source_queue(dev, i);
 			if (ret) {
 				mlx5_txq_release(dev, i);
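
The decision logic the patch adds to mlx5_hairpin_auto_bind() can be
summarized in a small standalone sketch. Note this is only an
illustration of the skip conditions described above: the struct
`hairpin_txq` and the helper `need_auto_bind()` are simplified,
hypothetical stand-ins, not the real mlx5 driver definitions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified, hypothetical view of a hairpin TX queue's config. */
struct hairpin_txq {
	bool is_hairpin;    /* queue is a hairpin TX queue */
	bool manual_bind;   /* configured with the manual-bind attribute */
	uint16_t peer_port; /* port id of the peer RX queue */
};

/*
 * Returns true when the automatic binding pass should run: at least
 * one hairpin TX queue peers with an RX queue on the same port
 * (single-port mode), and no such queue requests manual bind.
 */
static bool
need_auto_bind(const struct hairpin_txq *txqs, unsigned int n,
	       uint16_t self_port)
{
	bool need_auto = false;
	unsigned int i;

	for (i = 0; i != n; ++i) {
		if (!txqs[i].is_hairpin)
			continue;
		if (txqs[i].peer_port != self_port)
			continue;       /* two-port hairpin: not ours to bind */
		if (txqs[i].manual_bind)
			return false;   /* manual bind requested: skip auto bind */
		need_auto = true;
	}
	return need_auto;
}
```

Usage mirrors the two-pass structure in the diff: the driver first
scans all TX queues to decide whether auto bind applies at all, and
only then performs the actual binding pass.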