From patchwork Thu Oct 8 14:16:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bing Zhao X-Patchwork-Id: 80058 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E7B97A04BC; Thu, 8 Oct 2020 16:17:32 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1EC1F1C121; Thu, 8 Oct 2020 16:17:15 +0200 (CEST) Received: from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130]) by dpdk.org (Postfix) with ESMTP id 5C3F11C117 for ; Thu, 8 Oct 2020 16:17:14 +0200 (CEST) From: Bing Zhao To: viacheslavo@mellanox.com, matan@mellanox.com Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com Date: Thu, 8 Oct 2020 22:16:57 +0800 Message-Id: <1602166620-46303-2-git-send-email-bingz@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1602166620-46303-1-git-send-email-bingz@nvidia.com> References: <1602166620-46303-1-git-send-email-bingz@nvidia.com> Subject: [dpdk-dev] [PATCH 1/4] net/mlx5: remove hairpin queue peer port checking X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In the current implementation of the single-port hairpin mode, the peer queue must belong to the same port as the current queue. When the two-port hairpin mode is introduced, this check has to be removed so that hairpin queue setup between two ports can succeed.
Signed-off-by: Bing Zhao --- drivers/net/mlx5/mlx5_rxq.c | 4 +--- drivers/net/mlx5/mlx5_txq.c | 4 +--- 2 files changed, 2 insertions(+), 6 deletions(-) diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index f1d8373..66abce7 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -776,9 +776,7 @@ res = mlx5_rx_queue_pre_setup(dev, idx, &desc); if (res) return res; - if (hairpin_conf->peer_count != 1 || - hairpin_conf->peers[0].port != dev->data->port_id || - hairpin_conf->peers[0].queue >= priv->txqs_n) { + if (hairpin_conf->peer_count != 1) { DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u " " invalid hairpind configuration", dev->data->port_id, idx); diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c index af84f5f..17a9f5a 100644 --- a/drivers/net/mlx5/mlx5_txq.c +++ b/drivers/net/mlx5/mlx5_txq.c @@ -421,9 +421,7 @@ res = mlx5_tx_queue_pre_setup(dev, idx, &desc); if (res) return res; - if (hairpin_conf->peer_count != 1 || - hairpin_conf->peers[0].port != dev->data->port_id || - hairpin_conf->peers[0].queue >= priv->rxqs_n) { + if (hairpin_conf->peer_count != 1) { DRV_LOG(ERR, "port %u unable to setup hairpin queue index %u " " invalid hairpind configuration", dev->data->port_id, idx); From patchwork Thu Oct 8 14:16:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bing Zhao X-Patchwork-Id: 80059 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E3C46A04BC; Thu, 8 Oct 2020 16:17:53 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id ABF271C12D; Thu, 8 Oct 2020 16:17:19 +0200 (CEST) Received: from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130]) by dpdk.org (Postfix) with ESMTP id 
263141C12A for ; Thu, 8 Oct 2020 16:17:17 +0200 (CEST) From: Bing Zhao To: viacheslavo@mellanox.com, matan@mellanox.com Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com Date: Thu, 8 Oct 2020 22:16:58 +0800 Message-Id: <1602166620-46303-3-git-send-email-bingz@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1602166620-46303-1-git-send-email-bingz@nvidia.com> References: <1602166620-46303-1-git-send-email-bingz@nvidia.com> Subject: [dpdk-dev] [PATCH 2/4] net/mlx5: add support for two ports hairpin mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In order to support hairpin between two ports, the mlx5 PMD needs to implement these functions and provide them as function pointers. The bind and unbind functions are executed per port pair. All the hairpin queues between the two ports must have the same attributes during queue setup; different configurations among queue pairs of the same port pair are not supported. Hairpin in only one direction between two ports is allowed. To set up the connection between two queues, the peer RX queue HW information is fetched via the internal RTE API and used to modify the SQ object; the RQ object is then modified with the TX queue HW information. The reverse order is not supported right now. When disconnecting a queue pair, the SQ and RQ objects are reset without any peer HW information. The unbind operation tries to disconnect all TX queues of the port from the RX queues of the peer port. The TX explicit mode attribute is saved and used when creating a hairpin flow.
Signed-off-by: Bing Zhao --- drivers/net/mlx5/linux/mlx5_os.c | 10 + drivers/net/mlx5/mlx5.h | 19 ++ drivers/net/mlx5/mlx5_rxtx.h | 2 + drivers/net/mlx5/mlx5_trigger.c | 470 ++++++++++++++++++++++++++++++++++++++- 4 files changed, 499 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 487714f..ee8e1bb 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -2530,6 +2530,11 @@ .get_module_eeprom = mlx5_get_module_eeprom, .hairpin_cap_get = mlx5_hairpin_cap_get, .mtr_ops_get = mlx5_flow_meter_ops_get, + .hairpin_bind = mlx5_hairpin_bind, + .hairpin_unbind = mlx5_hairpin_unbind, + .hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update, + .hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind, + .hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind, }; /* Available operations from secondary process. */ @@ -2608,4 +2613,9 @@ .get_module_eeprom = mlx5_get_module_eeprom, .hairpin_cap_get = mlx5_hairpin_cap_get, .mtr_ops_get = mlx5_flow_meter_ops_get, + .hairpin_bind = mlx5_hairpin_bind, + .hairpin_unbind = mlx5_hairpin_unbind, + .hairpin_queue_peer_update = mlx5_hairpin_queue_peer_update, + .hairpin_queue_peer_bind = mlx5_hairpin_queue_peer_bind, + .hairpin_queue_peer_unbind = mlx5_hairpin_queue_peer_unbind, }; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 87d3c15..80d0859 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -878,6 +878,14 @@ struct mlx5_priv { #define PORT_ID(priv) ((priv)->dev_data->port_id) #define ETH_DEV(priv) (&rte_eth_devices[PORT_ID(priv)]) +struct rte_hairpin_peer_info { + uint32_t qp_id; + uint32_t vhca_id; + uint16_t peer_q; + uint16_t tx_explicit; + uint16_t manual_bind; +}; + /* mlx5.c */ int mlx5_getenv_int(const char *); @@ -1028,6 +1036,17 @@ void mlx5_vlan_vmwa_acquire(struct rte_eth_dev *dev, int mlx5_traffic_enable(struct rte_eth_dev *dev); void mlx5_traffic_disable(struct rte_eth_dev 
*dev); int mlx5_traffic_restart(struct rte_eth_dev *dev); +int mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, + struct rte_hairpin_peer_info *current_info, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction); +int mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction); +int mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, + uint32_t direction); +int mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port); +int mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port); /* mlx5_flow.c */ diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h index 674296e..ac612ca 100644 --- a/drivers/net/mlx5/mlx5_rxtx.h +++ b/drivers/net/mlx5/mlx5_rxtx.h @@ -184,6 +184,7 @@ struct mlx5_rxq_ctrl { void *wq_umem; /* WQ buffer registration info. */ void *cq_umem; /* CQ buffer registration info. */ struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ + uint32_t hairpin_status; }; /* TX queue send local data. */ @@ -279,6 +280,7 @@ struct mlx5_txq_ctrl { off_t uar_mmap_offset; /* UAR mmap offset for non-primary process. */ void *bf_reg; /* BlueFlame register from Verbs. */ uint16_t dump_file_n; /* Number of dump files. */ + uint32_t hairpin_status; struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ struct mlx5_txq_data txq; /* Data path structure. */ /* Must be the last field in the structure, contains elts[]. */ diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index e72e5fb..f326b57 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -203,7 +203,7 @@ * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_hairpin_bind(struct rte_eth_dev *dev) +mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_devx_modify_sq_attr sq_attr = { 0 }; @@ -281,6 +281,472 @@ return -rte_errno; } +int +mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, + struct rte_hairpin_peer_info *current_info, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction) +{ + struct mlx5_priv *priv = dev->data->dev_private; + (void)current_info; + + /* + * Peer port used as egress. In the current design, hairpin TX queue + * will be bound to the peer RX queue. Indeed, only the information of + * peer RX queue needs to be fetched. + */ + if (direction) { + struct mlx5_txq_ctrl *txq_ctrl; + + txq_ctrl = mlx5_txq_get(dev, peer_queue); + if (!txq_ctrl) { + rte_errno = EINVAL; + return -rte_errno; + } + if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d not a hairpin txq", + dev->data->port_id, peer_queue); + mlx5_txq_release(dev, peer_queue); + return -rte_errno; + } + if (!txq_ctrl->obj || !txq_ctrl->obj->sq) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "port %u no txq object found: %d", + dev->data->port_id, peer_queue); + mlx5_txq_release(dev, peer_queue); + return -rte_errno; + } + peer_info->qp_id = txq_ctrl->obj->sq->id; + peer_info->vhca_id = priv->config.hca_attr.vhca_id; + /* 1-to-1 mapping, only the first is used. */ + peer_info->peer_q = txq_ctrl->hairpin_conf.peers[0].queue; + peer_info->tx_explicit = txq_ctrl->hairpin_conf.tx_explicit; + peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind; + mlx5_txq_release(dev, peer_queue); + } else { /* Peer port used as ingress. 
*/ + struct mlx5_rxq_ctrl *rxq_ctrl; + + rxq_ctrl = mlx5_rxq_get(dev, peer_queue); + if (!rxq_ctrl) { + rte_errno = EINVAL; + return -rte_errno; + } + if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d not a hairpin rxq", + dev->data->port_id, peer_queue); + mlx5_rxq_release(dev, peer_queue); + return -rte_errno; + } + if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "port %u no rxq object found: %d", + dev->data->port_id, peer_queue); + mlx5_rxq_release(dev, peer_queue); + return -rte_errno; + } + peer_info->qp_id = rxq_ctrl->obj->rq->id; + peer_info->vhca_id = priv->config.hca_attr.vhca_id; + peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue; + peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit; + peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind; + mlx5_rxq_release(dev, peer_queue); + } + return 0; +} + +int +mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue, + struct rte_hairpin_peer_info *peer_info, + uint32_t direction) +{ + int ret = 0; + + /* + * Consistency checking of the peer queue: opposite direction is used + * to get the peer queue info with ethdev index, no need to check. 
+ */ + if (peer_info->peer_q != cur_queue) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d and peer queue %d mismatch", + dev->data->port_id, cur_queue, peer_info->peer_q); + return -rte_errno; + } + if (!direction) { + struct mlx5_txq_ctrl *txq_ctrl; + struct mlx5_devx_modify_sq_attr sq_attr = { 0 }; + + txq_ctrl = mlx5_txq_get(dev, cur_queue); + if (!txq_ctrl) { + rte_errno = EINVAL; + return -rte_errno; + } + if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d not a hairpin txq", + dev->data->port_id, cur_queue); + mlx5_txq_release(dev, cur_queue); + return -rte_errno; + } + if (!txq_ctrl->obj || !txq_ctrl->obj->sq) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "port %u no txq object found: %d", + dev->data->port_id, cur_queue); + mlx5_txq_release(dev, cur_queue); + return -rte_errno; + } + if (txq_ctrl->hairpin_status) { + rte_errno = EBUSY; + DRV_LOG(ERR, "port %u TX queue %d is already bound", + dev->data->port_id, cur_queue); + mlx5_txq_release(dev, cur_queue); + return -rte_errno; + } + /* + * All queues' of one port consistency checking is done in the + * bind() function, and that is optional. 
+ */ + if (peer_info->tx_explicit != + txq_ctrl->hairpin_conf.tx_explicit) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u TX queue %d and peer TX rule " + "mode mismatch", dev->data->port_id, cur_queue); + mlx5_txq_release(dev, cur_queue); + return -rte_errno; + } + if (peer_info->manual_bind != + txq_ctrl->hairpin_conf.manual_bind) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u TX queue %d and peer binding " + "mode mismatch", dev->data->port_id, cur_queue); + mlx5_txq_release(dev, cur_queue); + return -rte_errno; + } + sq_attr.state = MLX5_SQC_STATE_RDY; + sq_attr.sq_state = MLX5_SQC_STATE_RST; + sq_attr.hairpin_peer_rq = peer_info->qp_id; + sq_attr.hairpin_peer_vhca = peer_info->vhca_id; + ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr); + if (!ret) + txq_ctrl->hairpin_status = 1; + mlx5_txq_release(dev, cur_queue); + } else { + struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; + + rxq_ctrl = mlx5_rxq_get(dev, cur_queue); + if (!rxq_ctrl) { + rte_errno = EINVAL; + return -rte_errno; + } + if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d not a hairpin rxq", + dev->data->port_id, cur_queue); + mlx5_rxq_release(dev, cur_queue); + return -rte_errno; + } + if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "port %u no rxq object found: %d", + dev->data->port_id, cur_queue); + mlx5_rxq_release(dev, cur_queue); + return -rte_errno; + } + if (rxq_ctrl->hairpin_status) { + rte_errno = EBUSY; + DRV_LOG(ERR, "port %u RX queue %d is already bound", + dev->data->port_id, cur_queue); + mlx5_rxq_release(dev, cur_queue); + return -rte_errno; + } + if (peer_info->tx_explicit != + rxq_ctrl->hairpin_conf.tx_explicit) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u RX queue %d and peer TX rule " + "mode mismatch", dev->data->port_id, cur_queue); + mlx5_rxq_release(dev, cur_queue); + return -rte_errno; + } + if (peer_info->manual_bind != + 
rxq_ctrl->hairpin_conf.manual_bind) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d and peer binding " + "mode mismatch", dev->data->port_id, cur_queue); + mlx5_rxq_release(dev, cur_queue); + return -rte_errno; + } + rq_attr.state = MLX5_SQC_STATE_RDY; + rq_attr.rq_state = MLX5_SQC_STATE_RST; + rq_attr.hairpin_peer_sq = peer_info->qp_id; + rq_attr.hairpin_peer_vhca = peer_info->vhca_id; + ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); + if (!ret) + rxq_ctrl->hairpin_status = 1; + mlx5_rxq_release(dev, cur_queue); + } + return ret; +} + +int +mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue, + uint32_t direction) +{ + int ret = 0; + + if (!direction) { + struct mlx5_txq_ctrl *txq_ctrl; + struct mlx5_devx_modify_sq_attr sq_attr = { 0 }; + + txq_ctrl = mlx5_txq_get(dev, cur_queue); + if (!txq_ctrl) { + rte_errno = EINVAL; + return -rte_errno; + } + if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d not a hairpin txq", + dev->data->port_id, cur_queue); + mlx5_txq_release(dev, cur_queue); + return -rte_errno; + } + if (!txq_ctrl->obj || !txq_ctrl->obj->sq) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "port %u no txq object found: %d", + dev->data->port_id, cur_queue); + mlx5_txq_release(dev, cur_queue); + return -rte_errno; + } + /* Already unbound, 0 returns. 
*/ + if (!txq_ctrl->hairpin_status) { + mlx5_txq_release(dev, cur_queue); + DRV_LOG(DEBUG, "port %u TX queue %d is already unbound", + dev->data->port_id, cur_queue); + return 0; + } + sq_attr.state = MLX5_SQC_STATE_RST; + sq_attr.sq_state = MLX5_SQC_STATE_RST; + ret = mlx5_devx_cmd_modify_sq(txq_ctrl->obj->sq, &sq_attr); + if (!ret) + txq_ctrl->hairpin_status = 0; + mlx5_txq_release(dev, cur_queue); + } else { + struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_devx_modify_rq_attr rq_attr = { 0 }; + + rxq_ctrl = mlx5_rxq_get(dev, cur_queue); + if (!rxq_ctrl) { + rte_errno = EINVAL; + return -rte_errno; + } + if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d not a hairpin rxq", + dev->data->port_id, cur_queue); + mlx5_rxq_release(dev, cur_queue); + return -rte_errno; + } + if (!rxq_ctrl->obj || !rxq_ctrl->obj->rq) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "port %u no rxq object found: %d", + dev->data->port_id, cur_queue); + mlx5_rxq_release(dev, cur_queue); + return -rte_errno; + } + if (!rxq_ctrl->hairpin_status) { + mlx5_rxq_release(dev, cur_queue); + DRV_LOG(DEBUG, "port %u RX queue %d is already unbound", + dev->data->port_id, cur_queue); + return 0; + } + rq_attr.state = MLX5_SQC_STATE_RST; + rq_attr.rq_state = MLX5_SQC_STATE_RST; + ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr); + if (!ret) + rxq_ctrl->hairpin_status = 0; + mlx5_rxq_release(dev, cur_queue); + } + return ret; +} + +int +mlx5_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct mlx5_priv *priv = dev->data->dev_private; + int ret = 0; + struct mlx5_txq_ctrl *txq_ctrl; + uint32_t i, j; + struct rte_hairpin_peer_info peer; + struct rte_hairpin_peer_info cur; + const struct rte_eth_hairpin_conf *conf; + uint16_t num_q = 0; + uint16_t local_port = priv->dev_data->port_id; + uint32_t manual; + uint32_t explicit; + uint16_t rx_queue; + + /* + * Before binding TXQ to peer RXQ, first round loop will be used for + * checking the 
queues' configuration consistency. This would be a + little time consuming but better to do the rollback. + */ + for (i = 0; i != priv->txqs_n; i++) { + txq_ctrl = mlx5_txq_get(dev, i); + if (!txq_ctrl) + continue; + if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + mlx5_txq_release(dev, i); + continue; + } + /* + * All hairpin TX queues of a single port that connects to the + * same peer RX port should have the same "auto-bind" and + * "implicit TX rule part" modes. + * Peer consistency checking will be done in per queue binding. + * Only the single port hairpin supports the two modes above. + */ + conf = &txq_ctrl->hairpin_conf; + if (conf->peers[0].port == rx_port) { + if (!num_q) { + manual = conf->manual_bind; + explicit = conf->tx_explicit; + if ((!manual || !explicit) && + rx_port != local_port) { + mlx5_txq_release(dev, i); + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d does " + "not support %s%s with " + "peer port %u", local_port, i, + manual ? "" : "auto-bind/", + explicit ? "" : "TX-implicit", + rx_port); + return -rte_errno; + } + } else { + if (manual != conf->manual_bind || + explicit != conf->tx_explicit) { + mlx5_txq_release(dev, i); + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u queue %d mode " + "mismatch: %u %u, %u %u", + local_port, i, manual, + conf->manual_bind, explicit, + conf->tx_explicit); + return -rte_errno; + } + } + num_q++; + } + mlx5_txq_release(dev, i); + } + /* Once no queue is configured, success is returned directly. */ + if (!num_q) + return ret; + /* All the hairpin TX queues need to be traversed again. */ + for (i = 0; i != priv->txqs_n; i++) { + txq_ctrl = mlx5_txq_get(dev, i); + if (!txq_ctrl) + continue; + if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + mlx5_txq_release(dev, i); + continue; + } + if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) { + mlx5_txq_release(dev, i); + continue; + } + rx_queue = txq_ctrl->hairpin_conf.peers[0].queue; + /* Fetch peer RXQ's information.
*/ + ret = rte_eth_hairpin_queue_peer_update(rx_port, rx_queue, + NULL, &peer, 0); + if (ret) { + mlx5_txq_release(dev, i); + goto error; + } + /* Accessing own device, mlx5 PMD API is enough. */ + ret = mlx5_hairpin_queue_peer_bind(dev, i, &peer, 0); + if (ret) + goto error; + /* Pass TXQ's information to peer RXQ. */ + cur.peer_q = rx_queue; + cur.qp_id = txq_ctrl->obj->sq->id; + cur.vhca_id = priv->config.hca_attr.vhca_id; + cur.tx_explicit = txq_ctrl->hairpin_conf.tx_explicit; + cur.manual_bind = txq_ctrl->hairpin_conf.manual_bind; + /* Accessing another device, RTE level API is needed. */ + ret = rte_eth_hairpin_queue_peer_bind(rx_port, rx_queue, + &cur, 1); + if (ret) + goto error; + mlx5_txq_release(dev, i); + } + return 0; +error: + /* + * Do roll-back process for the bound queues. + * No need to check the return value of the queue unbind function. + */ + for (j = 0; j <= i; j++) { + /* No validation is needed here. */ + txq_ctrl = mlx5_txq_get(dev, j); + rx_queue = txq_ctrl->hairpin_conf.peers[0].queue; + mlx5_txq_release(dev, j); + rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 1); + mlx5_hairpin_queue_peer_unbind(dev, j, 0); + } + return ret; +} + +int +mlx5_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_txq_ctrl *txq_ctrl; + uint32_t i; + int ret; + uint16_t cur_port = priv->dev_data->port_id; + + for (i = 0; i != priv->txqs_n; i++) { + uint16_t rx_queue; + + txq_ctrl = mlx5_txq_get(dev, i); + if (!txq_ctrl) + continue; + if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + mlx5_txq_release(dev, i); + continue; + } + if (txq_ctrl->hairpin_conf.peers[0].port != rx_port) { + mlx5_txq_release(dev, i); + continue; + } + /* Indeed, only the first used queue needs to be checked.
*/ + if (!txq_ctrl->hairpin_conf.manual_bind) { + rte_errno = EINVAL; + DRV_LOG(ERR, "port %u and port %u is in auto-bind mode", + cur_port, rx_port); + mlx5_txq_release(dev, i); + return -rte_errno; + } + rx_queue = txq_ctrl->hairpin_conf.peers[0].queue; + mlx5_txq_release(dev, i); + ret = rte_eth_hairpin_queue_peer_unbind(rx_port, rx_queue, 1); + if (ret) { + DRV_LOG(ERR, "port %u RX queue %d unbind - failure", + rx_port, rx_queue); + return ret; + } + ret = mlx5_hairpin_queue_peer_unbind(dev, i, 0); + if (ret) { + DRV_LOG(ERR, "port %u TX queue %d unbind - failure", + cur_port, i); + return ret; + } + } + return 0; +} + /** * DPDK callback to start the device. * @@ -332,7 +798,7 @@ dev->data->port_id, strerror(rte_errno)); goto error; } - ret = mlx5_hairpin_bind(dev); + ret = mlx5_hairpin_auto_bind(dev); if (ret) { DRV_LOG(ERR, "port %u hairpin binding failed: %s", dev->data->port_id, strerror(rte_errno)); From patchwork Thu Oct 8 14:16:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bing Zhao X-Patchwork-Id: 80060 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B2628A04BC; Thu, 8 Oct 2020 16:18:15 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 379901C135; Thu, 8 Oct 2020 16:17:21 +0200 (CEST) Received: from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130]) by dpdk.org (Postfix) with ESMTP id C1FF31C131 for ; Thu, 8 Oct 2020 16:17:19 +0200 (CEST) From: Bing Zhao To: viacheslavo@mellanox.com, matan@mellanox.com Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com Date: Thu, 8 Oct 2020 22:16:59 +0800 Message-Id: <1602166620-46303-4-git-send-email-bingz@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: 
<1602166620-46303-1-git-send-email-bingz@nvidia.com> References: <1602166620-46303-1-git-send-email-bingz@nvidia.com> Subject: [dpdk-dev] [PATCH 3/4] net/mlx5: conditional hairpin auto bind X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In single-port hairpin mode, after the queues are configured during startup, the binding process is enabled automatically in the port start phase and the default control flow for egress is created. When switching to two-port hairpin mode, the auto binding process should be skipped if there is no TX queue whose peer RX queue is on the same device, and it should also be skipped if the queues are configured with the manual-bind attribute. If the explicit TX flow rule mode is configured, or the hairpin is between two ports, the default control flows for the TX queues should not be created.
Signed-off-by: Bing Zhao --- drivers/net/mlx5/mlx5_trigger.c | 34 ++++++++++++++++++++++++++++++++-- 1 file changed, 32 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index f326b57..77d84dd 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -214,6 +214,8 @@ struct mlx5_devx_obj *rq; unsigned int i; int ret = 0; + bool need_auto = false; + uint16_t self_port = dev->data->port_id; for (i = 0; i != priv->txqs_n; ++i) { txq_ctrl = mlx5_txq_get(dev, i); @@ -223,6 +225,27 @@ mlx5_txq_release(dev, i); continue; } + if (txq_ctrl->hairpin_conf.peers[0].port != self_port) { + mlx5_txq_release(dev, i); + continue; + } + if (txq_ctrl->hairpin_conf.manual_bind) { + mlx5_txq_release(dev, i); + return 0; + } + need_auto = true; + mlx5_txq_release(dev, i); + } + if (!need_auto) + return 0; + for (i = 0; i != priv->txqs_n; ++i) { + txq_ctrl = mlx5_txq_get(dev, i); + if (!txq_ctrl) + continue; + if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) { + mlx5_txq_release(dev, i); + continue; + } if (!txq_ctrl->obj) { rte_errno = ENOMEM; DRV_LOG(ERR, "port %u no txq object found: %d", @@ -798,9 +819,13 @@ dev->data->port_id, strerror(rte_errno)); goto error; } + /* + * Such step will be skipped if there is no hairpin TX queue configured + * with RX peer queue from the same device.
+ */ ret = mlx5_hairpin_auto_bind(dev); if (ret) { - DRV_LOG(ERR, "port %u hairpin binding failed: %s", + DRV_LOG(ERR, "port %u hairpin auto binding failed: %s", dev->data->port_id, strerror(rte_errno)); goto error; } @@ -949,7 +974,10 @@ struct mlx5_txq_ctrl *txq_ctrl = mlx5_txq_get(dev, i); if (!txq_ctrl) continue; - if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN) { + if (txq_ctrl->type == MLX5_TXQ_TYPE_HAIRPIN && + !txq_ctrl->hairpin_conf.manual_bind && + txq_ctrl->hairpin_conf.peers[0].port == + priv->dev_data->port_id) { ret = mlx5_ctrl_flow_source_queue(dev, i); if (ret) { mlx5_txq_release(dev, i); From patchwork Thu Oct 8 14:17:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bing Zhao X-Patchwork-Id: 80061 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6087AA04BC; Thu, 8 Oct 2020 16:18:39 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 34DCA1C194; Thu, 8 Oct 2020 16:17:25 +0200 (CEST) Received: from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130]) by dpdk.org (Postfix) with ESMTP id 941A31C194 for ; Thu, 8 Oct 2020 16:17:23 +0200 (CEST) From: Bing Zhao To: viacheslavo@mellanox.com, matan@mellanox.com Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com Date: Thu, 8 Oct 2020 22:17:00 +0800 Message-Id: <1602166620-46303-5-git-send-email-bingz@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1602166620-46303-1-git-send-email-bingz@nvidia.com> References: <1602166620-46303-1-git-send-email-bingz@nvidia.com> Subject: [dpdk-dev] [PATCH 4/4] doc: update hairpin support for mlx5 driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Hairpin between two ports will be supported by mlx5 PMD. Signed-off-by: Bing Zhao --- doc/guides/rel_notes/release_20_11.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 05ceea0..454472b 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -70,6 +70,11 @@ New Features * Added support for non-zero priorities for group 0 flows * Added support for VXLAN decap combined with VLAN pop +* **Updated Nvidia mlx5 driver.** + + * Added support for hairpin between two ports and hairpin explicit + TX flow rules insertion. + * **Updated Solarflare network PMD.** Updated the Solarflare ``sfc_efx`` driver with changes including: