From patchwork Wed Jan 3 10:10:53 2024
X-Patchwork-Submitter: Mingjin Ye
X-Patchwork-Id: 135703
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingjin Ye
To: dev@dpdk.org
Cc: qiming.yang@intel.com, Mingjin Ye, stable@dpdk.org, Jingjing Wu, Beilei Xing
Subject: [PATCH v8 1/2] net/iavf: fix Rx/Tx burst in multi-process
Date: Wed, 3 Jan 2024 10:10:53 +0000
Message-Id: <20240103101054.1330081-2-mingjinx.ye@intel.com>
In-Reply-To: <20240103101054.1330081-1-mingjinx.ye@intel.com>
References: <20240102105211.788819-3-mingjinx.ye@intel.com> <20240103101054.1330081-1-mingjinx.ye@intel.com>
List-Id: DPDK patches and discussions

In a multi-process environment, a secondary process operates on shared
memory and overwrites the Rx/Tx burst function pointers of the primary
process. The addresses it writes are valid only in the secondary
process's address space, so the primary process crashes the next time
it dereferences them in an Rx/Tx burst. Fix this by storing only a
burst-type enum in shared memory and resolving it through a
per-process ops array.

Fixes: 5b3124a0a6ef ("net/iavf: support no polling when link down")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye
---
v2: Add fix for Rx burst.
---
v3: Fix Rx/Tx routing.
---
v4: Fix the ops array.
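The pattern the patch applies reduces to a short standalone sketch:
the shared adapter state stores only a plain enum, and each process
resolves it through its own copy of a const ops table, so no function
pointer ever crosses address spaces. All names below (shared_adapter,
rx_ops, the stub burst functions) are hypothetical illustrations, not
the driver's symbols.

/*
 * Sketch: per-process function table indexed by an enum that lives in
 * shared memory. Secondary and primary each have their own rx_ops[]
 * with addresses valid in their own address space.
 */
#include <stdint.h>
#include <stdio.h>

enum rx_burst_type { RX_DEFAULT, RX_VEC, RX_BURST_MAX };

typedef uint16_t (*rx_burst_t)(void *rxq, void **pkts, uint16_t n);

static uint16_t rx_default(void *rxq, void **pkts, uint16_t n)
{ (void)rxq; (void)pkts; return n; }
static uint16_t rx_vec(void *rxq, void **pkts, uint16_t n)
{ (void)rxq; (void)pkts; return n; }

/* Per-process copy: each process links its own valid local addresses. */
static const rx_burst_t rx_ops[RX_BURST_MAX] = {
	[RX_DEFAULT] = rx_default,
	[RX_VEC]     = rx_vec,
};

/* The only thing shared between processes is the enum. */
struct shared_adapter { enum rx_burst_type rx_type; };

static uint16_t rx_burst(struct shared_adapter *ad, void *rxq,
			 void **pkts, uint16_t n)
{
	/* Index the process-local table; safe in primary and secondary. */
	return rx_ops[ad->rx_type](rxq, pkts, n);
}

int main(void)
{
	struct shared_adapter ad = { .rx_type = RX_VEC };
	printf("received %u\n", (unsigned)rx_burst(&ad, NULL, NULL, 32));
	return 0;
}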
---
 drivers/net/iavf/iavf.h      |  42 +++++++-
 drivers/net/iavf/iavf_rxtx.c | 182 ++++++++++++++++++++++++-----------
 2 files changed, 166 insertions(+), 58 deletions(-)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 10868f2c30..73a089c199 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -313,6 +313,44 @@ struct iavf_devargs {
 
 struct iavf_security_ctx;
 
+enum iavf_rx_burst_type {
+	IAVF_RX_DEFAULT,
+	IAVF_RX_FLEX_RXD,
+	IAVF_RX_BULK_ALLOC,
+	IAVF_RX_SCATTERED,
+	IAVF_RX_SCATTERED_FLEX_RXD,
+	IAVF_RX_SSE,
+	IAVF_RX_AVX2,
+	IAVF_RX_AVX2_OFFLOAD,
+	IAVF_RX_SSE_FLEX_RXD,
+	IAVF_RX_AVX2_FLEX_RXD,
+	IAVF_RX_AVX2_FLEX_RXD_OFFLOAD,
+	IAVF_RX_SSE_SCATTERED,
+	IAVF_RX_AVX2_SCATTERED,
+	IAVF_RX_AVX2_SCATTERED_OFFLOAD,
+	IAVF_RX_SSE_SCATTERED_FLEX_RXD,
+	IAVF_RX_AVX2_SCATTERED_FLEX_RXD,
+	IAVF_RX_AVX2_SCATTERED_FLEX_RXD_OFFLOAD,
+	IAVF_RX_AVX512,
+	IAVF_RX_AVX512_OFFLOAD,
+	IAVF_RX_AVX512_FLEX_RXD,
+	IAVF_RX_AVX512_FLEX_RXD_OFFLOAD,
+	IAVF_RX_AVX512_SCATTERED,
+	IAVF_RX_AVX512_SCATTERED_OFFLOAD,
+	IAVF_RX_AVX512_SCATTERED_FLEX_RXD,
+	IAVF_RX_AVX512_SCATTERED_FLEX_RXD_OFFLOAD,
+};
+
+enum iavf_tx_burst_type {
+	IAVF_TX_DEFAULT,
+	IAVF_TX_SSE,
+	IAVF_TX_AVX2,
+	IAVF_TX_AVX2_OFFLOAD,
+	IAVF_TX_AVX512,
+	IAVF_TX_AVX512_OFFLOAD,
+	IAVF_TX_AVX512_CTX_OFFLOAD,
+};
+
 /* Structure to store private data for each VF instance. */
 struct iavf_adapter {
 	struct iavf_hw hw;
@@ -328,8 +366,8 @@ struct iavf_adapter {
 	bool stopped;
 	bool closed;
 	bool no_poll;
-	eth_rx_burst_t rx_pkt_burst;
-	eth_tx_burst_t tx_pkt_burst;
+	enum iavf_rx_burst_type rx_burst_type;
+	enum iavf_tx_burst_type tx_burst_type;
 	uint16_t fdir_ref_cnt;
 	struct iavf_devargs devargs;
 };
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f19aa14646..89db82c694 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3707,15 +3707,77 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	return i;
 }
 
+static
+const eth_rx_burst_t iavf_rx_pkt_burst_ops[] = {
+	[IAVF_RX_DEFAULT] = iavf_recv_pkts,
+	[IAVF_RX_FLEX_RXD] = iavf_recv_pkts_flex_rxd,
+	[IAVF_RX_BULK_ALLOC] = iavf_recv_pkts_bulk_alloc,
+	[IAVF_RX_SCATTERED] = iavf_recv_scattered_pkts,
+	[IAVF_RX_SCATTERED_FLEX_RXD] = iavf_recv_scattered_pkts_flex_rxd,
+#ifdef RTE_ARCH_X86
+	[IAVF_RX_SSE] = iavf_recv_pkts_vec,
+	[IAVF_RX_AVX2] = iavf_recv_pkts_vec_avx2,
+	[IAVF_RX_AVX2_OFFLOAD] = iavf_recv_pkts_vec_avx2_offload,
+	[IAVF_RX_SSE_FLEX_RXD] = iavf_recv_pkts_vec_flex_rxd,
+	[IAVF_RX_AVX2_FLEX_RXD] = iavf_recv_pkts_vec_avx2_flex_rxd,
+	[IAVF_RX_AVX2_FLEX_RXD_OFFLOAD] =
+		iavf_recv_pkts_vec_avx2_flex_rxd_offload,
+	[IAVF_RX_SSE_SCATTERED] = iavf_recv_scattered_pkts_vec,
+	[IAVF_RX_AVX2_SCATTERED] = iavf_recv_scattered_pkts_vec_avx2,
+	[IAVF_RX_AVX2_SCATTERED_OFFLOAD] =
+		iavf_recv_scattered_pkts_vec_avx2_offload,
+	[IAVF_RX_SSE_SCATTERED_FLEX_RXD] =
+		iavf_recv_scattered_pkts_vec_flex_rxd,
+	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD] =
+		iavf_recv_scattered_pkts_vec_avx2_flex_rxd,
+	[IAVF_RX_AVX2_SCATTERED_FLEX_RXD_OFFLOAD] =
+		iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload,
+#ifdef CC_AVX512_SUPPORT
+	[IAVF_RX_AVX512] = iavf_recv_pkts_vec_avx512,
+	[IAVF_RX_AVX512_OFFLOAD] = iavf_recv_pkts_vec_avx512_offload,
+	[IAVF_RX_AVX512_FLEX_RXD] = iavf_recv_pkts_vec_avx512_flex_rxd,
+	[IAVF_RX_AVX512_FLEX_RXD_OFFLOAD] =
+		iavf_recv_pkts_vec_avx512_flex_rxd_offload,
+	[IAVF_RX_AVX512_SCATTERED] = iavf_recv_scattered_pkts_vec_avx512,
+	[IAVF_RX_AVX512_SCATTERED_OFFLOAD] =
+		iavf_recv_scattered_pkts_vec_avx512_offload,
+	[IAVF_RX_AVX512_SCATTERED_FLEX_RXD] =
+		iavf_recv_scattered_pkts_vec_avx512_flex_rxd,
+	[IAVF_RX_AVX512_SCATTERED_FLEX_RXD_OFFLOAD] =
+		iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload,
+#endif
+#elif defined RTE_ARCH_ARM
+	[IAVF_RX_SSE] = iavf_recv_pkts_vec,
+#endif
+};
+
+static
+const eth_tx_burst_t iavf_tx_pkt_burst_ops[] = {
+	[IAVF_TX_DEFAULT] = iavf_xmit_pkts,
+#ifdef RTE_ARCH_X86
+	[IAVF_TX_SSE] = iavf_xmit_pkts_vec,
+	[IAVF_TX_AVX2] = iavf_xmit_pkts_vec_avx2,
+	[IAVF_TX_AVX2_OFFLOAD] = iavf_xmit_pkts_vec_avx2_offload,
+#ifdef CC_AVX512_SUPPORT
+	[IAVF_TX_AVX512] = iavf_xmit_pkts_vec_avx512,
+	[IAVF_TX_AVX512_OFFLOAD] = iavf_xmit_pkts_vec_avx512_offload,
+	[IAVF_TX_AVX512_CTX_OFFLOAD] = iavf_xmit_pkts_vec_avx512_ctx_offload,
+#endif
+#endif
+};
+
 static uint16_t
 iavf_recv_pkts_no_poll(void *rx_queue, struct rte_mbuf **rx_pkts,
 				uint16_t nb_pkts)
 {
 	struct iavf_rx_queue *rxq = rx_queue;
+	enum iavf_rx_burst_type rx_burst_type =
+		rxq->vsi->adapter->rx_burst_type;
+
 	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
 		return 0;
 
-	return rxq->vsi->adapter->rx_pkt_burst(rx_queue,
+	return iavf_rx_pkt_burst_ops[rx_burst_type](rx_queue,
 						rx_pkts, nb_pkts);
 }
 
@@ -3724,10 +3786,13 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
 				uint16_t nb_pkts)
 {
 	struct iavf_tx_queue *txq = tx_queue;
+	enum iavf_tx_burst_type tx_burst_type =
+		txq->vsi->adapter->tx_burst_type;
+
 	if (!txq->vsi || txq->vsi->adapter->no_poll)
 		return 0;
 
-	return txq->vsi->adapter->tx_pkt_burst(tx_queue,
+	return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue,
 						tx_pkts, nb_pkts);
 }
 
@@ -3738,6 +3803,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
 	struct iavf_adapter *adapter =
 		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	enum iavf_rx_burst_type rx_burst_type;
 	int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
 	int i;
 	struct iavf_rx_queue *rxq;
@@ -3808,43 +3874,43 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
 		}
 	}
 	if (use_flex) {
-		dev->rx_pkt_burst = iavf_recv_scattered_pkts_vec_flex_rxd;
+		rx_burst_type = IAVF_RX_SSE_SCATTERED_FLEX_RXD;
 		if (use_avx2) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx2_flex_rxd;
+				rx_burst_type =
+					IAVF_RX_AVX2_SCATTERED_FLEX_RXD;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx2_flex_rxd_offload;
+				rx_burst_type =
+					IAVF_RX_AVX2_SCATTERED_FLEX_RXD_OFFLOAD;
 		}
 #ifdef CC_AVX512_SUPPORT
 		if (use_avx512) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx512_flex_rxd;
+				rx_burst_type =
+					IAVF_RX_AVX512_SCATTERED_FLEX_RXD;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload;
+				rx_burst_type =
+					IAVF_RX_AVX512_SCATTERED_FLEX_RXD_OFFLOAD;
 		}
 #endif
 	} else {
-		dev->rx_pkt_burst = iavf_recv_scattered_pkts_vec;
+		rx_burst_type = IAVF_RX_SSE_SCATTERED;
 		if (use_avx2) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx2;
+				rx_burst_type =
+					IAVF_RX_AVX2_SCATTERED;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx2_offload;
+				rx_burst_type =
+					IAVF_RX_AVX2_SCATTERED_OFFLOAD;
 		}
 #ifdef CC_AVX512_SUPPORT
 		if (use_avx512) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx512;
+				rx_burst_type =
+					IAVF_RX_AVX512_SCATTERED;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_scattered_pkts_vec_avx512_offload;
+				rx_burst_type =
+					IAVF_RX_AVX512_SCATTERED_OFFLOAD;
 		}
 #endif
 	}
@@ -3874,51 +3940,46 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
 		}
 	}
 	if (use_flex) {
-		dev->rx_pkt_burst = iavf_recv_pkts_vec_flex_rxd;
+		rx_burst_type = IAVF_RX_SSE_FLEX_RXD;
 		if (use_avx2) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx2_flex_rxd;
+				rx_burst_type = IAVF_RX_AVX2_FLEX_RXD;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx2_flex_rxd_offload;
+				rx_burst_type = IAVF_RX_AVX2_FLEX_RXD_OFFLOAD;
 		}
 #ifdef CC_AVX512_SUPPORT
 		if (use_avx512) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx512_flex_rxd;
+				rx_burst_type = IAVF_RX_AVX512_FLEX_RXD;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx512_flex_rxd_offload;
+				rx_burst_type =
+					IAVF_RX_AVX512_FLEX_RXD_OFFLOAD;
 		}
 #endif
 	} else {
-		dev->rx_pkt_burst = iavf_recv_pkts_vec;
+		rx_burst_type = IAVF_RX_SSE;
 		if (use_avx2) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx2;
+				rx_burst_type = IAVF_RX_AVX2;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx2_offload;
+				rx_burst_type = IAVF_RX_AVX2_OFFLOAD;
 		}
 #ifdef CC_AVX512_SUPPORT
 		if (use_avx512) {
 			if (check_ret == IAVF_VECTOR_PATH)
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx512;
+				rx_burst_type = IAVF_RX_AVX512;
 			else
-				dev->rx_pkt_burst =
-					iavf_recv_pkts_vec_avx512_offload;
+				rx_burst_type = IAVF_RX_AVX512_OFFLOAD;
 		}
 #endif
 	}
 	}
 
 	if (no_poll_on_link_down) {
-		adapter->rx_pkt_burst = dev->rx_pkt_burst;
+		adapter->rx_burst_type = rx_burst_type;
 		dev->rx_pkt_burst = iavf_recv_pkts_no_poll;
+	} else {
+		dev->rx_pkt_burst = iavf_rx_pkt_burst_ops[rx_burst_type];
 	}
 	return;
 }
@@ -3934,11 +3995,13 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
 		rxq = dev->data->rx_queues[i];
 		(void)iavf_rxq_vec_setup(rxq);
 	}
-	dev->rx_pkt_burst = iavf_recv_pkts_vec;
+	rx_burst_type = IAVF_RX_SSE;
 
 	if (no_poll_on_link_down) {
-		adapter->rx_pkt_burst = dev->rx_pkt_burst;
+		adapter->rx_burst_type = rx_burst_type;
 		dev->rx_pkt_burst = iavf_recv_pkts_no_poll;
+	} else {
+		dev->rx_pkt_burst = iavf_rx_pkt_burst_ops[rx_burst_type];
 	}
 	return;
 }
@@ -3947,25 +4010,27 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).",
 			    dev->data->port_id);
 		if (use_flex)
-			dev->rx_pkt_burst = iavf_recv_scattered_pkts_flex_rxd;
+			rx_burst_type = IAVF_RX_SCATTERED_FLEX_RXD;
 		else
-			dev->rx_pkt_burst = iavf_recv_scattered_pkts;
+			rx_burst_type = IAVF_RX_SCATTERED;
 	} else if (adapter->rx_bulk_alloc_allowed) {
 		PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).",
 			    dev->data->port_id);
-		dev->rx_pkt_burst = iavf_recv_pkts_bulk_alloc;
+		rx_burst_type = IAVF_RX_BULK_ALLOC;
 	} else {
 		PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).",
 			    dev->data->port_id);
 		if (use_flex)
-			dev->rx_pkt_burst = iavf_recv_pkts_flex_rxd;
+			rx_burst_type = IAVF_RX_FLEX_RXD;
 		else
-			dev->rx_pkt_burst = iavf_recv_pkts;
+			rx_burst_type = IAVF_RX_DEFAULT;
 	}
 
 	if (no_poll_on_link_down) {
-		adapter->rx_pkt_burst = dev->rx_pkt_burst;
+		adapter->rx_burst_type = rx_burst_type;
 		dev->rx_pkt_burst = iavf_recv_pkts_no_poll;
+	} else {
+		dev->rx_pkt_burst = iavf_rx_pkt_burst_ops[rx_burst_type];
 	}
 }
 
@@ -3975,6 +4040,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct iavf_adapter *adapter =
 		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	enum iavf_tx_burst_type tx_burst_type;
 	int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
#ifdef RTE_ARCH_X86
 	struct iavf_tx_queue *txq;
@@ -4010,11 +4076,11 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 	if (use_sse) {
 		PMD_DRV_LOG(DEBUG, "Using Vector Tx (port %d).",
 			    dev->data->port_id);
-		dev->tx_pkt_burst = iavf_xmit_pkts_vec;
+		tx_burst_type = IAVF_TX_SSE;
 	}
 	if (use_avx2) {
 		if (check_ret == IAVF_VECTOR_PATH) {
-			dev->tx_pkt_burst = iavf_xmit_pkts_vec_avx2;
+			tx_burst_type = IAVF_TX_AVX2;
 			PMD_DRV_LOG(DEBUG, "Using AVX2 Vector Tx (port %d).",
 				    dev->data->port_id);
 		} else if (check_ret == IAVF_VECTOR_CTX_OFFLOAD_PATH) {
@@ -4022,7 +4088,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 				"AVX2 does not support outer checksum offload.");
 			goto normal;
 		} else {
-			dev->tx_pkt_burst = iavf_xmit_pkts_vec_avx2_offload;
+			tx_burst_type = IAVF_TX_AVX2_OFFLOAD;
 			dev->tx_pkt_prepare = iavf_prep_pkts;
 			PMD_DRV_LOG(DEBUG, "Using AVX2 OFFLOAD Vector Tx (port %d).",
 				    dev->data->port_id);
@@ -4031,16 +4097,16 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 #ifdef CC_AVX512_SUPPORT
 	if (use_avx512) {
 		if (check_ret == IAVF_VECTOR_PATH) {
-			dev->tx_pkt_burst = iavf_xmit_pkts_vec_avx512;
+			tx_burst_type = IAVF_TX_AVX512;
 			PMD_DRV_LOG(DEBUG, "Using AVX512 Vector Tx (port %d).",
 				    dev->data->port_id);
 		} else if (check_ret == IAVF_VECTOR_OFFLOAD_PATH) {
-			dev->tx_pkt_burst = iavf_xmit_pkts_vec_avx512_offload;
+			tx_burst_type = IAVF_TX_AVX512_OFFLOAD;
 			dev->tx_pkt_prepare = iavf_prep_pkts;
 			PMD_DRV_LOG(DEBUG, "Using AVX512 OFFLOAD Vector Tx (port %d).",
 				    dev->data->port_id);
 		} else {
-			dev->tx_pkt_burst = iavf_xmit_pkts_vec_avx512_ctx_offload;
+			tx_burst_type = IAVF_TX_AVX512_CTX_OFFLOAD;
 			dev->tx_pkt_prepare = iavf_prep_pkts;
 			PMD_DRV_LOG(DEBUG, "Using AVX512 CONTEXT OFFLOAD Vector Tx (port %d).",
 				    dev->data->port_id);
@@ -4063,8 +4129,10 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 	}
 
 	if (no_poll_on_link_down) {
-		adapter->tx_pkt_burst = dev->tx_pkt_burst;
+		adapter->tx_burst_type = tx_burst_type;
 		dev->tx_pkt_burst = iavf_xmit_pkts_no_poll;
+	} else {
+		dev->tx_pkt_burst = iavf_tx_pkt_burst_ops[tx_burst_type];
 	}
 	return;
 }
@@ -4073,12 +4141,14 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 #endif
 	PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).",
 		    dev->data->port_id);
-	dev->tx_pkt_burst = iavf_xmit_pkts;
+	tx_burst_type = IAVF_TX_DEFAULT;
 	dev->tx_pkt_prepare = iavf_prep_pkts;
 
 	if (no_poll_on_link_down) {
-		adapter->tx_pkt_burst = dev->tx_pkt_burst;
+		adapter->tx_burst_type = tx_burst_type;
 		dev->tx_pkt_burst = iavf_xmit_pkts_no_poll;
+	} else {
+		dev->tx_pkt_burst = iavf_tx_pkt_burst_ops[tx_burst_type];
 	}
 }

From patchwork Wed Jan 3 10:10:54 2024
X-Patchwork-Submitter: Mingjin Ye
X-Patchwork-Id: 135704
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingjin Ye
To: dev@dpdk.org
Cc: qiming.yang@intel.com, Mingjin Ye, Simei Su, Wenjun Wu, Yuying Zhang, Beilei Xing, Jingjing Wu
Subject: [PATCH v8 2/2] net/iavf: add diagnostic support in TX path
Date: Wed, 3 Jan 2024 10:10:54 +0000
Message-Id: <20240103101054.1330081-3-mingjinx.ye@intel.com>
In-Reply-To: <20240103101054.1330081-1-mingjinx.ye@intel.com>
References: <20240102105211.788819-3-mingjinx.ye@intel.com> <20240103101054.1330081-1-mingjinx.ye@intel.com>
List-Id: DPDK patches and discussions

The only way to enable diagnostics for the Tx path is to modify the
application source code, which makes it difficult to diagnose faults.

This patch introduces the devarg option "mbuf_check", whose parameters
enable the corresponding diagnostics. Supported cases: mbuf, size,
segment, offload.

1. mbuf: check for corrupted mbuf.
2. size: check min/max packet length according to the HW spec.
3. segment: check that the number of mbuf segments does not exceed the HW limit.
4. offload: check for any unsupported offload flag.

Parameter format: mbuf_check=[mbuf,<case1>,<case2>]
e.g. dpdk-testpmd -a 0000:81:01.0,mbuf_check=[mbuf,size] -- -i

Signed-off-by: Mingjin Ye
---
v2: Remove call chain.
---
v3: Optimisation implementation.
---
v4: Fix Windows OS compilation error.
---
v5: Split patch.
---
v6: Remove strict.
---
v7: Modify the description document.
---
 doc/guides/nics/intel_vf.rst   |  9 ++++
 drivers/net/iavf/iavf.h        | 12 +++++
 drivers/net/iavf/iavf_ethdev.c | 76 ++++++++++++++++++++++++++
 drivers/net/iavf/iavf_rxtx.c   | 98 ++++++++++++++++++++++++++++++++++
 drivers/net/iavf/iavf_rxtx.h   |  2 +
 5 files changed, 197 insertions(+)
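As a reference for the parameter format described above, the following
standalone sketch models the bracket-stripping, comma-splitting parse
that the driver-side kvargs handler (iavf_parse_mbuf_check(), shown
later in this patch) performs. The flag macros and main() here are
hypothetical stand-ins, not the driver's definitions.

/* Sketch: accept "mbuf" or "[mbuf,size]" and OR the matching bits. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define F_TX_MBUF    (1ULL << 0)
#define F_TX_SIZE    (1ULL << 1)
#define F_TX_SEGMENT (1ULL << 2)
#define F_TX_OFFLOAD (1ULL << 3)

static int parse_mbuf_check(const char *value, uint64_t *flags)
{
	char *str2 = strdup(value);
	char *cur, *tmp;
	int len;

	if (str2 == NULL)
		return -1;
	len = (int)strlen(str2);
	/* Strip an optional [ ] wrapper around the case list. */
	if (len >= 2 && str2[0] == '[' && str2[len - 1] == ']') {
		if (len < 3) {          /* "[]" carries no case names */
			free(str2);
			return -1;
		}
		memmove(str2, str2 + 1, len - 2);
		str2[len - 2] = '\0';
	}
	/* Split on ',' and translate each case name into a flag bit. */
	for (cur = strtok_r(str2, ",", &tmp); cur != NULL;
	     cur = strtok_r(NULL, ",", &tmp)) {
		if (!strcmp(cur, "mbuf"))
			*flags |= F_TX_MBUF;
		else if (!strcmp(cur, "size"))
			*flags |= F_TX_SIZE;
		else if (!strcmp(cur, "segment"))
			*flags |= F_TX_SEGMENT;
		else if (!strcmp(cur, "offload"))
			*flags |= F_TX_OFFLOAD;
		else
			fprintf(stderr, "unsupported check type: %s\n", cur);
	}
	free(str2);
	return 0;
}

int main(void)
{
	uint64_t flags = 0;

	parse_mbuf_check("[mbuf,size]", &flags);
	printf("flags = 0x%llx\n", (unsigned long long)flags); /* 0x3 */
	return 0;
}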
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index ad08198f0f..bda6648726 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -111,6 +111,15 @@ For more detail on SR-IOV, please refer to the following documents:
     by setting the ``devargs`` parameter like ``-a 18:01.0,no-poll-on-link-down=1`` when IAVF is backed
     by an Intel\ |reg| E810 device or an Intel\ |reg| 700 Series Ethernet device.
 
+    When IAVF is backed by an Intel\ |reg| E810 device or an Intel\ |reg| 700 Series Ethernet device,
+    set the ``devargs`` parameter ``mbuf_check`` to enable TX diagnostics. For example,
+    ``-a 18:01.0,mbuf_check=mbuf`` or ``-a 18:01.0,mbuf_check=[mbuf,size]``. Supported cases:
+
+    * mbuf: Check for corrupted mbuf.
+    * size: Check min/max packet length according to the HW spec.
+    * segment: Check that the number of mbuf segments does not exceed the HW limit.
+    * offload: Check for any unsupported offload flag.
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 73a089c199..6535b624cb 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -113,9 +113,14 @@ struct iavf_ipsec_crypto_stats {
 	} ierrors;
 };
 
+struct iavf_mbuf_stats {
+	uint64_t tx_pkt_errors;
+};
+
 struct iavf_eth_xstats {
 	struct virtchnl_eth_stats eth_stats;
 	struct iavf_ipsec_crypto_stats ips_stats;
+	struct iavf_mbuf_stats mbuf_stats;
 };
 
 /* Structure that defines a VSI, associated with a adapter. */
@@ -309,6 +314,7 @@ struct iavf_devargs {
 	uint32_t watchdog_period;
 	int auto_reset;
 	int no_poll_on_link_down;
+	int mbuf_check;
 };
 
 struct iavf_security_ctx;
@@ -351,6 +357,11 @@ enum iavf_tx_burst_type {
 	IAVF_TX_AVX512_CTX_OFFLOAD,
 };
 
+#define IAVF_MBUF_CHECK_F_TX_MBUF	(1ULL << 0)
+#define IAVF_MBUF_CHECK_F_TX_SIZE	(1ULL << 1)
+#define IAVF_MBUF_CHECK_F_TX_SEGMENT	(1ULL << 2)
+#define IAVF_MBUF_CHECK_F_TX_OFFLOAD	(1ULL << 3)
+
 /* Structure to store private data for each VF instance. */
 struct iavf_adapter {
 	struct iavf_hw hw;
@@ -368,6 +379,7 @@ struct iavf_adapter {
 	bool no_poll;
 	enum iavf_rx_burst_type rx_burst_type;
 	enum iavf_tx_burst_type tx_burst_type;
+	uint64_t mc_flags; /* mbuf check flags. */
 	uint16_t fdir_ref_cnt;
 	struct iavf_devargs devargs;
 };
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index d1edb0dd5c..7d1cd9050b 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -39,6 +40,8 @@
 #define IAVF_RESET_WATCHDOG_ARG "watchdog_period"
 #define IAVF_ENABLE_AUTO_RESET_ARG "auto_reset"
 #define IAVF_NO_POLL_ON_LINK_DOWN_ARG "no-poll-on-link-down"
+#define IAVF_MBUF_CHECK_ARG "mbuf_check"
+
 uint64_t iavf_timestamp_dynflag;
 int iavf_timestamp_dynfield_offset = -1;
@@ -48,6 +51,7 @@ static const char * const iavf_valid_args[] = {
 	IAVF_RESET_WATCHDOG_ARG,
 	IAVF_ENABLE_AUTO_RESET_ARG,
 	IAVF_NO_POLL_ON_LINK_DOWN_ARG,
+	IAVF_MBUF_CHECK_ARG,
 	NULL
 };
@@ -174,6 +178,7 @@ static const struct rte_iavf_xstats_name_off rte_iavf_stats_strings[] = {
 	{"tx_broadcast_packets", _OFF_OF(eth_stats.tx_broadcast)},
 	{"tx_dropped_packets", _OFF_OF(eth_stats.tx_discards)},
 	{"tx_error_packets", _OFF_OF(eth_stats.tx_errors)},
+	{"tx_mbuf_error_packets", _OFF_OF(mbuf_stats.tx_pkt_errors)},
 
 	{"inline_ipsec_crypto_ipackets", _OFF_OF(ips_stats.icount)},
 	{"inline_ipsec_crypto_ibytes", _OFF_OF(ips_stats.ibytes)},
@@ -1837,6 +1842,9 @@ iavf_dev_xstats_reset(struct rte_eth_dev *dev)
 	iavf_dev_stats_reset(dev);
 	memset(&vf->vsi.eth_stats_offset.ips_stats, 0,
 	       sizeof(struct iavf_ipsec_crypto_stats));
+	memset(&vf->vsi.eth_stats_offset.mbuf_stats, 0,
+	       sizeof(struct iavf_mbuf_stats));
+
 	return 0;
 }
@@ -1876,6 +1884,19 @@ iavf_dev_update_ipsec_xstats(struct rte_eth_dev *ethdev,
 	}
 }
 
+static void
+iavf_dev_update_mbuf_stats(struct rte_eth_dev *ethdev,
+		struct iavf_mbuf_stats *mbuf_stats)
+{
+	uint16_t idx;
+	struct iavf_tx_queue *txq;
+
+	for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
+		txq = ethdev->data->tx_queues[idx];
+		mbuf_stats->tx_pkt_errors += txq->mbuf_errors;
+	}
+}
+
 static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
 			       struct rte_eth_xstat *xstats, unsigned int n)
 {
@@ -1904,6 +1925,9 @@ static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
 	if (iavf_ipsec_crypto_supported(adapter))
 		iavf_dev_update_ipsec_xstats(dev, &iavf_xtats.ips_stats);
 
+	if (adapter->devargs.mbuf_check)
+		iavf_dev_update_mbuf_stats(dev, &iavf_xtats.mbuf_stats);
+
 	/* loop over xstats array and values from pstats */
 	for (i = 0; i < IAVF_NB_XSTATS; i++) {
 		xstats[i].id = i;
@@ -2286,6 +2310,50 @@ iavf_parse_watchdog_period(__rte_unused const char *key, const char *value, void *args)
 	return 0;
 }
 
+static int
+iavf_parse_mbuf_check(__rte_unused const char *key, const char *value, void *args)
+{
+	char *cur;
+	char *tmp;
+	int str_len;
+	int valid_len;
+
+	int ret = 0;
+	uint64_t *mc_flags = args;
+	char *str2 = strdup(value);
+	if (str2 == NULL)
+		return -1;
+
+	str_len = strlen(str2);
+	if (str2[0] == '[' && str2[str_len - 1] == ']') {
+		if (str_len < 3) {
+			ret = -1;
+			goto mdd_end;
+		}
+		valid_len = str_len - 2;
+		memmove(str2, str2 + 1, valid_len);
+		memset(str2 + valid_len, '\0', 2);
+	}
+	cur = strtok_r(str2, ",", &tmp);
+	while (cur != NULL) {
+		if (!strcmp(cur, "mbuf"))
+			*mc_flags |= IAVF_MBUF_CHECK_F_TX_MBUF;
+		else if (!strcmp(cur, "size"))
+			*mc_flags |= IAVF_MBUF_CHECK_F_TX_SIZE;
+		else if (!strcmp(cur, "segment"))
+			*mc_flags |= IAVF_MBUF_CHECK_F_TX_SEGMENT;
+		else if (!strcmp(cur, "offload"))
+			*mc_flags |= IAVF_MBUF_CHECK_F_TX_OFFLOAD;
+		else
+			PMD_DRV_LOG(ERR, "Unsupported mdd check type: %s", cur);
+		cur = strtok_r(NULL, ",", &tmp);
+	}
+
+mdd_end:
+	free(str2);
+	return ret;
+}
+
 static int iavf_parse_devargs(struct rte_eth_dev *dev)
 {
 	struct iavf_adapter *ad =
@@ -2340,6 +2408,14 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
 		goto bail;
 	}
 
+	ret = rte_kvargs_process(kvlist, IAVF_MBUF_CHECK_ARG,
+				 &iavf_parse_mbuf_check, &ad->mc_flags);
+	if (ret)
+		goto bail;
+
+	if (ad->mc_flags)
+		ad->devargs.mbuf_check = 1;
+
 	ret = rte_kvargs_process(kvlist, IAVF_ENABLE_AUTO_RESET_ARG,
 				 &parse_bool, &ad->devargs.auto_reset);
 	if (ret)
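Before the iavf_rxtx.c changes below, note the install order the patch
establishes in iavf_set_tx_function(): the no-poll wrapper takes
precedence, then the mbuf-check wrapper, and otherwise the burst
function resolved from the ops table is installed directly. A minimal
standalone model of that selection follows; all names are hypothetical
illustrations, not the driver's symbols.

#include <stdint.h>
#include <stdio.h>

typedef uint16_t (*tx_burst_t)(void *txq, void **pkts, uint16_t n);

static uint16_t tx_plain(void *q, void **p, uint16_t n)
{ (void)q; (void)p; return n; }
static uint16_t tx_no_poll(void *q, void **p, uint16_t n)
{ (void)q; (void)p; return n; }
static uint16_t tx_check(void *q, void **p, uint16_t n)
{ (void)q; (void)p; return n; }

static const tx_burst_t tx_ops[] = { tx_plain };

/* Selection mirrors the patch: no-poll wins over mbuf-check, which
 * wins over installing the table entry directly. */
static tx_burst_t select_tx_burst(int no_poll_on_link_down,
				  int mbuf_check, int burst_type)
{
	if (no_poll_on_link_down)
		return tx_no_poll;
	if (mbuf_check)
		return tx_check;
	return tx_ops[burst_type];
}

int main(void)
{
	tx_burst_t f = select_tx_burst(0, 1, 0);

	printf("selected %s\n", f == tx_check ? "tx_check" : "other");
	return 0;
}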
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 89db82c694..74dd72ce68 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3796,6 +3796,97 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts,
 			tx_pkts, nb_pkts);
 }
 
+/* Tx mbuf check */
+static uint16_t
+iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	uint16_t idx;
+	uint64_t ol_flags;
+	struct rte_mbuf *mb;
+	uint16_t good_pkts = nb_pkts;
+	const char *reason = NULL;
+	bool pkt_error = false;
+	struct iavf_tx_queue *txq = tx_queue;
+	struct iavf_adapter *adapter = txq->vsi->adapter;
+	enum iavf_tx_burst_type tx_burst_type =
+		txq->vsi->adapter->tx_burst_type;
+
+	for (idx = 0; idx < nb_pkts; idx++) {
+		mb = tx_pkts[idx];
+		ol_flags = mb->ol_flags;
+
+		if ((adapter->mc_flags & IAVF_MBUF_CHECK_F_TX_MBUF) &&
+		    (rte_mbuf_check(mb, 1, &reason) != 0)) {
+			PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
+			pkt_error = true;
+			break;
+		}
+
+		if ((adapter->mc_flags & IAVF_MBUF_CHECK_F_TX_SIZE) &&
+		    (mb->data_len < IAVF_TX_MIN_PKT_LEN ||
+		     mb->data_len > adapter->vf.max_pkt_len)) {
+			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out "
+			"of range, reasonable range (%d - %u)\n", mb->data_len,
+			IAVF_TX_MIN_PKT_LEN, adapter->vf.max_pkt_len);
+			pkt_error = true;
+			break;
+		}
+
+		if (adapter->mc_flags & IAVF_MBUF_CHECK_F_TX_SEGMENT) {
+			/* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
+			if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) {
+				if (mb->nb_segs > IAVF_TX_MAX_MTU_SEG) {
+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds "
+					"HW limit, maximum allowed value is %d\n", mb->nb_segs,
+					IAVF_TX_MAX_MTU_SEG);
+					pkt_error = true;
+					break;
+				}
+			} else if ((mb->tso_segsz < IAVF_MIN_TSO_MSS) ||
+				   (mb->tso_segsz > IAVF_MAX_TSO_MSS)) {
+				/* MSS outside the range are considered malicious */
+				PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out "
+				"of range, reasonable range (%d - %u)\n", mb->tso_segsz,
+				IAVF_MIN_TSO_MSS, IAVF_MAX_TSO_MSS);
+				pkt_error = true;
+				break;
+			} else if (mb->nb_segs > txq->nb_tx_desc) {
+				PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out "
+				"of ring length\n");
+				pkt_error = true;
+				break;
+			}
+		}
+
+		if (adapter->mc_flags & IAVF_MBUF_CHECK_F_TX_OFFLOAD) {
+			if (ol_flags & IAVF_TX_OFFLOAD_NOTSUP_MASK) {
+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload "
+				"is not supported\n");
+				pkt_error = true;
+				break;
+			}
+
+			if (!rte_validate_tx_offload(mb)) {
+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload "
+				"setup error\n");
+				pkt_error = true;
+				break;
+			}
+		}
+	}
+
+	if (pkt_error) {
+		txq->mbuf_errors++;
+		good_pkts = idx;
+		if (good_pkts == 0)
+			return 0;
+	}
+
+	return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue,
+				tx_pkts, good_pkts);
+}
+
 /* choose rx function*/
 void
 iavf_set_rx_function(struct rte_eth_dev *dev)
@@ -4041,6 +4132,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct iavf_adapter *adapter =
 		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	enum iavf_tx_burst_type tx_burst_type;
+	int mbuf_check = adapter->devargs.mbuf_check;
 	int no_poll_on_link_down = adapter->devargs.no_poll_on_link_down;
 #ifdef RTE_ARCH_X86
 	struct iavf_tx_queue *txq;
@@ -4131,6 +4223,9 @@
 	if (no_poll_on_link_down) {
 		adapter->tx_burst_type = tx_burst_type;
 		dev->tx_pkt_burst = iavf_xmit_pkts_no_poll;
+	} else if (mbuf_check) {
+		adapter->tx_burst_type = tx_burst_type;
+		dev->tx_pkt_burst = iavf_xmit_pkts_check;
 	} else {
 		dev->tx_pkt_burst = iavf_tx_pkt_burst_ops[tx_burst_type];
 	}
@@ -4147,6 +4242,9 @@
 	if (no_poll_on_link_down) {
 		adapter->tx_burst_type = tx_burst_type;
 		dev->tx_pkt_burst = iavf_xmit_pkts_no_poll;
+	} else if (mbuf_check) {
+		adapter->tx_burst_type = tx_burst_type;
+		dev->tx_pkt_burst = iavf_xmit_pkts_check;
 	} else {
 		dev->tx_pkt_burst = iavf_tx_pkt_burst_ops[tx_burst_type];
 	}
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index f432f9d956..90e7291928 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -297,6 +297,8 @@ struct iavf_tx_queue {
 	uint16_t next_rs;              /* next to check DD, for VPMD */
 	uint16_t ipsec_crypto_pkt_md_offset;
 
+	uint64_t mbuf_errors;
+
 	bool q_set;                    /* if rx queue has been configured */
 	bool tx_deferred_start;        /* don't start this queue in dev start */
 	const struct iavf_txq_ops *ops;
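The net behavior of iavf_xmit_pkts_check() above: scan the burst until
the first invalid packet, count one error event on the queue, and hand
only the leading valid packets to the real burst function. A minimal
standalone model of those good-prefix semantics follows; the validity
rule, length bounds, and all names are hypothetical stand-ins for the
driver's checks.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pkt { uint16_t data_len; };
struct txq { uint64_t mbuf_errors; };

static bool pkt_valid(const struct pkt *p)
{
	/* Stand-in for the size check (e.g. HW min/max packet length). */
	return p->data_len >= 17 && p->data_len <= 9728;
}

static uint16_t real_burst(struct txq *q, struct pkt **pkts, uint16_t n)
{
	(void)q; (void)pkts;
	return n; /* pretend the HW accepts everything it is given */
}

static uint16_t xmit_pkts_check(struct txq *q, struct pkt **pkts, uint16_t n)
{
	uint16_t good = n, i;

	for (i = 0; i < n; i++) {
		if (!pkt_valid(pkts[i])) {
			q->mbuf_errors++;   /* one error event per burst */
			good = i;           /* transmit only the valid prefix */
			break;
		}
	}
	if (good == 0)
		return 0;
	return real_burst(q, pkts, good);
}

int main(void)
{
	struct txq q = { 0 };
	struct pkt a = { 64 }, b = { 4 }, c = { 128 };
	struct pkt *burst[3] = { &a, &b, &c };

	/* b is invalid, so only a is sent and one error is counted. */
	printf("sent %u, errors %llu\n",
	       (unsigned)xmit_pkts_check(&q, burst, 3),
	       (unsigned long long)q.mbuf_errors);
	return 0;
}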