From patchwork Fri Mar 4 12:08:31 2022
From: Devendra Singh Rawat <dsinghrawat@marvell.com>
Subject: [PATCH 1/3] net/qede: fix Tx callback completion routine
Date: Fri, 4 Mar 2022 17:38:31 +0530
Message-ID: <20220304120833.312776-1-dsinghrawat@marvell.com>
X-Patchwork-Id: 108536
List-Id: DPDK patches and discussions

The Tx completion routine first incremented the number of free slots in
the Tx ring and then freed the corresponding mbufs in bulk. In some
situations the number of mbufs freed was less than the number of Tx ring
slots freed. This put the Tx ring into an inconsistent state, and the
application ultimately failed to transmit further traffic.

The fix updates the Tx ring SW consumer index, increments the Tx ring
free-slot count and frees the mbuf within a single loop iteration, so
all three advance together.
Fixes: 2c41740bf19e ("net/qede: get consumer index once")
Fixes: 4996b959cde6 ("net/qede: free packets in bulk")
Cc: stable@dpdk.org

Signed-off-by: Devendra Singh Rawat
Signed-off-by: Rasesh Mody
---
 drivers/net/qede/qede_rxtx.c | 79 +++++++++++++++---------------------
 1 file changed, 33 insertions(+), 46 deletions(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 911bb1a260..0c52568180 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -885,68 +885,55 @@ qede_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
 }
 
 static inline void
-qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
-                      struct qede_tx_queue *txq)
+qede_free_tx_pkt(struct qede_tx_queue *txq)
 {
-    uint16_t hw_bd_cons;
-    uint16_t sw_tx_cons;
-    uint16_t remaining;
-    uint16_t mask;
     struct rte_mbuf *mbuf;
     uint16_t nb_segs;
     uint16_t idx;
-    uint16_t first_idx;
-
-    rte_compiler_barrier();
-    rte_prefetch0(txq->hw_cons_ptr);
-    sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
-    hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
-#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
-    PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
-               abs(hw_bd_cons - sw_tx_cons));
-#endif
-
-    mask = NUM_TX_BDS(txq);
-    idx = txq->sw_tx_cons & mask;
-    remaining = hw_bd_cons - sw_tx_cons;
-    txq->nb_tx_avail += remaining;
-    first_idx = idx;
-
-    while (remaining) {
-        mbuf = txq->sw_tx_ring[idx];
-        RTE_ASSERT(mbuf);
+    idx = TX_CONS(txq);
+    mbuf = txq->sw_tx_ring[idx];
+    if (mbuf) {
         nb_segs = mbuf->nb_segs;
-        remaining -= nb_segs;
-
-        /* Prefetch the next mbuf. Note that at least the last 4 mbufs
-         * that are prefetched will not be used in the current call.
-         */
-        rte_mbuf_prefetch_part1(txq->sw_tx_ring[(idx + 4) & mask]);
-        rte_mbuf_prefetch_part2(txq->sw_tx_ring[(idx + 4) & mask]);
-
         PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
         while (nb_segs) {
+            /* It's like consuming rxbuf in recv() */
             ecore_chain_consume(&txq->tx_pbl);
+            txq->nb_tx_avail++;
             nb_segs--;
         }
-
-        idx = (idx + 1) & mask;
+        rte_pktmbuf_free(mbuf);
+        txq->sw_tx_ring[idx] = NULL;
+        txq->sw_tx_cons++;
         PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
-    }
-    txq->sw_tx_cons = idx;
-
-    if (first_idx > idx) {
-        rte_pktmbuf_free_bulk(&txq->sw_tx_ring[first_idx],
-                              mask - first_idx + 1);
-        rte_pktmbuf_free_bulk(&txq->sw_tx_ring[0], idx);
     } else {
-        rte_pktmbuf_free_bulk(&txq->sw_tx_ring[first_idx],
-                              idx - first_idx);
+        ecore_chain_consume(&txq->tx_pbl);
+        txq->nb_tx_avail++;
     }
 }
 
+static inline void
+qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
+                      struct qede_tx_queue *txq)
+{
+    uint16_t hw_bd_cons;
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+    uint16_t sw_tx_cons;
+#endif
+
+    hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
+    /* read barrier prevents speculative execution on stale data */
+    rte_rmb();
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+    sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
+    PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+               abs(hw_bd_cons - sw_tx_cons));
+#endif
+
+    while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl))
+        qede_free_tx_pkt(txq);
+}
+
 static int
 qede_drain_txq(struct qede_dev *qdev,
                struct qede_tx_queue *txq, bool allow_drain)
 {
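To make the ordering guarantee concrete, here is a minimal, self-contained
sketch of the per-packet completion pattern this patch adopts. The demo_*
types and helpers are simplified stand-ins invented for illustration, not
the qede driver's structures: each ring entry here holds one packet and
nb_tx_avail counts BD slots, so the sketch blurs some detail of the real
BD chain. What it demonstrates is that the SW consumer index, the
free-slot count and the mbuf free all advance inside one loop iteration,
so they can never drift apart the way the up-front
"txq->nb_tx_avail += remaining" followed by a deferred bulk free could.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DEMO_RING_SIZE 8                /* power of two, like the BD ring */
#define DEMO_RING_MASK (DEMO_RING_SIZE - 1)

struct demo_mbuf {
    uint16_t nb_segs;                   /* BD slots this packet occupies */
};

struct demo_txq {
    struct demo_mbuf *sw_tx_ring[DEMO_RING_SIZE];
    uint16_t sw_tx_cons;                /* SW consumer index */
    uint16_t nb_tx_avail;               /* free BD slots */
};

/* Simplified analogue of the patch's qede_free_tx_pkt(): complete exactly
 * one packet, updating the consumer index and free-slot count in the same
 * step as the free.
 */
static void demo_free_tx_pkt(struct demo_txq *txq)
{
    uint16_t idx = txq->sw_tx_cons & DEMO_RING_MASK;
    struct demo_mbuf *mbuf = txq->sw_tx_ring[idx];

    if (mbuf != NULL) {
        txq->nb_tx_avail += mbuf->nb_segs;  /* one slot per segment */
        free(mbuf);
        txq->sw_tx_ring[idx] = NULL;
    } else {
        txq->nb_tx_avail++;             /* slot without an mbuf attached */
    }
    txq->sw_tx_cons++;
}

int main(void)
{
    struct demo_txq txq = { .nb_tx_avail = DEMO_RING_SIZE };
    uint16_t i;

    /* "Transmit" two packets of 2 and 3 segments. */
    for (i = 0; i < 2; i++) {
        struct demo_mbuf *m = malloc(sizeof(*m));

        if (m == NULL)
            return 1;
        m->nb_segs = i + 2;
        txq.sw_tx_ring[i & DEMO_RING_MASK] = m;
        txq.nb_tx_avail -= m->nb_segs;
    }

    /* Completion path: free one whole packet per iteration. */
    while (txq.sw_tx_cons != 2)
        demo_free_tx_pkt(&txq);

    printf("free slots after completion: %u\n", txq.nb_tx_avail);
    return 0;
}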
From patchwork Fri Mar 4 12:08:32 2022
From: Devendra Singh Rawat <dsinghrawat@marvell.com>
Subject: [PATCH 2/3] net/qede: fix Rx callback
Date: Fri, 4 Mar 2022 17:38:32 +0530
Message-ID: <20220304120833.312776-2-dsinghrawat@marvell.com>
In-Reply-To: <20220304120833.312776-1-dsinghrawat@marvell.com>
X-Patchwork-Id: 108537
List-Id: DPDK patches and discussions

qede_alloc_rx_bulk_mbufs() trimmed the requested mbuf count down to
QEDE_MAX_BULK_ALLOC_COUNT. The Rx callback was unaware of this trimming
and always reset the count of empty Rx BD ring slots to 0. This put the
Rx BD ring into an inconsistent state, and the application ultimately
failed to receive any traffic.

The fix trims the requested mbuf count before calling
qede_alloc_rx_bulk_mbufs(). After qede_alloc_rx_bulk_mbufs() returns
successfully, the count of empty Rx BD ring slots is decremented by the
trimmed count.
Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")
Cc: stable@dpdk.org

Signed-off-by: Devendra Singh Rawat
Signed-off-by: Rasesh Mody
---
 drivers/net/qede/qede_rxtx.c | 68 ++++++++++++++++--------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 0c52568180..02fa1fcaa1 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -38,48 +38,40 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 static inline int
 qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count)
 {
+    void *obj_p[QEDE_MAX_BULK_ALLOC_COUNT] __rte_cache_aligned;
     struct rte_mbuf *mbuf = NULL;
     struct eth_rx_bd *rx_bd;
     dma_addr_t mapping;
     int i, ret = 0;
     uint16_t idx;
-    uint16_t mask = NUM_RX_BDS(rxq);
-
-    if (count > QEDE_MAX_BULK_ALLOC_COUNT)
-        count = QEDE_MAX_BULK_ALLOC_COUNT;
 
     idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
-    if (count > mask - idx + 1)
-        count = mask - idx + 1;
-
-    ret = rte_mempool_get_bulk(rxq->mb_pool, (void **)&rxq->sw_rx_ring[idx],
-                               count);
-
+    ret = rte_mempool_get_bulk(rxq->mb_pool, obj_p, count);
     if (unlikely(ret)) {
         PMD_RX_LOG(ERR, rxq,
                    "Failed to allocate %d rx buffers "
                    "sw_rx_prod %u sw_rx_cons %u mp entries %u free %u",
-                   count,
-                   rxq->sw_rx_prod & NUM_RX_BDS(rxq),
-                   rxq->sw_rx_cons & NUM_RX_BDS(rxq),
+                   count, idx, rxq->sw_rx_cons & NUM_RX_BDS(rxq),
                    rte_mempool_avail_count(rxq->mb_pool),
                    rte_mempool_in_use_count(rxq->mb_pool));
         return -ENOMEM;
     }
 
     for (i = 0; i < count; i++) {
-        rte_prefetch0(rxq->sw_rx_ring[(idx + 1) & NUM_RX_BDS(rxq)]);
-        mbuf = rxq->sw_rx_ring[idx & NUM_RX_BDS(rxq)];
+        mbuf = obj_p[i];
+        if (likely(i < count - 1))
+            rte_prefetch0(obj_p[i + 1]);
 
+        idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
+        rxq->sw_rx_ring[idx] = mbuf;
         mapping = rte_mbuf_data_iova_default(mbuf);
         rx_bd = (struct eth_rx_bd *)
             ecore_chain_produce(&rxq->rx_bd_ring);
         rx_bd->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
         rx_bd->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
-        idx++;
+        rxq->sw_rx_prod++;
     }
 
-    rxq->sw_rx_prod = idx;
-
     return 0;
 }
 
@@ -1544,25 +1536,26 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
     uint8_t bitfield_val;
 #endif
     uint8_t offset, flags, bd_num;
-
+    uint16_t count = 0;
 
     /* Allocate buffers that we used in previous loop */
     if (rxq->rx_alloc_count) {
-        if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-                     rxq->rx_alloc_count))) {
+        count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+            QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+        if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
             struct rte_eth_dev *dev;
 
             PMD_RX_LOG(ERR, rxq,
-                       "New buffer allocation failed,"
-                       "dropping incoming packetn");
+                       "New buffers allocation failed,"
+                       "dropping incoming packets\n");
             dev = &rte_eth_devices[rxq->port_id];
-            dev->data->rx_mbuf_alloc_failed +=
-                                            rxq->rx_alloc_count;
-            rxq->rx_alloc_errors += rxq->rx_alloc_count;
+            dev->data->rx_mbuf_alloc_failed += count;
+            rxq->rx_alloc_errors += count;
             return 0;
         }
         qede_update_rx_prod(qdev, rxq);
-        rxq->rx_alloc_count = 0;
+        rxq->rx_alloc_count -= count;
     }
 
     hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -1731,7 +1724,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
     }
 
     /* Request number of buffers to be allocated in next loop */
-    rxq->rx_alloc_count = rx_alloc_count;
+    rxq->rx_alloc_count += rx_alloc_count;
 
     rxq->rcv_pkts += rx_pkt;
     rxq->rx_segs += rx_pkt;
@@ -1771,25 +1764,26 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
     struct qede_agg_info *tpa_info = NULL;
     uint32_t rss_hash;
     int rx_alloc_count = 0;
-
+    uint16_t count = 0;
 
     /* Allocate buffers that we used in previous loop */
     if (rxq->rx_alloc_count) {
-        if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-                     rxq->rx_alloc_count))) {
+        count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+            QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+        if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
             struct rte_eth_dev *dev;
 
             PMD_RX_LOG(ERR, rxq,
-                       "New buffer allocation failed,"
-                       "dropping incoming packetn");
+                       "New buffers allocation failed,"
+                       "dropping incoming packets\n");
             dev = &rte_eth_devices[rxq->port_id];
-            dev->data->rx_mbuf_alloc_failed +=
-                                            rxq->rx_alloc_count;
-            rxq->rx_alloc_errors += rxq->rx_alloc_count;
+            dev->data->rx_mbuf_alloc_failed += count;
+            rxq->rx_alloc_errors += count;
             return 0;
         }
         qede_update_rx_prod(qdev, rxq);
-        rxq->rx_alloc_count = 0;
+        rxq->rx_alloc_count -= count;
     }
 
     hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -2028,7 +2022,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
     }
 
     /* Request number of buffers to be allocated in next loop */
-    rxq->rx_alloc_count = rx_alloc_count;
+    rxq->rx_alloc_count += rx_alloc_count;
 
     rxq->rcv_pkts += rx_pkt;
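The bookkeeping change is easiest to see in isolation. The sketch below is
an illustrative model, not driver code: try_bulk_alloc() is a hypothetical
stand-in for qede_alloc_rx_bulk_mbufs(), and the QEDE_MAX_BULK_ALLOC_COUNT
value is made up for the demo. It shows the clamp-then-decrement pattern:
the caller trims its request to the bulk ceiling itself and subtracts
exactly what it requested, instead of zeroing the outstanding count while
the allocator may have silently trimmed the request.

#include <stdint.h>
#include <stdio.h>

#define QEDE_MAX_BULK_ALLOC_COUNT 128   /* illustrative value only */

/* Hypothetical stand-in for qede_alloc_rx_bulk_mbufs(): pretend the
 * allocation of 'count' buffers always succeeds.
 */
static int try_bulk_alloc(uint16_t count)
{
    (void)count;
    return 0;
}

int main(void)
{
    uint16_t rx_alloc_count = 300;  /* slots emptied by previous polls */
    uint16_t count;

    /* Old behaviour: request 300, the allocator silently trims to 128,
     * the caller resets rx_alloc_count to 0 -- 172 empty slots are never
     * refilled. New behaviour: clamp first, decrement by the clamped
     * request.
     */
    while (rx_alloc_count) {
        count = rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
            QEDE_MAX_BULK_ALLOC_COUNT : rx_alloc_count;
        if (try_bulk_alloc(count))
            break;
        rx_alloc_count -= count;
        printf("allocated %u, %u still outstanding\n",
               count, rx_alloc_count);
    }
    return 0;
}

In the driver itself the remainder is not retried in a loop as above; it
stays in rxq->rx_alloc_count and is topped up on the next invocation of
the receive callback, which is why the patch also changes the final
assignment from '=' to '+=' in both Rx functions.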
From patchwork Fri Mar 4 12:08:33 2022
From: Devendra Singh Rawat <dsinghrawat@marvell.com>
Subject: [PATCH 3/3] net/qede: fix max Rx pktlen calculation
Date: Fri, 4 Mar 2022 17:38:33 +0530
Message-ID: <20220304120833.312776-3-dsinghrawat@marvell.com>
In-Reply-To: <20220304120833.312776-1-dsinghrawat@marvell.com>
X-Patchwork-Id: 108538
List-Id: DPDK patches and discussions

The size of the CRC was not being added to max_rx_pktlen; because of
this, larger packets (of sizes 1480, 1490 and 1500 bytes) were being
dropped. This fix adds RTE_ETHER_CRC_LEN to max_rx_pktlen.

Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
Cc: stable@dpdk.org

Signed-off-by: Devendra Singh Rawat
Signed-off-by: Rasesh Mody
---
 drivers/net/qede/qede_rxtx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 02fa1fcaa1..c35585f5fd 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -235,7 +235,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
         dev->data->rx_queues[qid] = NULL;
     }
 
-    max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
+    max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
 
     /* Fix up RX buffer size */
     bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
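The arithmetic behind the one-line change, for a standard 1500-byte MTU:
the hardware counts the 4-byte CRC toward the received frame length, so a
limit computed without RTE_ETHER_CRC_LEN rejects full-sized frames. A
quick sketch follows; the two constants are redefined locally with their
rte_ether.h values so the snippet stands alone.

#include <stdint.h>
#include <stdio.h>

#define RTE_ETHER_HDR_LEN 14    /* dst MAC + src MAC + EtherType */
#define RTE_ETHER_CRC_LEN 4     /* frame check sequence */

int main(void)
{
    uint16_t mtu = 1500;

    /* Before the fix: 1500 + 14 = 1514, so a full-sized frame of
     * 1518 bytes on the wire (payload + header + CRC) is dropped.
     */
    printf("old limit: %u\n", mtu + RTE_ETHER_HDR_LEN);

    /* After the fix: 1500 + 14 + 4 = 1518, matching the wire size. */
    printf("new limit: %u\n",
           mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
    return 0;
}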