From patchwork Thu Aug 27 07:54:49 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76069
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com,
 qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com,
 helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com,
 barbette@kth.se
Date: Thu, 27 Aug 2020 15:54:49 +0800
Message-Id: <20200827075452.1751-2-jia.guo@intel.com>
In-Reply-To: <20200827075452.1751-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v1 1/4] net/ixgbe: maximize vector rx burst for ixgbe

Remove the burst size limitation in the vector Rx path so that it
retrieves as many received packets as possible, and route the scattered
receive path through a wrapper function that splits larger requests
into bursts of at most RTE_IXGBE_MAX_RX_BURST, maximizing the effective
burst size.
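As a caller-side illustration (a sketch only: port_id, queue_id and the
128-slot array are hypothetical, and RTE_IXGBE_MAX_RX_BURST is assumed
to be 32 in this tree):

    #include <rte_ethdev.h>   /* rte_eth_rx_burst() */
    #include <rte_mbuf.h>

    struct rte_mbuf *pkts[128];
    /* Previously the vector Rx path silently clipped nb_pkts to
     * RTE_IXGBE_MAX_RX_BURST, so at most 32 packets came back per
     * call even with a larger backlog in the ring; with the wrapper,
     * a single call can now return up to 128 packets.
     */
    uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 128);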
Signed-off-by: Jeff Guo
---
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 51 ++++++++++++++++++++-----
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c  | 48 ++++++++++++++++-------
 2 files changed, 76 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index aa27ee177..95e40aa13 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -206,6 +206,13 @@ desc_to_ptype_v(uint64x2_t descs[4], uint16_t pkt_type_mask,
 		vgetq_lane_u32(tunnel_check, 3));
 }

+/**
+ * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
+ *
+ * Notice:
+ * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
+ * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
+ */
 static inline uint16_t
 _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		   uint16_t nb_pkts, uint8_t *split_packet)
@@ -226,9 +233,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	uint16x8_t crc_adjust = {0, 0, rxq->crc_len, 0,
 				 rxq->crc_len, 0, 0, 0};

-	/* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_IXGBE_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_IXGBE_DESCS_PER_LOOP);

@@ -382,7 +386,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }

-/*
+/**
  * vPMD receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
  *
  * Notice:
@@ -399,19 +403,17 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }

-/*
+/**
  * vPMD receive routine that reassembles scattered packets
  *
  * Notice:
  * - don't support ol_flags for rss and csum err
  * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
  * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  */
-uint16_t
-ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			      uint16_t nb_pkts)
+static uint16_t
+ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
 {
 	struct ixgbe_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0};
@@ -443,6 +445,35 @@ ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }

+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
+ * Notice:
+ * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
+ */
+uint16_t
+ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_IXGBE_MAX_RX_BURST) {
+		uint16_t burst;
+
+		burst = ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       RTE_IXGBE_MAX_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_IXGBE_MAX_RX_BURST)
+			return retval;
+	}
+
+	return retval + ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       nb_pkts);
+}
+
 static inline void
 vtx1(volatile union ixgbe_adv_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 517ca3166..96b207f94 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -302,13 +302,11 @@ desc_to_ptype_v(__m128i descs[4], uint16_t pkt_type_mask,
 		get_packet_type(3, pkt_info, etqf_check, tunnel_check);
 }

-/*
+/**
  * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
  *
  * Notice:
  * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
  * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  */
 static inline uint16_t
@@ -344,9 +342,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	__m128i mbuf_init;
 	uint8_t vlan_flags;

-	/* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_IXGBE_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_IXGBE_DESCS_PER_LOOP);

@@ -556,7 +551,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }

-/*
+/**
  * vPMD receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
  *
  * Notice:
@@ -572,18 +567,16 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }

-/*
+/**
  * vPMD receive routine that reassembles scattered packets
  *
  * Notice:
  * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
  * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  */
-uint16_t
-ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			      uint16_t nb_pkts)
+static uint16_t
+ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
 {
 	struct ixgbe_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0};
@@ -615,6 +608,35 @@ ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }

+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
+ * Notice:
+ * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
+ */
+uint16_t
+ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_IXGBE_MAX_RX_BURST) {
+		uint16_t burst;
+
+		burst = ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       RTE_IXGBE_MAX_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_IXGBE_MAX_RX_BURST)
+			return retval;
+	}
+
+	return retval + ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       nb_pkts);
+}
+
 static inline void
 vtx1(volatile union ixgbe_adv_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)

From patchwork Thu Aug 27 07:54:50 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76070
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com,
 qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com,
 helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com,
 barbette@kth.se
Date: Thu, 27 Aug 2020 15:54:50 +0800
Message-Id: <20200827075452.1751-3-jia.guo@intel.com>
In-Reply-To: <20200827075452.1751-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v1 2/4] net/i40e: maximize vector rx burst for i40e

Remove the burst size limitation in the vector Rx path so that it
retrieves as many received packets as possible, and route the scattered
receive path through a wrapper function that splits larger requests
into bursts of at most RTE_I40E_VPMD_RX_BURST, maximizing the effective
burst size.
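Note that the raw routine still floor-aligns the request; a small
illustration of that arithmetic (RTE_ALIGN_FLOOR is the rte_common.h
macro, and RTE_I40E_DESCS_PER_LOOP is assumed to be 4 here):

    /* RTE_ALIGN_FLOOR(val, align) == val & ~(align - 1) for a
     * power-of-two align, so a request for 35 packets is processed
     * as 32 in one inner burst; the remainder is picked up by the
     * next call.
     */
    nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP);
    /* e.g. nb_pkts == 35  ->  35 & ~3 == 32 */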
Signed-off-by: Jeff Guo
---
 drivers/net/i40e/i40e_rxtx_vec_altivec.c | 60 ++++++++++++++++--------
 drivers/net/i40e/i40e_rxtx_vec_avx2.c    |  6 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c    | 49 +++++++++++++------
 drivers/net/i40e/i40e_rxtx_vec_sse.c     | 51 ++++++++++++++------
 4 files changed, 115 insertions(+), 51 deletions(-)

diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 6862a017e..d26b13c33 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -190,8 +190,6 @@ desc_to_ptype_v(vector unsigned long descs[4], struct rte_mbuf **rx_pkts,

 /* Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
 static inline uint16_t
 _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -214,9 +212,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	};
 	vector unsigned long dd_check, eop_check;

-	/* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP);

@@ -447,11 +442,10 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }

- /* Notice:
-  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
-  * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
-  *   numbers of DD bits
-  */
+/**
+ * Notice:
+ * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
+ */
 uint16_t
 i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		   uint16_t nb_pkts)
@@ -459,15 +453,14 @@ i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }

- /* vPMD receive routine that reassembles scattered packets
-  * Notice:
-  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
-  * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
-  *   numbers of DD bits
-  */
-uint16_t
-i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+/**
+ * vPMD receive routine that reassembles scattered packets
+ * Notice:
+ * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
+ */
+static uint16_t
+i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[RTE_I40E_VPMD_RX_BURST] = {0};
@@ -500,6 +493,35 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }

+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
+ * Notice:
+ * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
+ */
+uint16_t
+i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_I40E_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      RTE_I40E_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_I40E_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 3bcef1363..b3d3153f1 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -729,7 +729,7 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return received;
 }

-/*
+/**
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
  */
@@ -740,7 +740,7 @@ i40e_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec_avx2(rx_queue, rx_pkts, nb_pkts, NULL);
 }

-/*
+/**
  * vPMD receive routine that reassembles single burst of 32 scattered packets
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
@@ -781,7 +781,7 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }

-/*
+/**
  * vPMD receive routine that reassembles scattered packets.
  * Main receive routine that can handle arbitrary burst sizes
  * Notice:
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 6f874e45b..5cbef7f57 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -190,8 +190,6 @@ desc_to_ptype_v(uint64x2_t descs[4], struct rte_mbuf **__rte_restrict rx_pkts,
 /*
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
 static inline uint16_t
 _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
@@ -230,9 +228,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
 		0, 0, 0 /* ignore non-length fields */
 	};

-	/* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP);

@@ -426,11 +421,9 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
 	return nb_pkts_recd;
 }

- /*
+/**
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
 uint16_t
 i40e_recv_pkts_vec(void *__rte_restrict rx_queue,
@@ -439,15 +432,14 @@ i40e_recv_pkts_vec(void *__rte_restrict rx_queue,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }

- /* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles scattered packets
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
-uint16_t
-i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+static uint16_t
+i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq = rx_queue;

@@ -482,6 +474,35 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }

+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
+ * Notice:
+ * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
+ */
+uint16_t
+i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_I40E_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      RTE_I40E_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_I40E_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 698518349..6a4bd4f45 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -342,11 +342,9 @@ desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts,
 	rx_pkts[3]->packet_type = ptype_tbl[_mm_extract_epi8(ptype1, 8)];
 }

- /*
+/**
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
 static inline uint16_t
 _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -378,9 +376,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
 	__m128i dd_check, eop_check;

-	/* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP);

@@ -592,11 +587,9 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }

- /*
+/**
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
 uint16_t
 i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
@@ -605,15 +598,14 @@ i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }

- /* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles scattered packets
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
-uint16_t
-i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+static uint16_t
+i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq = rx_queue;

@@ -648,6 +640,35 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }

+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
+ * Notice:
+ * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
+ */
+uint16_t
+i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_I40E_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      RTE_I40E_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_I40E_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)

From patchwork Thu Aug 27 07:54:51 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76071
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com,
 qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com,
 helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com,
 barbette@kth.se
Date: Thu, 27 Aug 2020 15:54:51 +0800
Message-Id: <20200827075452.1751-4-jia.guo@intel.com>
In-Reply-To: <20200827075452.1751-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v1 3/4] net/ice: maximize vector rx burst for ice

Remove the burst size limitation in the vector Rx path so that it
retrieves as many received packets as possible, and route the scattered
receive path through a wrapper function that splits larger requests
into bursts of at most ICE_VPMD_RX_BURST, maximizing the effective
burst size.
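The wrapper pattern is identical across this series; an annotated
restatement of the loop added below, to make the termination argument
explicit (names as in this patch):

    uint16_t retval = 0;

    /* Split an arbitrary request into inner bursts of at most
     * ICE_VPMD_RX_BURST packets each.
     */
    while (nb_pkts > ICE_VPMD_RX_BURST) {
        uint16_t burst = ice_recv_scattered_burst_vec(rx_queue,
                rx_pkts + retval, ICE_VPMD_RX_BURST);

        retval += burst;
        nb_pkts -= burst;
        /* A short inner burst means the Rx ring is drained, so stop
         * early rather than issue another, empty burst.
         */
        if (burst < ICE_VPMD_RX_BURST)
            return retval;
    }

    /* Tail burst: nb_pkts <= ICE_VPMD_RX_BURST by the loop condition,
     * so the fixed-size split_flags scratch array inside the burst
     * function cannot be overrun.
     */
    return retval + ice_recv_scattered_burst_vec(rx_queue,
            rx_pkts + retval, nb_pkts);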
Signed-off-by: Jeff Guo
---
 drivers/net/ice/ice_rxtx_vec_sse.c | 47 +++++++++++++++++++++----------
 1 file changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 382ef31f3..05f2efa10 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -207,8 +207,6 @@ ice_rx_desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts,
 /**
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
- *   numbers of DD bits
  */
 static inline uint16_t
 _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -264,9 +262,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m128i eop_check = _mm_set_epi64x(0x0000000200000002LL,
 						 0x0000000200000002LL);

-	/* nb_pkts shall be less equal than ICE_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, ICE_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to ICE_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, ICE_DESCS_PER_LOOP);

@@ -444,8 +439,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 /**
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
- *   numbers of DD bits
  */
 uint16_t
 ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
@@ -454,15 +447,14 @@ ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _ice_recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }

-/* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles scattered packets
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
- *   numbers of DD bits
  */
-uint16_t
-ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			    uint16_t nb_pkts)
+static uint16_t
+ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
 {
 	struct ice_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[ICE_VPMD_RX_BURST] = {0};
@@ -496,6 +488,35 @@ ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					       &split_flags[i]);
 }

+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
+ * Notice:
+ * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
+ */
+uint16_t
+ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > ICE_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = ice_recv_scattered_burst_vec(rx_queue,
+						     rx_pkts + retval,
+						     ICE_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < ICE_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + ice_recv_scattered_burst_vec(rx_queue,
+						     rx_pkts + retval,
+						     nb_pkts);
+}
+
 static inline void
 ice_vtx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf *pkt,
 	 uint64_t flags)

From patchwork Thu Aug 27 07:54:52 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76072
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com,
 qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com,
 helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com,
 barbette@kth.se
Date: Thu, 27 Aug 2020 15:54:52 +0800
Message-Id: <20200827075452.1751-5-jia.guo@intel.com>
In-Reply-To: <20200827075452.1751-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v1 4/4] net/iavf: maximize vector rx burst for iavf

Remove the burst size limitation in the vector Rx path so that it
retrieves as many received packets as possible, and route the scattered
receive paths (legacy and flex RxD) through wrapper functions that
split larger requests into bursts of at most IAVF_VPMD_RX_MAX_BURST,
maximizing the effective burst size.
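Since this leaves iavf with two nearly identical wrappers (legacy and
flex RxD), the splitting loop could in principle be shared through a
function pointer; a hypothetical refactor sketch, not part of this
patch:

    #include <rte_mbuf.h>

    typedef uint16_t (*iavf_burst_fn_t)(void *rx_queue,
                                        struct rte_mbuf **rx_pkts,
                                        uint16_t nb_pkts);

    static uint16_t
    iavf_recv_scattered_loop(iavf_burst_fn_t burst_fn, void *rx_queue,
                             struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
    {
        uint16_t retval = 0;

        while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) {
            uint16_t burst = burst_fn(rx_queue, rx_pkts + retval,
                                      IAVF_VPMD_RX_MAX_BURST);

            retval += burst;
            nb_pkts -= burst;
            if (burst < IAVF_VPMD_RX_MAX_BURST)
                return retval;
        }
        return retval + burst_fn(rx_queue, rx_pkts + retval, nb_pkts);
    }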
Signed-off-by: Jeff Guo
---
 drivers/net/iavf/iavf_rxtx_vec_sse.c | 102 ++++++++++++++++++++-------
 1 file changed, 76 insertions(+), 26 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 85c5bd4af..a4c97e77d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -379,10 +379,9 @@ flex_desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts,
 	rx_pkts[3]->packet_type = type_table[_mm_extract_epi16(ptype_all, 7)];
 }

-/* Notice:
+/**
+ * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
  */
 static inline uint16_t
 _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -413,9 +412,6 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
 	__m128i dd_check, eop_check;

-	/* nb_pkts shall be less equal than IAVF_VPMD_RX_MAX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST);
-
 	/* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP);

@@ -627,10 +623,9 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }

-/* Notice:
+/**
+ * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
  */
 static inline uint16_t
 _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
@@ -688,9 +683,6 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 	const __m128i eop_check = _mm_set_epi64x(0x0000000200000002LL,
 						 0x0000000200000002LL);

-	/* nb_pkts shall be less equal than IAVF_VPMD_RX_MAX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST);
-
 	/* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP);

@@ -921,7 +913,8 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 	return nb_pkts_recd;
 }

-/* Notice:
+/**
+ * Notice:
  * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet
  * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
  *   numbers of DD bits
@@ -933,7 +926,8 @@ iavf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }

-/* Notice:
+/**
+ * Notice:
  * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet
  * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
  *   numbers of DD bits
@@ -945,15 +939,14 @@ iavf_recv_pkts_vec_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec_flex_rxd(rx_queue, rx_pkts, nb_pkts, NULL);
 }

-/* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles scattered packets
  * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
  */
-uint16_t
-iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+static uint16_t
+iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct iavf_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0};
@@ -986,16 +979,44 @@ iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }

-/* vPMD receive routine that reassembles scattered packets for flex RxD
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
  * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
  */
 uint16_t
-iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue,
-				      struct rte_mbuf **rx_pkts,
-				      uint16_t nb_pkts)
+iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) {
+		uint16_t burst;
+
+		burst = iavf_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      IAVF_VPMD_RX_MAX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < IAVF_VPMD_RX_MAX_BURST)
+			return retval;
+	}
+
+	return retval + iavf_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
+/**
+ * vPMD receive routine that reassembles scattered packets for flex RxD
+ * Notice:
+ * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
+ */
+static uint16_t
+iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
+				       struct rte_mbuf **rx_pkts,
+				       uint16_t nb_pkts)
 {
 	struct iavf_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0};
@@ -1028,6 +1049,35 @@ iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue,
 					     &split_flags[i]);
 }

+/**
+ * vPMD receive routine that reassembles scattered packets.
+ * Main receive routine that can handle arbitrary burst sizes
+ * Notice:
+ * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
+ */
+uint16_t
+iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue,
+				      struct rte_mbuf **rx_pkts,
+				      uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) {
+		uint16_t burst;
+
+		burst = iavf_recv_scattered_burst_vec_flex_rxd(rx_queue,
+						rx_pkts + retval,
+						IAVF_VPMD_RX_MAX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < IAVF_VPMD_RX_MAX_BURST)
+			return retval;
+	}
+
+	return retval + iavf_recv_scattered_burst_vec_flex_rxd(rx_queue,
+					rx_pkts + retval, nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct iavf_tx_desc *txdp,
      struct rte_mbuf *pkt, uint64_t flags)
 {