From patchwork Wed Sep 9 06:36:32 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76993
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com, qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com, helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com, haiyue.wang@intel.com, stephen@networkplumber.org, barbette@kth.se
Date: Wed, 9 Sep 2020 14:36:32 +0800
Message-Id: <20200909063636.60205-2-jia.guo@intel.com>
In-Reply-To: <20200909063636.60205-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com> <20200909063636.60205-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/5] net/iavf: fix vector rx burst for iavf

Remove the burst size limit in the vector Rx path, since it should retrieve as many received packets as possible, and make the scattered Rx path use a wrapper function to maximize the burst size. Also do some code cleanup in the vector Rx path.
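For readers skimming the series, the core of the fix is the same in every driver: a burst-size-bounded receive function plus a thin wrapper that loops it. Below is a minimal sketch of that wrapper pattern; recv_scattered_burst() and MAX_BURST are generic placeholders, not driver API.

#include <stdint.h>
#include <rte_mbuf.h>

#define MAX_BURST 32 /* placeholder for the per-driver vector burst limit */

/* placeholder for the driver's fixed-size burst function */
static uint16_t recv_scattered_burst(void *rx_queue, struct rte_mbuf **rx_pkts,
				     uint16_t nb_pkts);

static uint16_t
recv_scattered_pkts_wrapper(void *rx_queue, struct rte_mbuf **rx_pkts,
			    uint16_t nb_pkts)
{
	uint16_t retval = 0;

	/* hand out full-size chunks while more than one burst remains */
	while (nb_pkts > MAX_BURST) {
		uint16_t burst;

		burst = recv_scattered_burst(rx_queue, rx_pkts + retval,
					     MAX_BURST);
		retval += burst;
		nb_pkts -= burst;
		if (burst < MAX_BURST)	/* ring drained early, stop */
			return retval;
	}

	/* final, possibly partial chunk */
	return retval + recv_scattered_burst(rx_queue, rx_pkts + retval,
					     nb_pkts);
}

This keeps the fast vector routine bounded by its internal limits while letting callers pass arbitrary burst sizes.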
Signed-off-by: Jeff Guo --- drivers/net/iavf/iavf_rxtx.h | 1 + drivers/net/iavf/iavf_rxtx_vec_avx2.c | 78 ++++++++--------- drivers/net/iavf/iavf_rxtx_vec_sse.c | 119 ++++++++++++++++++-------- 3 files changed, 121 insertions(+), 77 deletions(-) diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index 59625a979..f71f9fbdb 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -21,6 +21,7 @@ #define IAVF_VPMD_TX_MAX_BURST 32 #define IAVF_RXQ_REARM_THRESH 32 #define IAVF_VPMD_DESCS_PER_LOOP 4 +#define IAVF_VPMD_DESCS_PER_LOOP_AVX 8 #define IAVF_VPMD_TX_MAX_FREE_BUF 64 #define IAVF_NO_VECTOR_FLAGS ( \ diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index e5e0fd309..9816adbaa 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -29,7 +29,7 @@ iavf_rxq_rearm(struct iavf_rx_queue *rxq) __m128i dma_addr0; dma_addr0 = _mm_setzero_si128(); - for (i = 0; i < IAVF_VPMD_DESCS_PER_LOOP; i++) { + for (i = 0; i < IAVF_VPMD_DESCS_PER_LOOP_AVX; i++) { rxp[i] = &rxq->fake_mbuf; _mm_store_si128((__m128i *)&rxdp[i].read, dma_addr0); @@ -134,13 +134,19 @@ iavf_rxq_rearm(struct iavf_rx_queue *rxq) #define PKTLEN_SHIFT 10 +/** + * vPMD raw receive routine, + * only accept(nb_pkts >= IAVF_VPMD_DESCS_PER_LOOP_AVX) + * + * Notice: + * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP_AVX, just return no packet + * - floor align nb_pkts to a IAVF_VPMD_DESCS_PER_LOOP_AVX power-of-two + */ static inline uint16_t _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_packet) { -#define IAVF_DESCS_PER_LOOP_AVX 8 - /* const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; */ const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl; @@ -153,8 +159,8 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq, rte_prefetch0(rxdp); - /* nb_pkts has to be floor-aligned to IAVF_DESCS_PER_LOOP_AVX */ - nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_DESCS_PER_LOOP_AVX); + /* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP_AVX */ + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP_AVX); /* See if we need to rearm the RX queue - gives the prefetch a bit * of time to act @@ -297,8 +303,8 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq, uint16_t i, received; for (i = 0, received = 0; i < nb_pkts; - i += IAVF_DESCS_PER_LOOP_AVX, - rxdp += IAVF_DESCS_PER_LOOP_AVX) { + i += IAVF_VPMD_DESCS_PER_LOOP_AVX, + rxdp += IAVF_VPMD_DESCS_PER_LOOP_AVX) { /* step 1, copy over 8 mbuf pointers to rx_pkts array */ _mm256_storeu_si256((void *)&rx_pkts[i], _mm256_loadu_si256((void *)&sw_ring[i])); @@ -368,7 +374,7 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq, if (split_packet) { int j; - for (j = 0; j < IAVF_DESCS_PER_LOOP_AVX; j++) + for (j = 0; j < IAVF_VPMD_DESCS_PER_LOOP_AVX; j++) rte_mbuf_prefetch_part2(rx_pkts[i + j]); } @@ -583,7 +589,7 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq, split_bits = _mm_shuffle_epi8(split_bits, eop_shuffle); *(uint64_t *)split_packet = _mm_cvtsi128_si64(split_bits); - split_packet += IAVF_DESCS_PER_LOOP_AVX; + split_packet += IAVF_VPMD_DESCS_PER_LOOP_AVX; } /* perform dd_check */ @@ -599,7 +605,7 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq, (_mm_cvtsi128_si64 (_mm256_castsi256_si128(status0_7))); received += burst; - if (burst != IAVF_DESCS_PER_LOOP_AVX) + if (burst != IAVF_VPMD_DESCS_PER_LOOP_AVX) break; } @@ -633,13 +639,19 @@
flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7) return fdir_flags; } +/** + * vPMD raw receive routine for flex RxD, + * only accept(nb_pkts >= IAVF_VPMD_DESCS_PER_LOOP_AVX) + * + * Notice: + * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP_AVX, just return no packet + * - floor align nb_pkts to a IAVF_VPMD_DESCS_PER_LOOP_AVX power-of-two + */ static inline uint16_t _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_packet) { -#define IAVF_DESCS_PER_LOOP_AVX 8 - const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl; const __m256i mbuf_init = _mm256_set_epi64x(0, 0, @@ -650,8 +662,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq, rte_prefetch0(rxdp); - /* nb_pkts has to be floor-aligned to IAVF_DESCS_PER_LOOP_AVX */ - nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_DESCS_PER_LOOP_AVX); + /* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP_AVX */ + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP_AVX); /* See if we need to rearm the RX queue - gives the prefetch a bit * of time to act @@ -794,8 +806,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq, uint16_t i, received; for (i = 0, received = 0; i < nb_pkts; - i += IAVF_DESCS_PER_LOOP_AVX, - rxdp += IAVF_DESCS_PER_LOOP_AVX) { + i += IAVF_VPMD_DESCS_PER_LOOP_AVX, + rxdp += IAVF_VPMD_DESCS_PER_LOOP_AVX) { /* step 1, copy over 8 mbuf pointers to rx_pkts array */ _mm256_storeu_si256((void *)&rx_pkts[i], _mm256_loadu_si256((void *)&sw_ring[i])); @@ -851,7 +863,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq, if (split_packet) { int j; - for (j = 0; j < IAVF_DESCS_PER_LOOP_AVX; j++) + for (j = 0; j < IAVF_VPMD_DESCS_PER_LOOP_AVX; j++) rte_mbuf_prefetch_part2(rx_pkts[i + j]); } @@ -1193,7 +1205,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq, split_bits = _mm_shuffle_epi8(split_bits, eop_shuffle); *(uint64_t *)split_packet = _mm_cvtsi128_si64(split_bits); - split_packet += IAVF_DESCS_PER_LOOP_AVX; + split_packet += IAVF_VPMD_DESCS_PER_LOOP_AVX; } /* perform dd_check */ @@ -1209,7 +1221,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq, (_mm_cvtsi128_si64 (_mm256_castsi256_si128(status0_7))); received += burst; - if (burst != IAVF_DESCS_PER_LOOP_AVX) + if (burst != IAVF_VPMD_DESCS_PER_LOOP_AVX) break; } @@ -1224,10 +1236,6 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq, return received; } -/** - * Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet - */ uint16_t iavf_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -1235,10 +1243,6 @@ iavf_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, return _iavf_recv_raw_pkts_vec_avx2(rx_queue, rx_pkts, nb_pkts, NULL); } -/** - * Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet - */ uint16_t iavf_recv_pkts_vec_avx2_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -1249,8 +1253,6 @@ iavf_recv_pkts_vec_avx2_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, /** * vPMD receive routine that reassembles single burst of 32 scattered packets - * Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet */ static uint16_t iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, @@ -1259,6 +1261,9 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, struct iavf_rx_queue *rxq = rx_queue; uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0}; + /* split_flags only can
support max of IAVF_VPMD_RX_MAX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _iavf_recv_raw_pkts_vec_avx2(rxq, rx_pkts, nb_pkts, split_flags); @@ -1290,9 +1295,6 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, /** * vPMD receive routine that reassembles scattered packets. - * Main receive routine that can handle arbitrary burst sizes - * Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet */ uint16_t iavf_recv_scattered_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, @@ -1313,10 +1315,8 @@ iavf_recv_scattered_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, } /** - * vPMD receive routine that reassembles single burst of - * 32 scattered packets for flex RxD - * Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet + * vPMD receive routine that reassembles single burst of 32 scattered packets + * for flex RxD */ static uint16_t iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue, @@ -1326,6 +1326,9 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue, struct iavf_rx_queue *rxq = rx_queue; uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0}; + /* split_flags only can support max of IAVF_VPMD_RX_MAX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _iavf_recv_raw_pkts_vec_avx2_flex_rxd(rxq, rx_pkts, nb_pkts, split_flags); @@ -1357,9 +1360,6 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue, /** * vPMD receive routine that reassembles scattered packets for flex RxD. - * Main receive routine that can handle arbitrary burst sizes - * Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet */ uint16_t iavf_recv_scattered_pkts_vec_avx2_flex_rxd(void *rx_queue, diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index 85c5bd4af..b5362ecf3 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -379,10 +379,12 @@ flex_desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts, rx_pkts[3]->packet_type = type_table[_mm_extract_epi16(ptype_all, 7)]; } -/* Notice: +/** + * vPMD raw receive routine, only accept(nb_pkts >= IAVF_VPMD_DESCS_PER_LOOP) + * + * Notice: * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet - * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST - * numbers of DD bits + * - floor align nb_pkts to a IAVF_VPMD_DESCS_PER_LOOP power-of-two */ static inline uint16_t _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, @@ -413,9 +415,6 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); __m128i dd_check, eop_check; - /* nb_pkts shall be less equal than IAVF_VPMD_RX_MAX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST); - /* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP); @@ -627,10 +626,13 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, return nb_pkts_recd; } -/* Notice: +/** + * vPMD raw receive routine for flex RxD, + * only accept(nb_pkts >= IAVF_VPMD_DESCS_PER_LOOP) + * + * Notice: * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet - * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST - * numbers of DD bits + * - floor align nb_pkts to a IAVF_VPMD_DESCS_PER_LOOP power-of-two */ static inline uint16_t 
_recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq, @@ -688,9 +690,6 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq, const __m128i eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL); - /* nb_pkts shall be less equal than IAVF_VPMD_RX_MAX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST); - /* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP); @@ -921,11 +920,6 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq, return nb_pkts_recd; } -/* Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet - * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST - * numbers of DD bits - */ uint16_t iavf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -933,11 +927,6 @@ iavf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL); } -/* Notice: - * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet - * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST - * numbers of DD bits - */ uint16_t iavf_recv_pkts_vec_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -945,20 +934,20 @@ iavf_recv_pkts_vec_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts, return _recv_raw_pkts_vec_flex_rxd(rx_queue, rx_pkts, nb_pkts, NULL); } -/* vPMD receive routine that reassembles scattered packets - * Notice: - * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet - * - nb_pkts > VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST - * numbers of DD bits +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets */ -uint16_t -iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +static uint16_t +iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct iavf_rx_queue *rxq = rx_queue; uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0}; unsigned int i = 0; + /* split_flags only can support max of IAVF_VPMD_RX_MAX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -986,21 +975,48 @@ iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } -/* vPMD receive routine that reassembles scattered packets for flex RxD - * Notice: - * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet - * - nb_pkts > VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST - * numbers of DD bits +/** + * vPMD receive routine that reassembles scattered packets. 
*/ uint16_t -iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue, - struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) { + uint16_t burst; + + burst = iavf_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + IAVF_VPMD_RX_MAX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < IAVF_VPMD_RX_MAX_BURST) + return retval; + } + + return retval + iavf_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets + * for flex RxD + */ +static uint16_t +iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct iavf_rx_queue *rxq = rx_queue; uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0}; unsigned int i = 0; + /* split_flags only can support max of IAVF_VPMD_RX_MAX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec_flex_rxd(rxq, rx_pkts, nb_pkts, split_flags); @@ -1028,6 +1044,33 @@ iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets for flex RxD + */ +uint16_t +iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) { + uint16_t burst; + + burst = iavf_recv_scattered_burst_vec_flex_rxd(rx_queue, + rx_pkts + retval, + IAVF_VPMD_RX_MAX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < IAVF_VPMD_RX_MAX_BURST) + return retval; + } + + return retval + iavf_recv_scattered_burst_vec_flex_rxd(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static inline void vtx1(volatile struct iavf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags) {

From patchwork Wed Sep 9 06:36:33 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76994
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com, qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com, helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com, haiyue.wang@intel.com, stephen@networkplumber.org, barbette@kth.se
Date: Wed, 9 Sep 2020 14:36:33 +0800
Message-Id: <20200909063636.60205-3-jia.guo@intel.com>
In-Reply-To: <20200909063636.60205-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com> <20200909063636.60205-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/5] net/ixgbe: fix vector rx burst for ixgbe

Remove the burst size limit in the vector Rx path, since it should retrieve as many received packets as possible, and make the scattered Rx path use a wrapper function to maximize the burst size. Also do some code cleanup in the vector Rx path.

Signed-off-by: Jeff Guo Tested-by: Feifei Wang --- drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 77 +++++++++++++------------ drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 61 +++++++++++--------- 2 files changed, 76 insertions(+), 62 deletions(-) diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index aa27ee177..7692c5d59 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -130,17 +130,6 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2, rx_pkts[3]->ol_flags = vol.e[3]; } -/* - * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP) - * - * Notice: - * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST - * numbers of DD bit - * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two - * - don't support ol_flags for rss and csum err - */ - #define IXGBE_VPMD_DESC_EOP_MASK 0x02020202 #define IXGBE_UINT8_BIT (CHAR_BIT * sizeof(uint8_t)) @@ -206,6 +195,13 @@ desc_to_ptype_v(uint64x2_t descs[4], uint16_t pkt_type_mask, vgetq_lane_u32(tunnel_check, 3)); } +/** + * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP) + * + * Notice: + * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet + * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two + */ static inline uint16_t _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_packet) @@ -226,9 +222,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16x8_t crc_adjust = {0, 0, rxq->crc_len, 0, rxq->crc_len, 0, 0, 0}; - /* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST); - /* nb_pkts has to be floor-aligned to RTE_IXGBE_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_IXGBE_DESCS_PER_LOOP); @@ -382,16 +375,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts, return nb_pkts_recd; } -/* - * vPMD receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP) - * - * Notice: - * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST - * numbers of DD bit - * - floor align nb_pkts
to a RTE_IXGBE_DESC_PER_LOOP power-of-two - * - don't support ol_flags for rss and csum err - */ uint16_t ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -399,23 +382,19 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL); } -/* - * vPMD receive routine that reassembles scattered packets - * - * Notice: - * - don't support ol_flags for rss and csum err - * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST - * numbers of DD bit - * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets */ -uint16_t -ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +static uint16_t +ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct ixgbe_rx_queue *rxq = rx_queue; uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0}; + /* split_flags only can support max of RTE_IXGBE_MAX_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -443,6 +422,32 @@ ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets. + */ +uint16_t +ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > RTE_IXGBE_MAX_RX_BURST) { + uint16_t burst; + + burst = ixgbe_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + RTE_IXGBE_MAX_RX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < RTE_IXGBE_MAX_RX_BURST) + return retval; + } + + return retval + ixgbe_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static inline void vtx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags) diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index 517ca3166..cf54ff128 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -302,13 +302,11 @@ desc_to_ptype_v(__m128i descs[4], uint16_t pkt_type_mask, get_packet_type(3, pkt_info, etqf_check, tunnel_check); } -/* +/** * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP) * * Notice: * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST - * numbers of DD bit * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two */ static inline uint16_t @@ -344,9 +342,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts, __m128i mbuf_init; uint8_t vlan_flags; - /* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST); - /* nb_pkts has to be floor-aligned to RTE_IXGBE_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_IXGBE_DESCS_PER_LOOP); @@ -556,15 +551,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts, return nb_pkts_recd; } -/* - * vPMD receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP) - * - * Notice: - * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST - * numbers of DD bit - * - 
floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two - */ uint16_t ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -572,22 +558,19 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL); } -/* - * vPMD receive routine that reassembles scattered packets - * - * Notice: - * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST - * numbers of DD bit - * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets */ -uint16_t -ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +static uint16_t +ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct ixgbe_rx_queue *rxq = rx_queue; uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0}; + /* split_flags only can support max of RTE_IXGBE_MAX_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -615,6 +598,32 @@ ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets. + */ +uint16_t +ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > RTE_IXGBE_MAX_RX_BURST) { + uint16_t burst; + + burst = ixgbe_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + RTE_IXGBE_MAX_RX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < RTE_IXGBE_MAX_RX_BURST) + return retval; + } + + return retval + ixgbe_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static inline void vtx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)

From patchwork Wed Sep 9 06:36:34 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76995
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com, qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com, helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com, haiyue.wang@intel.com, stephen@networkplumber.org, barbette@kth.se
Date: Wed, 9 Sep 2020 14:36:34 +0800
Message-Id: <20200909063636.60205-4-jia.guo@intel.com>
In-Reply-To: <20200909063636.60205-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com> <20200909063636.60205-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v3 3/5] net/i40e: fix vector rx burst for i40e

Remove the burst size limit in the vector Rx path, since it should retrieve as many received packets as possible, and make the scattered Rx path use a wrapper function to maximize the burst size. Also do some code cleanup in the vector Rx path.

Signed-off-by: Jeff Guo --- drivers/net/i40e/i40e_rxtx.h | 1 + drivers/net/i40e/i40e_rxtx_vec_altivec.c | 64 ++++++++++++++-------- drivers/net/i40e/i40e_rxtx_vec_avx2.c | 29 ++++++----- drivers/net/i40e/i40e_rxtx_vec_neon.c | 58 +++++++++++++-------- drivers/net/i40e/i40e_rxtx_vec_sse.c | 58 +++++++++++++-------- 5 files changed, 133 insertions(+), 77 deletions(-) diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index 57d7b4160..01d4609f9 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ -14,6 +14,7 @@ #define RTE_I40E_MAX_RX_BURST RTE_I40E_RXQ_REARM_THRESH #define RTE_I40E_TX_MAX_FREE_BUF_SZ 64 #define RTE_I40E_DESCS_PER_LOOP 4 +#define RTE_I40E_DESCS_PER_LOOP_AVX 8 #define I40E_RXBUF_SZ_1024 1024 #define I40E_RXBUF_SZ_2048 2048 diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c index 6862a017e..345c63aa7 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c +++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c @@ -188,11 +188,13 @@ desc_to_ptype_v(vector unsigned long descs[4], struct rte_mbuf **rx_pkts, ptype_tbl[(*(vector unsigned char *)&ptype1)[8]]; } - /* Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits - */ +/** + * vPMD raw receive routine, only accept(nb_pkts >= RTE_I40E_DESCS_PER_LOOP) + * + * Notice: + * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet + * - floor align nb_pkts to a RTE_I40E_DESCS_PER_LOOP power-of-two + */ static inline uint16_t _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_packet) @@ -214,9 +216,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, }; vector unsigned long dd_check, eop_check; - /* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST); - /* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP); @@ -447,11 +446,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, return nb_pkts_recd; } - /* Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, 
only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits - */ uint16_t i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -459,19 +453,19 @@ i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL); } - /* vPMD receive routine that reassembles scattered packets - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits - */ -uint16_t -i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets + */ +static uint16_t +i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct i40e_rx_queue *rxq = rx_queue; uint8_t split_flags[RTE_I40E_VPMD_RX_BURST] = {0}; + /* split_flags only can support max of RTE_I40E_VPMD_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_VPMD_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -500,6 +494,32 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets. + */ +uint16_t +i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > RTE_I40E_VPMD_RX_BURST) { + uint16_t burst; + + burst = i40e_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + RTE_I40E_VPMD_RX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < RTE_I40E_VPMD_RX_BURST) + return retval; + } + + return retval + i40e_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static inline void vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags) diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c index 3bcef1363..b5e6867d0 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c @@ -36,7 +36,7 @@ i40e_rxq_rearm(struct i40e_rx_queue *rxq) rxq->nb_rx_desc) { __m128i dma_addr0; dma_addr0 = _mm_setzero_si128(); - for (i = 0; i < RTE_I40E_DESCS_PER_LOOP; i++) { + for (i = 0; i < RTE_I40E_DESCS_PER_LOOP_AVX; i++) { rxep[i].mbuf = &rxq->fake_mbuf; _mm_store_si128((__m128i *)&rxdp[i].read, dma_addr0); @@ -219,13 +219,18 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp, #define PKTLEN_SHIFT 10 -/* Force inline as some compilers will not inline by default. 
*/ +/** + * vPMD raw receive routine, only accept(nb_pkts >= RTE_I40E_DESCS_PER_LOOP_AVX) + * + * Notice: + * - nb_pkts < RTE_I40E_DESCS_PER_LOOP_AVX, just return no packet + * - floor align nb_pkts to a RTE_I40E_DESCS_PER_LOOP_AVX power-of-two + * - force inline as some compilers will not inline by default + */ static __rte_always_inline uint16_t _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_packet) { -#define RTE_I40E_DESCS_PER_LOOP_AVX 8 - const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0, rxq->mbuf_initializer); @@ -729,10 +734,6 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, return received; } -/* - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - */ uint16_t i40e_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -740,10 +741,8 @@ i40e_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, return _recv_raw_pkts_vec_avx2(rx_queue, rx_pkts, nb_pkts, NULL); } -/* +/** * vPMD receive routine that reassembles single burst of 32 scattered packets - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet */ static uint16_t i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, @@ -752,6 +751,9 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, struct i40e_rx_queue *rxq = rx_queue; uint8_t split_flags[RTE_I40E_VPMD_RX_BURST] = {0}; + /* split_flags only can support max of RTE_I40E_VPMD_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_VPMD_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec_avx2(rxq, rx_pkts, nb_pkts, split_flags); @@ -781,11 +783,8 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } -/* +/** * vPMD receive routine that reassembles scattered packets. 
- * Main receive routine that can handle arbitrary burst sizes - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet */ uint16_t i40e_recv_scattered_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c index 6f874e45b..143cdf4a5 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_neon.c +++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c @@ -187,11 +187,12 @@ desc_to_ptype_v(uint64x2_t descs[4], struct rte_mbuf **__rte_restrict rx_pkts, } - /* +/** + * vPMD raw receive routine, only accept(nb_pkts >= RTE_I40E_DESCS_PER_LOOP) + * * Notice: * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits + * - floor align nb_pkts to a RTE_I40E_DESCS_PER_LOOP power-of-two */ static inline uint16_t _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq, @@ -230,9 +231,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq, 0, 0, 0 /* ignore non-length fields */ }; - /* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST); - /* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP); @@ -426,12 +424,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq, return nb_pkts_recd; } - /* - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits - */ uint16_t i40e_recv_pkts_vec(void *__rte_restrict rx_queue, struct rte_mbuf **__rte_restrict rx_pkts, uint16_t nb_pkts) @@ -439,20 +431,20 @@ i40e_recv_pkts_vec(void *__rte_restrict rx_queue, return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL); } - /* vPMD receive routine that reassembles scattered packets - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets */ -uint16_t -i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +static uint16_t +i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct i40e_rx_queue *rxq = rx_queue; uint8_t split_flags[RTE_I40E_VPMD_RX_BURST] = {0}; + /* split_flags only can support max of RTE_I40E_VPMD_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_VPMD_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -482,6 +474,32 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets. 
+ */ +uint16_t +i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > RTE_I40E_VPMD_RX_BURST) { + uint16_t burst; + + burst = i40e_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + RTE_I40E_VPMD_RX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < RTE_I40E_VPMD_RX_BURST) + return retval; + } + + return retval + i40e_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static inline void vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags) diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index 698518349..605912246 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -342,11 +342,12 @@ desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts, rx_pkts[3]->packet_type = ptype_tbl[_mm_extract_epi8(ptype1, 8)]; } - /* +/** + * vPMD raw receive routine, only accept(nb_pkts >= RTE_I40E_DESCS_PER_LOOP) + * * Notice: * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits + * - floor align nb_pkts to a RTE_I40E_DESCS_PER_LOOP power-of-two */ static inline uint16_t _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, @@ -378,9 +379,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); __m128i dd_check, eop_check; - /* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST); - /* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP); @@ -592,12 +590,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts, return nb_pkts_recd; } - /* - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits - */ uint16_t i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -605,20 +597,20 @@ i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL); } - /* vPMD receive routine that reassembles scattered packets - * Notice: - * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet - * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST - * numbers of DD bits +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets */ -uint16_t -i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +static uint16_t +i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct i40e_rx_queue *rxq = rx_queue; uint8_t split_flags[RTE_I40E_VPMD_RX_BURST] = {0}; + /* split_flags only can support max of RTE_I40E_VPMD_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_VPMD_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -648,6 +640,32 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets. 
+ */ +uint16_t +i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > RTE_I40E_VPMD_RX_BURST) { + uint16_t burst; + + burst = i40e_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + RTE_I40E_VPMD_RX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < RTE_I40E_VPMD_RX_BURST) + return retval; + } + + return retval + i40e_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static inline void vtx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)

From patchwork Wed Sep 9 06:36:35 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76996
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com, qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com, helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com, haiyue.wang@intel.com, stephen@networkplumber.org, barbette@kth.se
Date: Wed, 9 Sep 2020 14:36:35 +0800
Message-Id: <20200909063636.60205-5-jia.guo@intel.com>
In-Reply-To: <20200909063636.60205-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com> <20200909063636.60205-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/5] net/ice: fix vector rx burst for ice

Remove the burst size limit in the vector Rx path, since it should retrieve as many received packets as possible, and make the scattered Rx path use a wrapper function to maximize the burst size. Also do some code cleanup in the vector Rx path.
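The pair of bounds that now govern nb_pkts can be seen in isolation: the burst function clamps to the split_flags[] capacity with RTE_MIN, and the raw vector routine rounds down to a whole number of descriptor loops with RTE_ALIGN_FLOOR. A small sketch under the assumption that the macros have their usual values in this series (32 and 4 in the SSE paths); VPMD_RX_BURST and DESCS_PER_LOOP stand in for the per-driver macros:

#include <stdint.h>
#include <rte_common.h>

#define VPMD_RX_BURST 32	/* assumed per-driver burst limit */
#define DESCS_PER_LOOP 4	/* assumed SSE vector loop stride */

static uint16_t
bound_burst(uint16_t nb_pkts)
{
	/* burst wrapper: clamp to what split_flags[] can describe */
	nb_pkts = RTE_MIN(nb_pkts, VPMD_RX_BURST);
	/* raw routine: floor-align to the vector loop stride */
	return RTE_ALIGN_FLOOR(nb_pkts, DESCS_PER_LOOP);
}
/* bound_burst(3) == 0 (below one loop, no packets processed),
 * bound_burst(30) == 28, bound_burst(100) == 32 */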
Signed-off-by: Jeff Guo Tested-by: Yingya Han --- drivers/net/ice/ice_rxtx.h | 1 + drivers/net/ice/ice_rxtx_vec_avx2.c | 23 ++++++------ drivers/net/ice/ice_rxtx_vec_sse.c | 56 +++++++++++++++++++---------- 3 files changed, 49 insertions(+), 31 deletions(-) diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index 2fdcfb7d0..3ef5f300d 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -35,6 +35,7 @@ #define ICE_MAX_RX_BURST ICE_RXQ_REARM_THRESH #define ICE_TX_MAX_FREE_BUF_SZ 64 #define ICE_DESCS_PER_LOOP 4 +#define ICE_DESCS_PER_LOOP_AVX 8 #define ICE_FDIR_PKT_LEN 512 diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c index be50677c2..843e4f32a 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx2.c +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c @@ -29,7 +29,7 @@ ice_rxq_rearm(struct ice_rx_queue *rxq) __m128i dma_addr0; dma_addr0 = _mm_setzero_si128(); - for (i = 0; i < ICE_DESCS_PER_LOOP; i++) { + for (i = 0; i < ICE_DESCS_PER_LOOP_AVX; i++) { rxep[i].mbuf = &rxq->fake_mbuf; _mm_store_si128((__m128i *)&rxdp[i].read, dma_addr0); @@ -132,12 +132,17 @@ ice_rxq_rearm(struct ice_rx_queue *rxq) ICE_PCI_REG_WRITE(rxq->qrx_tail, rx_id); } +/** + * vPMD raw receive routine, only accept(nb_pkts >= ICE_DESCS_PER_LOOP_AVX) + * + * Notice: + * - nb_pkts < ICE_DESCS_PER_LOOP_AVX, just return no packet + * - floor align nb_pkts to a ICE_DESCS_PER_LOOP_AVX power-of-two + */ static inline uint16_t _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_packet) { -#define ICE_DESCS_PER_LOOP_AVX 8 - const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0, rxq->mbuf_initializer); @@ -603,10 +608,6 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts, return received; } -/** - * Notice: - * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet - */ uint16_t ice_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -616,8 +617,6 @@ ice_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, /** * vPMD receive routine that reassembles single burst of 32 scattered packets - * Notice: - * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet */ static uint16_t ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, @@ -626,6 +625,9 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, struct ice_rx_queue *rxq = rx_queue; uint8_t split_flags[ICE_VPMD_RX_BURST] = {0}; + /* split_flags only can support max of ICE_VPMD_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, ICE_VPMD_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _ice_recv_raw_pkts_vec_avx2(rxq, rx_pkts, nb_pkts, split_flags); @@ -657,9 +659,6 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, /** * vPMD receive routine that reassembles scattered packets.
- * Main receive routine that can handle arbitrary burst sizes - * Notice: - * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet */ uint16_t ice_recv_scattered_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index 382ef31f3..c03e24092 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -205,10 +205,11 @@ ice_rx_desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts, } /** + * vPMD raw receive routine, only accept(nb_pkts >= ICE_DESCS_PER_LOOP) + * * Notice: * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST - * numbers of DD bits + * - floor align nb_pkts to a ICE_DESCS_PER_LOOP power-of-two */ static inline uint16_t _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts, @@ -264,9 +265,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts, const __m128i eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL); - /* nb_pkts shall be less equal than ICE_MAX_RX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, ICE_MAX_RX_BURST); - /* nb_pkts has to be floor-aligned to ICE_DESCS_PER_LOOP */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, ICE_DESCS_PER_LOOP); @@ -441,12 +439,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts, return nb_pkts_recd; } -/** - * Notice: - * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST - * numbers of DD bits - */ uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -454,19 +446,19 @@ ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return _ice_recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL); } -/* vPMD receive routine that reassembles scattered packets - * Notice: - * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet - * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST - * numbers of DD bits +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets */ -uint16_t -ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +static uint16_t +ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct ice_rx_queue *rxq = rx_queue; uint8_t split_flags[ICE_VPMD_RX_BURST] = {0}; + /* split_flags only can support max of ICE_VPMD_RX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, ICE_VPMD_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = _ice_recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -496,6 +488,32 @@ ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets. 
+ */ +uint16_t +ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > ICE_VPMD_RX_BURST) { + uint16_t burst; + + burst = ice_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + ICE_VPMD_RX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < ICE_VPMD_RX_BURST) + return retval; + } + + return retval + ice_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static inline void ice_vtx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)

From patchwork Wed Sep 9 06:36:36 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 76997
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: qiming.yang@intel.com, beilei.xing@intel.com, wei.zhao1@intel.com, qi.z.zhang@intel.com, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, dev@dpdk.org, jia.guo@intel.com, helin.zhang@intel.com, mb@smartsharesystems.com, ferruh.yigit@intel.com, haiyue.wang@intel.com, stephen@networkplumber.org, barbette@kth.se
Date: Wed, 9 Sep 2020 14:36:36 +0800
Message-Id: <20200909063636.60205-6-jia.guo@intel.com>
In-Reply-To: <20200909063636.60205-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com> <20200909063636.60205-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v3 5/5] net/fm10k: fix vector rx burst for fm10k

Make the scattered Rx path use a wrapper function to maximize the burst size, and do some code cleanup in the vector Rx path.
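The reassembly step that these burst functions guard is only entered when some descriptor in the burst carried a split (multi-segment) packet. An illustrative check of how the split_flags[] array filled by the raw routine drives that decision; this is a readable loop form, not the drivers' exact (word-compare) code:

#include <stdint.h>

/* return nonzero if any packet in this burst was split across buffers */
static int
burst_has_split_packets(const uint8_t *split_flags, uint16_t nb_bufs)
{
	uint16_t i;

	for (i = 0; i < nb_bufs; i++)
		if (split_flags[i])
			return 1;
	return 0;
}

When this is zero and no partial packet is pending from a previous burst, the burst function can return the received packets directly and skip the slower reassembly path.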
Signed-off-by: Jeff Guo --- drivers/net/fm10k/fm10k_rxtx_vec.c | 42 +++++++++++++++++++++++------- 1 file changed, 33 insertions(+), 9 deletions(-) diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c index eff3933b5..3b25c570b 100644 --- a/drivers/net/fm10k/fm10k_rxtx_vec.c +++ b/drivers/net/fm10k/fm10k_rxtx_vec.c @@ -645,25 +645,23 @@ fm10k_reassemble_packets(struct fm10k_rx_queue *rxq, return pkt_idx; } -/* - * vPMD receive routine that reassembles scattered packets +/** + * vPMD receive routine that reassembles single burst of 32 scattered packets * * Notice: * - don't support ol_flags for rss and csum err - * - nb_pkts > RTE_FM10K_MAX_RX_BURST, only scan RTE_FM10K_MAX_RX_BURST - * numbers of DD bit */ -uint16_t -fm10k_recv_scattered_pkts_vec(void *rx_queue, - struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +static uint16_t +fm10k_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct fm10k_rx_queue *rxq = rx_queue; uint8_t split_flags[RTE_FM10K_MAX_RX_BURST] = {0}; unsigned i = 0; - /* Split_flags only can support max of RTE_FM10K_MAX_RX_BURST */ + /* split_flags only can support max of RTE_FM10K_MAX_RX_BURST */ nb_pkts = RTE_MIN(nb_pkts, RTE_FM10K_MAX_RX_BURST); + /* get some new buffers */ uint16_t nb_bufs = fm10k_recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts, split_flags); @@ -691,6 +689,32 @@ fm10k_recv_scattered_pkts_vec(void *rx_queue, &split_flags[i]); } +/** + * vPMD receive routine that reassembles scattered packets. + */ +uint16_t +fm10k_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t retval = 0; + + while (nb_pkts > RTE_FM10K_MAX_RX_BURST) { + uint16_t burst; + + burst = fm10k_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + RTE_FM10K_MAX_RX_BURST); + retval += burst; + nb_pkts -= burst; + if (burst < RTE_FM10K_MAX_RX_BURST) + return retval; + } + + return retval + fm10k_recv_scattered_burst_vec(rx_queue, + rx_pkts + retval, + nb_pkts); +} + static const struct fm10k_txq_ops vec_txq_ops = { .reset = fm10k_reset_tx_queue, };
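From an application's point of view, the user-visible effect of the series is that the scattered vector Rx paths now honor arbitrary burst sizes. A hypothetical usage sketch (port/queue setup assumed done elsewhere):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
poll_queue(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[128];
	uint16_t nb_rx, i;

	/* ask for more than the 32-packet vector limit in one call; the
	 * PMD's wrapper now splits this into full bursts internally */
	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 128);

	for (i = 0; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);	/* placeholder processing */
}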