From patchwork Fri Dec 22 21:56:59 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 135548
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Damodharam Ammepalli
Subject: [PATCH v2 18/18] net/bnxt: enable SSE mode for compressed CQE
Date: Fri, 22 Dec 2023 13:56:59 -0800
Message-Id: <20231222215659.64993-19-ajit.khaparde@broadcom.com>
In-Reply-To: <20231222215659.64993-1-ajit.khaparde@broadcom.com>
References: <20231222215659.64993-1-ajit.khaparde@broadcom.com>

P7 device family supports 16 byte Rx completions.
Enable SSE vector mode for compressed Rx CQE processing.

Signed-off-by: Ajit Khaparde
Reviewed-by: Damodharam Ammepalli
---
 drivers/net/bnxt/bnxt_ethdev.c       |  16 ++-
 drivers/net/bnxt/bnxt_rxr.h          |   2 +
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 167 +++++++++++++++++++++++++--
 3 files changed, 173 insertions(+), 12 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index bd8c7557dd..f9cd234bb6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1377,7 +1377,8 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
          * asynchronous completions and receive completions can be placed in
          * the same completion ring.
          */
-        if (BNXT_TRUFLOW_EN(bp) || !BNXT_NUM_ASYNC_CPR(bp))
+        if ((BNXT_TRUFLOW_EN(bp) && !BNXT_CHIP_P7(bp)) ||
+            !BNXT_NUM_ASYNC_CPR(bp))
                 goto use_scalar_rx;
 
         /*
@@ -1410,12 +1411,19 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
                         return bnxt_crx_pkts_vec_avx2;
                 return bnxt_recv_pkts_vec_avx2;
         }
- #endif
+#endif
         if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
                 PMD_DRV_LOG(INFO,
                             "Using SSE vector mode receive for port %d\n",
                             eth_dev->data->port_id);
                 bp->flags |= BNXT_FLAG_RX_VECTOR_PKT_MODE;
+                if (bnxt_compressed_rx_cqe_mode_enabled(bp)) {
+#if defined(RTE_ARCH_ARM64)
+                        goto use_scalar_rx;
+#else
+                        return bnxt_crx_pkts_vec;
+#endif
+                }
                 return bnxt_recv_pkts_vec;
         }
 
@@ -1445,7 +1453,8 @@ bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
          */
         if (eth_dev->data->scattered_rx ||
             (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
-            BNXT_TRUFLOW_EN(bp) || bp->ieee_1588)
+            (BNXT_TRUFLOW_EN(bp) && !BNXT_CHIP_P7(bp)) ||
+            bp->ieee_1588)
                 goto use_scalar_tx;
 
 #if defined(RTE_ARCH_X86)
@@ -3125,6 +3134,7 @@ static const struct {
 } bnxt_rx_burst_info[] = {
         {bnxt_recv_pkts,        "Scalar"},
 #if defined(RTE_ARCH_X86)
+        {bnxt_crx_pkts_vec,     "Vector SSE"},
         {bnxt_recv_pkts_vec,    "Vector SSE"},
 #endif
 #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index a474a69ae3..d36cbded1d 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -156,6 +156,8 @@ int bnxt_flush_rx_cmp(struct bnxt_cp_ring_info *cpr);
 #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 uint16_t bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
                             uint16_t nb_pkts);
+uint16_t bnxt_crx_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+                           uint16_t nb_pkts);
 int bnxt_rxq_vec_setup(struct bnxt_rx_queue *rxq);
 #endif
 
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index e99a547f58..c04b33a382 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -54,15 +54,9 @@
 
 static inline void
 descs_to_mbufs(__m128i mm_rxcmp[4], __m128i mm_rxcmp1[4],
-               __m128i mbuf_init, struct rte_mbuf **mbuf,
-               struct bnxt_rx_ring_info *rxr)
+               __m128i mbuf_init, const __m128i shuf_msk,
+               struct rte_mbuf **mbuf, struct bnxt_rx_ring_info *rxr)
 {
-        const __m128i shuf_msk =
-                _mm_set_epi8(15, 14, 13, 12,          /* rss */
-                             0xFF, 0xFF,              /* vlan_tci (zeroes) */
-                             3, 2,                    /* data_len */
-                             0xFF, 0xFF, 3, 2,        /* pkt_len */
-                             0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */
         const __m128i flags_type_mask =
                 _mm_set1_epi32(RX_PKT_CMPL_FLAGS_ITYPE_MASK);
         const __m128i flags2_mask1 =
@@ -166,6 +160,12 @@ recv_burst_vec_sse(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
         int nb_rx_pkts = 0;
         const __m128i valid_target =
                 _mm_set1_epi32(!!(raw_cons & cp_ring_size));
+        const __m128i shuf_msk =
+                _mm_set_epi8(15, 14, 13, 12,          /* rss */
+                             0xFF, 0xFF,              /* vlan_tci (zeroes) */
+                             3, 2,                    /* data_len */
+                             0xFF, 0xFF, 3, 2,        /* pkt_len */
+                             0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */
         int i;
 
         /* If Rx Q was stopped return */
         if (unlikely(!rxq->rx_started))
                 return 0;
 
@@ -264,7 +264,7 @@ recv_burst_vec_sse(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
                 if (num_valid == 0)
                         break;
 
-                descs_to_mbufs(rxcmp, rxcmp1, mbuf_init, &rx_pkts[nb_rx_pkts],
+                descs_to_mbufs(rxcmp, rxcmp1, mbuf_init, shuf_msk, &rx_pkts[nb_rx_pkts],
                                rxr);
                 nb_rx_pkts += num_valid;
 
@@ -283,6 +283,134 @@ recv_burst_vec_sse(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
         return nb_rx_pkts;
 }
 
+static uint16_t
+crx_burst_vec_sse(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+        struct bnxt_rx_queue *rxq = rx_queue;
+        const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
+        struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+        struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+        uint16_t cp_ring_size = cpr->cp_ring_struct->ring_size;
+        uint16_t rx_ring_size = rxr->rx_ring_struct->ring_size;
+        struct cmpl_base *cp_desc_ring = cpr->cp_desc_ring;
+        uint64_t valid, desc_valid_mask = ~0ULL;
+        const __m128i info3_v_mask = _mm_set1_epi32(CMPL_BASE_V);
+        uint32_t raw_cons = cpr->cp_raw_cons;
+        uint32_t cons, mbcons;
+        int nb_rx_pkts = 0;
+        const __m128i valid_target =
+                _mm_set1_epi32(!!(raw_cons & cp_ring_size));
+        const __m128i shuf_msk =
+                _mm_set_epi8(7, 6, 5, 4,              /* rss */
+                             0xFF, 0xFF,              /* vlan_tci (zeroes) */
+                             3, 2,                    /* data_len */
+                             0xFF, 0xFF, 3, 2,        /* pkt_len */
+                             0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */
+        int i;
+
+        /* If Rx Q was stopped return */
+        if (unlikely(!rxq->rx_started))
+                return 0;
+
+        if (rxq->rxrearm_nb >= rxq->rx_free_thresh)
+                bnxt_rxq_rearm(rxq, rxr);
+
+        cons = raw_cons & (cp_ring_size - 1);
+        mbcons = raw_cons & (rx_ring_size - 1);
+
+        /* Prefetch first four descriptor pairs. */
+        rte_prefetch0(&cp_desc_ring[cons]);
+
+        /* Ensure that we do not go past the ends of the rings. */
+        nb_pkts = RTE_MIN(nb_pkts, RTE_MIN(rx_ring_size - mbcons,
+                                           cp_ring_size - cons));
+        /*
+         * If we are at the end of the ring, ensure that descriptors after the
+         * last valid entry are not treated as valid. Otherwise, force the
+         * maximum number of packets to receive to be a multiple of the per-
+         * loop count.
+         */
+        if (nb_pkts < BNXT_RX_DESCS_PER_LOOP_VEC128) {
+                desc_valid_mask >>=
+                        16 * (BNXT_RX_DESCS_PER_LOOP_VEC128 - nb_pkts);
+        } else {
+                nb_pkts =
+                        RTE_ALIGN_FLOOR(nb_pkts, BNXT_RX_DESCS_PER_LOOP_VEC128);
+        }
+
+        /* Handle RX burst request */
+        for (i = 0; i < nb_pkts; i += BNXT_RX_DESCS_PER_LOOP_VEC128,
+                                  cons += BNXT_RX_DESCS_PER_LOOP_VEC128,
+                                  mbcons += BNXT_RX_DESCS_PER_LOOP_VEC128) {
+                __m128i rxcmp1[BNXT_RX_DESCS_PER_LOOP_VEC128];
+                __m128i rxcmp[BNXT_RX_DESCS_PER_LOOP_VEC128];
+                __m128i tmp0, tmp1, info3_v;
+                uint32_t num_valid;
+
+                /* Copy four mbuf pointers to output array. */
+                tmp0 = _mm_loadu_si128((void *)&rxr->rx_buf_ring[mbcons]);
+#ifdef RTE_ARCH_X86_64
+                tmp1 = _mm_loadu_si128((void *)&rxr->rx_buf_ring[mbcons + 2]);
+#endif
+                _mm_storeu_si128((void *)&rx_pkts[i], tmp0);
+#ifdef RTE_ARCH_X86_64
+                _mm_storeu_si128((void *)&rx_pkts[i + 2], tmp1);
+#endif
+
+                /* Prefetch four descriptor pairs for next iteration. */
+                if (i + BNXT_RX_DESCS_PER_LOOP_VEC128 < nb_pkts)
+                        rte_prefetch0(&cp_desc_ring[cons + 4]);
+
+                /*
+                 * Load the four current descriptors into SSE registers in
+                 * reverse order to ensure consistent state.
+                 */
+                rxcmp[3] = _mm_load_si128((void *)&cp_desc_ring[cons + 3]);
+                rte_compiler_barrier();
+                rxcmp[2] = _mm_load_si128((void *)&cp_desc_ring[cons + 2]);
+                rte_compiler_barrier();
+                rxcmp[1] = _mm_load_si128((void *)&cp_desc_ring[cons + 1]);
+                rte_compiler_barrier();
+                rxcmp[0] = _mm_load_si128((void *)&cp_desc_ring[cons + 0]);
+
+                tmp1 = _mm_unpackhi_epi32(rxcmp[2], rxcmp[3]);
+                tmp0 = _mm_unpackhi_epi32(rxcmp[0], rxcmp[1]);
+
+                /* Isolate descriptor valid flags. */
+                info3_v = _mm_and_si128(_mm_unpacklo_epi64(tmp0, tmp1),
+                                        info3_v_mask);
+                info3_v = _mm_xor_si128(info3_v, valid_target);
+
+                /*
+                 * Pack the 128-bit array of valid descriptor flags into 64
+                 * bits and count the number of set bits in order to determine
+                 * the number of valid descriptors.
+                 */
+                valid = _mm_cvtsi128_si64(_mm_packs_epi32(info3_v, info3_v));
+                num_valid = __builtin_popcountll(valid & desc_valid_mask);
+
+                if (num_valid == 0)
+                        break;
+
+                descs_to_mbufs(rxcmp, rxcmp1, mbuf_init, shuf_msk, &rx_pkts[nb_rx_pkts],
+                               rxr);
+                nb_rx_pkts += num_valid;
+
+                if (num_valid < BNXT_RX_DESCS_PER_LOOP_VEC128)
+                        break;
+        }
+
+        if (nb_rx_pkts) {
+                rxr->rx_raw_prod = RING_ADV(rxr->rx_raw_prod, nb_rx_pkts);
+
+                rxq->rxrearm_nb += nb_rx_pkts;
+                cpr->cp_raw_cons += nb_rx_pkts;
+                bnxt_db_cq(cpr);
+        }
+
+        return nb_rx_pkts;
+}
+
 uint16_t
 bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 {
@@ -304,6 +432,27 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
         return cnt + recv_burst_vec_sse(rx_queue, rx_pkts + cnt, nb_pkts);
 }
 
+uint16_t
+bnxt_crx_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+        uint16_t cnt = 0;
+
+        while (nb_pkts > RTE_BNXT_MAX_RX_BURST) {
+                uint16_t burst;
+
+                burst = crx_burst_vec_sse(rx_queue, rx_pkts + cnt,
+                                          RTE_BNXT_MAX_RX_BURST);
+
+                cnt += burst;
+                nb_pkts -= burst;
+
+                if (burst < RTE_BNXT_MAX_RX_BURST)
+                        return cnt;
+        }
+
+        return cnt + crx_burst_vec_sse(rx_queue, rx_pkts + cnt, nb_pkts);
+}
+
 static void
 bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 {
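
The compressed 16-byte completion carries the RSS hash at bytes 4-7 rather than at bytes 12-15, which is why the shuf_msk built in crx_burst_vec_sse() starts with 7, 6, 5, 4 while the mask used by the existing 32-byte path starts with 15, 14, 13, 12; descs_to_mbufs() applies the mask as a byte shuffle to assemble the mbuf Rx descriptor fields. The standalone sketch below is not part of the patch: only the mask is taken verbatim from the code above, and the sample completion contents (length at bytes 2-3, RSS hash 0xAABBCCDD at bytes 4-7) are invented purely for illustration.

#include <stdint.h>
#include <stdio.h>
#include <tmmintrin.h>  /* SSSE3: _mm_shuffle_epi8 */

int main(void)
{
        /* Same mask as the compressed-CQE path in the patch above. */
        const __m128i shuf_msk =
                _mm_set_epi8(7, 6, 5, 4,              /* rss */
                             0xFF, 0xFF,              /* vlan_tci (zeroes) */
                             3, 2,                    /* data_len */
                             0xFF, 0xFF, 3, 2,        /* pkt_len */
                             0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */
        /*
         * Hypothetical 16-byte compressed completion: packet length 64 at
         * bytes 2-3, RSS hash 0xAABBCCDD at bytes 4-7 (little endian).
         */
        uint8_t cqe[16] = { 0 };
        uint32_t out[4];
        __m128i desc, fields;

        cqe[2] = 0x40;
        cqe[4] = 0xDD;
        cqe[5] = 0xCC;
        cqe[6] = 0xBB;
        cqe[7] = 0xAA;

        desc = _mm_loadu_si128((const __m128i *)cqe);
        fields = _mm_shuffle_epi8(desc, shuf_msk);
        _mm_storeu_si128((__m128i *)out, fields);

        /*
         * out[0] = pkt_type (zeroed), out[1] = pkt_len, out[2] holds
         * data_len in the low 16 bits and vlan_tci in the high 16 bits,
         * out[3] = RSS hash, matching the field order the mask comments
         * above describe.
         */
        printf("pkt_len=%u data_len=%u rss=0x%08x\n",
               out[1], out[2] & 0xffff, out[3]);
        return 0;
}

Built with gcc -mssse3 (or clang), this prints pkt_len=64 data_len=64 rss=0xaabbccdd.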
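
To decide how many of the four completions it loaded are actually ready, crx_burst_vec_sse() follows the same recipe as the existing 32-byte path: gather the dword that carries the valid bit from each descriptor, mask off everything except that bit, pack the four results into the low 64 bits of a register, and popcount them. Below is a minimal standalone sketch of just that counting step; it is not driver code, the descriptor contents are invented, the valid bit is assumed to live in bit 0 of the third dword (the CMPL_BASE_V position), and the XOR against valid_target that tracks the completion-ring phase is left out for brevity.

#include <stdint.h>
#include <stdio.h>
#include <emmintrin.h>  /* SSE2; _mm_cvtsi128_si64 needs an x86_64 build */

/*
 * Fake 16-byte completions. Bit 0 of the third dword stands in for the
 * valid bit (CMPL_BASE_V); everything else is zero for simplicity.
 */
static uint32_t descs[4][4] = {
        { 0, 0, 1, 0 },        /* written by hardware: valid */
        { 0, 0, 1, 0 },        /* written by hardware: valid */
        { 0, 0, 0, 0 },        /* not yet written */
        { 0, 0, 0, 0 },        /* not yet written */
};

int main(void)
{
        const __m128i v_mask = _mm_set1_epi32(1); /* CMPL_BASE_V stand-in */
        __m128i rxcmp[4], tmp0, tmp1, info3_v;
        uint64_t valid;
        int i;

        for (i = 0; i < 4; i++)
                rxcmp[i] = _mm_loadu_si128((const __m128i *)descs[i]);

        /* Gather dword 2 of each completion into one 128-bit register. */
        tmp0 = _mm_unpackhi_epi32(rxcmp[0], rxcmp[1]);
        tmp1 = _mm_unpackhi_epi32(rxcmp[2], rxcmp[3]);
        info3_v = _mm_and_si128(_mm_unpacklo_epi64(tmp0, tmp1), v_mask);

        /*
         * Narrow the four 32-bit flags to 16-bit lanes in the low 64 bits
         * and count the set bits. Only the valid bit survives the mask,
         * so the popcount equals the number of ready completions.
         */
        valid = (uint64_t)_mm_cvtsi128_si64(_mm_packs_epi32(info3_v, info3_v));
        printf("valid completions: %d\n", __builtin_popcountll(valid));
        return 0;
}

Because the hardware produces completions in order, the valid entries are contiguous from lane 0, so the popcount (ANDed with desc_valid_mask in the driver to discard lanes beyond the requested burst) is exactly the count passed on to descs_to_mbufs().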