From patchwork Wed Sep 9 15:57:00 2020
X-Patchwork-Submitter: Lance Richardson
X-Patchwork-Id: 77074
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Lance Richardson
To: Ajit Khaparde, Somnath Kotur
Cc: dev@dpdk.org
Date: Wed, 9 Sep 2020 11:57:00 -0400
Message-Id: <20200909155700.29016-1-lance.richardson@broadcom.com>
Subject: [dpdk-dev] [PATCH 10/12] net/bnxt: optimize vector mode mbuf allocation

Simplify and optimize the receive mbuf allocation function used by the
vector mode PMDs.
Reviewed-by: Ajit Kumar Khaparde
Signed-off-by: Lance Richardson
---
 drivers/net/bnxt/bnxt_rxtx_vec_common.h | 40 ++++++++++++++
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c   | 70 -------------------------
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c    | 70 -------------------------
 3 files changed, 40 insertions(+), 140 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index fc2a12272b..819b8290e4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -56,4 +56,44 @@ bnxt_rxq_vec_setup_common(struct bnxt_rx_queue *rxq)
 	rxq->rxrearm_start = 0;
 	return 0;
 }
+
+static inline void
+bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
+{
+	struct rx_prod_pkt_bd *rxbds = &rxr->rx_desc_ring[rxq->rxrearm_start];
+	struct rte_mbuf **rx_bufs = &rxr->rx_buf_ring[rxq->rxrearm_start];
+	int nb, i;
+
+	/*
+	 * Number of mbufs to allocate must be a multiple of four. The
+	 * allocation must not go past the end of the ring.
+	 */
+	nb = RTE_MIN(rxq->rxrearm_nb & ~0x3,
+		     rxq->nb_rx_desc - rxq->rxrearm_start);
+
+	/* Allocate new mbufs into the software ring. */
+	if (rte_mempool_get_bulk(rxq->mb_pool, (void *)rx_bufs, nb) < 0) {
+		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed += nb;
+
+		return;
+	}
+
+	/* Initialize the mbufs in vector, process 4 mbufs per loop. */
+	for (i = 0; i < nb; i += 4) {
+		rxbds[0].address = rte_mbuf_data_iova_default(rx_bufs[0]);
+		rxbds[1].address = rte_mbuf_data_iova_default(rx_bufs[1]);
+		rxbds[2].address = rte_mbuf_data_iova_default(rx_bufs[2]);
+		rxbds[3].address = rte_mbuf_data_iova_default(rx_bufs[3]);
+
+		rxbds += 4;
+		rx_bufs += 4;
+	}
+
+	rxq->rxrearm_start += nb;
+	bnxt_db_write(&rxr->rx_db, rxq->rxrearm_start - 1);
+	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
+		rxq->rxrearm_start = 0;
+
+	rxq->rxrearm_nb -= nb;
+}
 #endif /* _BNXT_RXTX_VEC_COMMON_H_ */
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 37b8c83656..24f9fc3c39 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -22,76 +22,6 @@
  * RX Ring handling
  */
 
-static inline void
-bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
-{
-	struct rx_prod_pkt_bd *rxbds = &rxr->rx_desc_ring[rxq->rxrearm_start];
-	struct rte_mbuf **rx_bufs = &rxr->rx_buf_ring[rxq->rxrearm_start];
-	struct rte_mbuf *mb0, *mb1;
-	int nb, i;
-
-	const uint64x2_t hdr_room = {0, RTE_PKTMBUF_HEADROOM};
-	const uint64x2_t addrmask = {0, UINT64_MAX};
-
-	/*
-	 * Number of mbufs to allocate must be a multiple of two. The
-	 * allocation must not go past the end of the ring.
-	 */
-	nb = RTE_MIN(rxq->rxrearm_nb & ~0x1,
-		     rxq->nb_rx_desc - rxq->rxrearm_start);
-
-	/* Allocate new mbufs into the software ring */
-	if (rte_mempool_get_bulk(rxq->mb_pool, (void *)rx_bufs, nb) < 0) {
-		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed += nb;
-
-		return;
-	}
-
-	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
-	for (i = 0; i < nb; i += 2, rx_bufs += 2) {
-		uint64x2_t buf_addr0, buf_addr1;
-		uint64x2_t rxbd0, rxbd1;
-
-		mb0 = rx_bufs[0];
-		mb1 = rx_bufs[1];
-
-		/* Load address fields from both mbufs */
-		buf_addr0 = vld1q_u64((uint64_t *)&mb0->buf_addr);
-		buf_addr1 = vld1q_u64((uint64_t *)&mb1->buf_addr);
-
-		/* Load both rx descriptors (preserving some existing fields) */
-		rxbd0 = vld1q_u64((uint64_t *)(rxbds + 0));
-		rxbd1 = vld1q_u64((uint64_t *)(rxbds + 1));
-
-		/* Add default offset to buffer address. */
-		buf_addr0 = vaddq_u64(buf_addr0, hdr_room);
-		buf_addr1 = vaddq_u64(buf_addr1, hdr_room);
-
-		/* Clear all fields except address. */
-		buf_addr0 = vandq_u64(buf_addr0, addrmask);
-		buf_addr1 = vandq_u64(buf_addr1, addrmask);
-
-		/* Clear address field in descriptor. */
-		rxbd0 = vbicq_u64(rxbd0, addrmask);
-		rxbd1 = vbicq_u64(rxbd1, addrmask);
-
-		/* Set address field in descriptor. */
-		rxbd0 = vaddq_u64(rxbd0, buf_addr0);
-		rxbd1 = vaddq_u64(rxbd1, buf_addr1);
-
-		/* Store descriptors to memory. */
-		vst1q_u64((uint64_t *)(rxbds++), rxbd0);
-		vst1q_u64((uint64_t *)(rxbds++), rxbd1);
-	}
-
-	rxq->rxrearm_start += nb;
-	bnxt_db_write(&rxr->rx_db, rxq->rxrearm_start - 1);
-	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
-		rxq->rxrearm_start = 0;
-
-	rxq->rxrearm_nb -= nb;
-}
-
 static uint32_t
 bnxt_parse_pkt_type(uint32x4_t mm_rxcmp, uint32x4_t mm_rxcmp1)
 {
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 761d835963..7e87555408 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -26,76 +26,6 @@
  * RX Ring handling
  */
 
-static inline void
-bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
-{
-	struct rx_prod_pkt_bd *rxbds = &rxr->rx_desc_ring[rxq->rxrearm_start];
-	struct rte_mbuf **rx_bufs = &rxr->rx_buf_ring[rxq->rxrearm_start];
-	struct rte_mbuf *mb0, *mb1;
-	int nb, i;
-
-	const __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, 0);
-	const __m128i addrmask = _mm_set_epi64x(UINT64_MAX, 0);
-
-	/*
-	 * Number of mbufs to allocate must be a multiple of two. The
-	 * allocation must not go past the end of the ring.
-	 */
-	nb = RTE_MIN(rxq->rxrearm_nb & ~0x1,
-		     rxq->nb_rx_desc - rxq->rxrearm_start);
-
-	/* Allocate new mbufs into the software ring */
-	if (rte_mempool_get_bulk(rxq->mb_pool, (void *)rx_bufs, nb) < 0) {
-		rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed += nb;
-
-		return;
-	}
-
-	/* Initialize the mbufs in vector, process 2 mbufs in one loop */
-	for (i = 0; i < nb; i += 2, rx_bufs += 2) {
-		__m128i buf_addr0, buf_addr1;
-		__m128i rxbd0, rxbd1;
-
-		mb0 = rx_bufs[0];
-		mb1 = rx_bufs[1];
-
-		/* Load address fields from both mbufs */
-		buf_addr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
-		buf_addr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
-
-		/* Load both rx descriptors (preserving some existing fields) */
-		rxbd0 = _mm_loadu_si128((__m128i *)(rxbds + 0));
-		rxbd1 = _mm_loadu_si128((__m128i *)(rxbds + 1));
-
-		/* Add default offset to buffer address. */
-		buf_addr0 = _mm_add_epi64(buf_addr0, hdr_room);
-		buf_addr1 = _mm_add_epi64(buf_addr1, hdr_room);
-
-		/* Clear all fields except address. */
-		buf_addr0 = _mm_and_si128(buf_addr0, addrmask);
-		buf_addr1 = _mm_and_si128(buf_addr1, addrmask);
-
-		/* Clear address field in descriptor. */
-		rxbd0 = _mm_andnot_si128(addrmask, rxbd0);
-		rxbd1 = _mm_andnot_si128(addrmask, rxbd1);
-
-		/* Set address field in descriptor. */
-		rxbd0 = _mm_add_epi64(rxbd0, buf_addr0);
-		rxbd1 = _mm_add_epi64(rxbd1, buf_addr1);
-
-		/* Store descriptors to memory. */
-		_mm_store_si128((__m128i *)(rxbds++), rxbd0);
-		_mm_store_si128((__m128i *)(rxbds++), rxbd1);
-	}
-
-	rxq->rxrearm_start += nb;
-	bnxt_db_write(&rxr->rx_db, rxq->rxrearm_start - 1);
-	if (rxq->rxrearm_start >= rxq->nb_rx_desc)
-		rxq->rxrearm_start = 0;
-
-	rxq->rxrearm_nb -= nb;
-}
-
 static __m128i
 bnxt_parse_pkt_type(__m128i mm_rxcmp, __m128i mm_rxcmp1)
 {