From patchwork Thu Dec 7 07:46:36 2023
X-Patchwork-Submitter: Wenzhuo Lu
X-Patchwork-Id: 134909
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wenzhuo Lu
To: dev@dpdk.org
Cc: Wenzhuo Lu
Subject: [PATCH v2 2/2] common/idpf: enable AVX2 for single queue Tx
Date: Thu, 7 Dec 2023 07:46:36 +0000
Message-Id: <20231207074636.2175645-3-wenzhuo.lu@intel.com>
In-Reply-To: <20231207074636.2175645-1-wenzhuo.lu@intel.com>
References: <20231207063514.2001192-1-wenzhuo.lu@intel.com>
 <20231207074636.2175645-1-wenzhuo.lu@intel.com>
List-Id: DPDK patches and discussions

Some CPUs don't support AVX512. Enable AVX2 for them to get better
per-core performance.

Signed-off-by: Wenzhuo Lu
---
 doc/guides/rel_notes/release_24_03.rst      |   3 +
 drivers/common/idpf/idpf_common_device.h    |   1 +
 drivers/common/idpf/idpf_common_rxtx.h      |   4 +
 drivers/common/idpf/idpf_common_rxtx_avx2.c | 225 ++++++++++++++++++++
 drivers/common/idpf/version.map             |   1 +
 drivers/net/idpf/idpf_rxtx.c                |  14 ++
 6 files changed, 248 insertions(+)
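The AVX2 Tx path below is selected at runtime, not at build time. Before the
diff, here is the selection rule the patch applies in idpf_set_tx_function(),
as a minimal sketch assuming a DPDK build environment (can_use_avx2_tx() is an
illustrative helper, not part of the patch; rte_cpu_get_flag_enabled() and
rte_vect_get_max_simd_bitwidth() are existing EAL APIs):

	#include <stdbool.h>
	#include <rte_cpuflags.h>
	#include <rte_vect.h>

	/* AVX512F-capable CPUs also run AVX2, so either flag qualifies;
	 * the process-wide SIMD limit must also allow 256-bit operations.
	 */
	static bool
	can_use_avx2_tx(void)
	{
		return (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
			rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
		       rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256;
	}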
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index e9c9717706..08c8ee07c3 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -55,6 +55,9 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added support for vector instructions on IDPF.**
+
+  Added support for AVX2 instructions in the IDPF single queue Rx and Tx paths.
 
 Removed Items
 -------------
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index afe3d48798..60f8cab53a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,7 @@ struct idpf_vport {
 	bool rx_vec_allowed;
 	bool tx_vec_allowed;
 	bool rx_use_avx2;
+	bool tx_use_avx2;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
 
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 4d64063718..a92d328313 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -306,5 +306,9 @@ __rte_internal
 uint16_t idpf_dp_singleq_recv_pkts_avx2(void *rx_queue,
 					struct rte_mbuf **rx_pkts,
 					uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_dp_singleq_xmit_pkts_avx2(void *tx_queue,
+					struct rte_mbuf **tx_pkts,
+					uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
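In the data path added below, each base Tx descriptor is 16 bytes: a buffer
address quadword followed by a qw1 carrying descriptor type, command flags and
buffer size. Two adjacent descriptors therefore fit one 256-bit register,
which is why idpf_singleq_vtx() emits four descriptors per loop iteration with
only two stores. A minimal sketch of that packing (store_four_desc(), addr[]
and qw1[] are illustrative stand-ins for the patch's txdp, buf_iova + data_off
and assembled high quadwords):

	#include <stdint.h>
	#include <immintrin.h>

	static void
	store_four_desc(void *ring, const uint64_t addr[4], const uint64_t qw1[4])
	{
		/* _mm256_set_epi64x() takes lanes e3..e0, so each 16-byte
		 * descriptor lands in ring order as {addr, qw1}
		 */
		__m256i desc0_1 = _mm256_set_epi64x(qw1[1], addr[1], qw1[0], addr[0]);
		__m256i desc2_3 = _mm256_set_epi64x(qw1[3], addr[3], qw1[2], addr[2]);

		/* aligned stores: the caller must 32-byte align the ring
		 * pointer first, as idpf_singleq_vtx() does
		 */
		_mm256_store_si256((__m256i *)ring, desc0_1);
		_mm256_store_si256((__m256i *)ring + 1, desc2_3);
	}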
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx2.c b/drivers/common/idpf/idpf_common_rxtx_avx2.c
index 02ce0534c4..9560999c5e 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx2.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx2.c
@@ -588,3 +588,228 @@ idpf_dp_singleq_recv_pkts_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
 {
 	return _idpf_singleq_recv_raw_pkts_vec_avx2(rx_queue, rx_pkts, nb_pkts, NULL);
 }
+
+static __rte_always_inline void
+idpf_tx_backlog_entry(struct idpf_tx_entry *txep,
+		      struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	int i;
+
+	for (i = 0; i < (int)nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
+static __rte_always_inline int
+idpf_singleq_tx_free_bufs_vec(struct idpf_tx_queue *txq)
+{
+	struct idpf_tx_entry *txep;
+	uint32_t n;
+	uint32_t i;
+	int nb_free = 0;
+	struct rte_mbuf *m, *free[txq->rs_thresh];
+
+	/* check DD bits on threshold descriptor */
+	if ((txq->tx_ring[txq->next_dd].qw1 &
+	     rte_cpu_to_le_64(IDPF_TXD_QW1_DTYPE_M)) !=
+	    rte_cpu_to_le_64(IDPF_TX_DESC_DTYPE_DESC_DONE))
+		return 0;
+
+	n = txq->rs_thresh;
+
+	/* first buffer to free from S/W ring is at index
+	 * next_dd - (rs_thresh - 1)
+	 */
+	txep = &txq->sw_ring[txq->next_dd - (n - 1)];
+	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
+	if (likely(m)) {
+		free[0] = m;
+		nb_free = 1;
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (likely(m)) {
+				if (likely(m->pool == free[0]->pool)) {
+					free[nb_free++] = m;
+				} else {
+					rte_mempool_put_bulk(free[0]->pool,
+							     (void *)free,
+							     nb_free);
+					free[0] = m;
+					nb_free = 1;
+				}
+			}
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+	} else {
+		for (i = 1; i < n; i++) {
+			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
+			if (m)
+				rte_mempool_put(m->pool, m);
+		}
+	}
+
+	/* buffers were freed, update counters */
+	txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh);
+	txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh);
+	if (txq->next_dd >= txq->nb_tx_desc)
+		txq->next_dd = (uint16_t)(txq->rs_thresh - 1);
+
+	return txq->rs_thresh;
+}
+
+static inline void
+idpf_singleq_vtx1(volatile struct idpf_base_tx_desc *txdp,
+		  struct rte_mbuf *pkt, uint64_t flags)
+{
+	uint64_t high_qw =
+		(IDPF_TX_DESC_DTYPE_DATA |
+		 ((uint64_t)flags << IDPF_TXD_QW1_CMD_S) |
+		 ((uint64_t)pkt->data_len << IDPF_TXD_QW1_TX_BUF_SZ_S));
+
+	__m128i descriptor = _mm_set_epi64x(high_qw,
+					    pkt->buf_iova + pkt->data_off);
+	_mm_store_si128((__m128i *)txdp, descriptor);
+}
+
+static inline void
+idpf_singleq_vtx(volatile struct idpf_base_tx_desc *txdp,
+		 struct rte_mbuf **pkt, uint16_t nb_pkts, uint64_t flags)
+{
+	const uint64_t hi_qw_tmpl = (IDPF_TX_DESC_DTYPE_DATA |
+				     ((uint64_t)flags << IDPF_TXD_QW1_CMD_S));
+
+	/* if unaligned on 32-byte boundary, do one to align */
+	if (((uintptr_t)txdp & 0x1F) != 0 && nb_pkts != 0) {
+		idpf_singleq_vtx1(txdp, *pkt, flags);
+		nb_pkts--, txdp++, pkt++;
+	}
+
+	/* do four at a time while possible, in bursts */
+	for (; nb_pkts > 3; txdp += 4, pkt += 4, nb_pkts -= 4) {
+		uint64_t hi_qw3 =
+			hi_qw_tmpl |
+			((uint64_t)pkt[3]->data_len <<
+			 IDPF_TXD_QW1_TX_BUF_SZ_S);
+		uint64_t hi_qw2 =
+			hi_qw_tmpl |
+			((uint64_t)pkt[2]->data_len <<
+			 IDPF_TXD_QW1_TX_BUF_SZ_S);
+		uint64_t hi_qw1 =
+			hi_qw_tmpl |
+			((uint64_t)pkt[1]->data_len <<
+			 IDPF_TXD_QW1_TX_BUF_SZ_S);
+		uint64_t hi_qw0 =
+			hi_qw_tmpl |
+			((uint64_t)pkt[0]->data_len <<
+			 IDPF_TXD_QW1_TX_BUF_SZ_S);
+
+		__m256i desc2_3 =
+			_mm256_set_epi64x
+				(hi_qw3,
+				 pkt[3]->buf_iova + pkt[3]->data_off,
+				 hi_qw2,
+				 pkt[2]->buf_iova + pkt[2]->data_off);
+		__m256i desc0_1 =
+			_mm256_set_epi64x
+				(hi_qw1,
+				 pkt[1]->buf_iova + pkt[1]->data_off,
+				 hi_qw0,
+				 pkt[0]->buf_iova + pkt[0]->data_off);
+		_mm256_store_si256((void *)(txdp + 2), desc2_3);
+		_mm256_store_si256((void *)txdp, desc0_1);
+	}
+
+	/* do any last ones */
+	while (nb_pkts) {
+		idpf_singleq_vtx1(txdp, *pkt, flags);
+		txdp++, pkt++, nb_pkts--;
+	}
+}
+
+static inline uint16_t
+idpf_singleq_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
+				       uint16_t nb_pkts)
+{
+	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+	volatile struct idpf_base_tx_desc *txdp;
+	struct idpf_tx_entry *txep;
+	uint16_t n, nb_commit, tx_id;
+	uint64_t flags = IDPF_TX_DESC_CMD_EOP;
+	uint64_t rs = IDPF_TX_DESC_CMD_RS | flags;
+
+	/* crossing rs_thresh boundary is not allowed */
+	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);
+
+	if (txq->nb_free < txq->free_thresh)
+		idpf_singleq_tx_free_bufs_vec(txq);
+
+	nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	tx_id = txq->tx_tail;
+	txdp = &txq->tx_ring[tx_id];
+	txep = &txq->sw_ring[tx_id];
+
+	txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts);
+
+	n = (uint16_t)(txq->nb_tx_desc - tx_id);
+	if (nb_commit >= n) {
+		idpf_tx_backlog_entry(txep, tx_pkts, n);
+
+		idpf_singleq_vtx(txdp, tx_pkts, n - 1, flags);
+		tx_pkts += (n - 1);
+		txdp += (n - 1);
+
+		idpf_singleq_vtx1(txdp, *tx_pkts++, rs);
+
+		nb_commit = (uint16_t)(nb_commit - n);
+
+		tx_id = 0;
+		txq->next_rs = (uint16_t)(txq->rs_thresh - 1);
+
+		/* avoid reaching the end of the ring */
+		txdp = &txq->tx_ring[tx_id];
+		txep = &txq->sw_ring[tx_id];
+	}
+
+	idpf_tx_backlog_entry(txep, tx_pkts, nb_commit);
+
+	idpf_singleq_vtx(txdp, tx_pkts, nb_commit, flags);
+
+	tx_id = (uint16_t)(tx_id + nb_commit);
+	if (tx_id > txq->next_rs) {
+		txq->tx_ring[txq->next_rs].qw1 |=
+			rte_cpu_to_le_64(((uint64_t)IDPF_TX_DESC_CMD_RS) <<
+					 IDPF_TXD_QW1_CMD_S);
+		txq->next_rs =
+			(uint16_t)(txq->next_rs + txq->rs_thresh);
+	}
+
+	txq->tx_tail = tx_id;
+
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+uint16_t
+idpf_dp_singleq_xmit_pkts_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
+			       uint16_t nb_pkts)
+{
+	uint16_t nb_tx = 0;
+	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+
+	while (nb_pkts) {
+		uint16_t ret, num;
+
+		num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh);
+		ret = idpf_singleq_xmit_fixed_burst_vec_avx2(tx_queue,
+							     &tx_pkts[nb_tx],
+							     num);
+		nb_tx += ret;
+		nb_pkts -= ret;
+		if (ret < num)
+			break;
+	}
+
+	return nb_tx;
+}
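One subtlety in idpf_singleq_xmit_fixed_burst_vec_avx2() above is the split at
the end of the ring: when a burst would run past the last descriptor, the
chunk that fills the ring tail gets the RS bit on its final descriptor, and
writing then resumes at index 0. A small self-contained illustration of that
arithmetic (the numbers are made up for the example):

	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		uint16_t nb_tx_desc = 512, tx_id = 509, nb_commit = 8;
		/* descriptors left before the ring wraps */
		uint16_t n = (uint16_t)(nb_tx_desc - tx_id);	/* 3 */

		if (nb_commit >= n) {
			printf("write %u descriptors at %u..%u, RS on the last\n",
			       (unsigned)n, (unsigned)tx_id,
			       (unsigned)(nb_tx_desc - 1));
			nb_commit = (uint16_t)(nb_commit - n);	/* 5 left */
			tx_id = 0;
		}
		printf("write remaining %u descriptors from index %u\n",
		       (unsigned)nb_commit, (unsigned)tx_id);
		return 0;
	}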
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 4510aae6b3..eadcb9a2cf 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -15,6 +15,7 @@ INTERNAL {
 	idpf_dp_splitq_xmit_pkts;
 	idpf_dp_splitq_xmit_pkts_avx512;
 	idpf_dp_singleq_recv_pkts_avx2;
+	idpf_dp_singleq_xmit_pkts_avx2;
 
 	idpf_qc_rx_thresh_check;
 	idpf_qc_rx_queue_release;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index b155c9ccd1..45c791515d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -884,6 +884,12 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 	if (idpf_tx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
 		vport->tx_vec_allowed = true;
+
+		if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
+		     rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
+		    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
+			vport->tx_use_avx2 = true;
+
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 		{
@@ -943,6 +949,14 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 			return;
 		}
 #endif /* CC_AVX512_SUPPORT */
+		if (vport->tx_use_avx2) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single AVX2 Vector Tx (port %d).",
+				    dev->data->port_id);
+			dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx2;
+			dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+			return;
+		}
 	}
 
 	PMD_DRV_LOG(NOTICE,
 		    "Using Single Scalar Tx (port %d).",
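A note on path precedence when testing this patch: with CC_AVX512_SUPPORT
compiled in and an AVX512-capable CPU, the AVX512 Tx path above is still
preferred, so the new AVX2 path is only reached when 512-bit vectors are
unavailable or capped. One way to exercise it on an AVX512 machine is to cap
the EAL SIMD bitwidth, e.g. (the device argument is a placeholder; the other
testpmd options are illustrative):

	dpdk-testpmd -l 0-1 -a <idpf device BDF> --force-max-simd-bitwidth=256 -- -i

The driver should then log "Using Single AVX2 Vector Tx" when the Tx function
is selected.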