From patchwork Tue Dec 3 16:41:07 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148985
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, David Christensen, Ian Stokes, Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin, Anatoly Burakov
Subject: [PATCH v2 01/22] net/_common_intel: add pkt reassembly fn for intel drivers
Date: Tue, 3 Dec 2024 16:41:07 +0000
Message-ID: <20241203164132.2686558-2-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

The code for reassembling a single multi-mbuf packet from multiple buffers received from the NIC is duplicated across many drivers. Rather than keeping multiple copies of this function, create a "_common_intel" directory to hold such shared functions, and consolidate the per-driver copies into a single implementation for easier maintenance.
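The control flow of the consolidated helper is easier to see in isolation. The sketch below models the same split-packet walk on a minimal stand-in segment structure; the `struct seg` and `reassemble` names are invented for this example, the CRC-length fixups and the hash/vlan_tci/ol_flags copy from the tail segment are omitted, and this is not the driver code itself, which operates on `struct rte_mbuf`.

```c
#include <stdint.h>
#include <string.h>

/* Minimal stand-in for rte_mbuf: only the fields the walk touches. */
struct seg {
	struct seg *next;
	uint16_t data_len; /* bytes in this segment */
	uint32_t pkt_len;  /* total packet bytes, tracked on the first segment */
	uint16_t nb_segs;  /* segment count, tracked on the first segment */
};

/* Model of the reassembly walk: chain buffers flagged as split
 * continuations, compact finished packets to the front of bufs[], and
 * carry an unfinished packet across calls via *first/*last - the same
 * role the rxq->pkt_first_seg/pkt_last_seg pointers play in the driver.
 * n is assumed <= 32, mirroring CI_RX_BURST. */
static uint16_t
reassemble(struct seg **bufs, uint16_t n, const uint8_t *split_flags,
	   struct seg **first, struct seg **last)
{
	struct seg *done[32] = {0}; /* finished packets */
	struct seg *start = *first, *end = *last;
	uint16_t out = 0;

	for (uint16_t i = 0; i < n; i++) {
		if (end != NULL) {
			/* continuation of a split packet: append segment */
			end->next = bufs[i];
			start->nb_segs++;
			start->pkt_len += bufs[i]->data_len;
			end = end->next;
			if (!split_flags[i]) {
				/* last segment of the set */
				done[out++] = start;
				start = NULL;
				end = NULL;
			}
		} else if (!split_flags[i]) {
			/* ordinary single-segment packet */
			done[out++] = bufs[i];
		} else {
			/* first segment of a new split packet */
			start = bufs[i];
			end = start;
		}
	}
	/* save any partial packet for the next burst */
	*first = start;
	*last = end;
	memcpy(bufs, done, out * sizeof(*done));
	return out;
}
```

Compacting finished packets back into the input array is what lets the callers below return `i + ci_rx_reassemble_packets(...)` directly: the burst array doubles as the output array, with in-flight state parked on the queue between calls.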
Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/rx.h | 79 +++++++++++++++++++++++
 drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c | 4 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h | 64 +-----------------
 drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +-
 drivers/net/i40e/meson.build | 2 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c | 8 +--
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 8 +--
 drivers/net/iavf/iavf_rxtx_vec_common.h | 65 +------------------
 drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 +--
 drivers/net/iavf/meson.build | 2 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +--
 drivers/net/ice/ice_rxtx_vec_common.h | 66 +------------------
 drivers/net/ice/ice_rxtx_vec_sse.c | 4 +-
 drivers/net/ice/meson.build | 2 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 63 +----------------
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 4 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 4 +-
 drivers/net/ixgbe/meson.build | 2 +-
 22 files changed, 121 insertions(+), 292 deletions(-)
 create mode 100644 drivers/net/_common_intel/rx.h

diff --git a/drivers/net/_common_intel/rx.h b/drivers/net/_common_intel/rx.h
new file mode 100644
index 0000000000..5bd2fea7e3
--- /dev/null
+++ b/drivers/net/_common_intel/rx.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_RX_H_
+#define _COMMON_INTEL_RX_H_
+
+#include <stdint.h>
+#include <string.h>
+#include <rte_mbuf.h>
+
+#define CI_RX_BURST 32
+
+static inline uint16_t
+ci_rx_reassemble_packets(struct rte_mbuf **rx_bufs, uint16_t nb_bufs, uint8_t *split_flags,
+		struct rte_mbuf **pkt_first_seg, struct rte_mbuf **pkt_last_seg,
+		const uint8_t crc_len)
+{
+	struct rte_mbuf *pkts[CI_RX_BURST] = {0}; /* finished pkts */
+	struct rte_mbuf *start = *pkt_first_seg;
+	struct rte_mbuf *end = *pkt_last_seg;
+	unsigned int pkt_idx, buf_idx;
+
+	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
+		if (end) {
+			/* processing a split packet */
+			end->next = rx_bufs[buf_idx];
+			rx_bufs[buf_idx]->data_len += crc_len;
+
+			start->nb_segs++;
+			start->pkt_len += rx_bufs[buf_idx]->data_len;
+			end = end->next;
+
+			if (!split_flags[buf_idx]) {
+				/* it's the last packet of the set */
+				start->hash = end->hash;
+				start->vlan_tci = end->vlan_tci;
+				start->ol_flags = end->ol_flags;
+				/* we need to strip crc for the whole packet */
+				start->pkt_len -= crc_len;
+				if (end->data_len > crc_len) {
+					end->data_len -= crc_len;
+				} else {
+					/* free up last mbuf */
+					struct rte_mbuf *secondlast = start;
+
+					start->nb_segs--;
+					while (secondlast->next != end)
+						secondlast = secondlast->next;
+					secondlast->data_len -= (crc_len - end->data_len);
+					secondlast->next = NULL;
+					rte_pktmbuf_free_seg(end);
+				}
+				pkts[pkt_idx++] = start;
+				start = NULL;
+				end = NULL;
+			}
+		} else {
+			/* not processing a split packet */
+			if (!split_flags[buf_idx]) {
+				/* not a split packet, save and skip */
+				pkts[pkt_idx++] = rx_bufs[buf_idx];
+				continue;
+			}
+			start = rx_bufs[buf_idx];
+			end = start;
+			rx_bufs[buf_idx]->data_len += crc_len;
+			rx_bufs[buf_idx]->pkt_len += crc_len;
+		}
+	}
+
+	/* save the partial packet for next time */
+	*pkt_first_seg = start;
+	*pkt_last_seg = end;
+	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
+	return pkt_idx;
+}
+
+#endif /* _COMMON_INTEL_RX_H_ */

diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b6b0d38ec1..95829f65d5 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -494,8 +494,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		if (i == nb_bufs)
 			return nb_bufs;
 	}
-	return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i,
-				      &split_flags[i]);
+	return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i],
+			&rxq->pkt_first_seg, &rxq->pkt_last_seg,
rxq->crc_len); } /** diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c index 19cf0ac718..6dd6e55d9c 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c @@ -657,8 +657,8 @@ i40e_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /* diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c index 3b2750221b..506f1b5878 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c @@ -725,8 +725,8 @@ i40e_recv_scattered_burst_vec_avx512(void *rx_queue, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index 8b745630e4..1248cecacd 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -8,6 +8,7 @@ #include #include +#include <_common_intel/rx.h> #include "i40e_ethdev.h" #include "i40e_rxtx.h" @@ -15,69 +16,6 @@ #pragma GCC diagnostic ignored "-Wcast-qual" #endif -static inline uint16_t -reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs, - uint16_t nb_bufs, uint8_t *split_flags) -{ - struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/ - struct rte_mbuf *start = rxq->pkt_first_seg; - struct rte_mbuf *end = rxq->pkt_last_seg; - unsigned pkt_idx, buf_idx; - - for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) { - if 
(end != NULL) { - /* processing a split packet */ - end->next = rx_bufs[buf_idx]; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - - start->nb_segs++; - start->pkt_len += rx_bufs[buf_idx]->data_len; - end = end->next; - - if (!split_flags[buf_idx]) { - /* it's the last packet of the set */ - start->hash = end->hash; - start->vlan_tci = end->vlan_tci; - start->ol_flags = end->ol_flags; - /* we need to strip crc for the whole packet */ - start->pkt_len -= rxq->crc_len; - if (end->data_len > rxq->crc_len) - end->data_len -= rxq->crc_len; - else { - /* free up last mbuf */ - struct rte_mbuf *secondlast = start; - - start->nb_segs--; - while (secondlast->next != end) - secondlast = secondlast->next; - secondlast->data_len -= (rxq->crc_len - - end->data_len); - secondlast->next = NULL; - rte_pktmbuf_free_seg(end); - } - pkts[pkt_idx++] = start; - start = end = NULL; - } - } else { - /* not processing a split packet */ - if (!split_flags[buf_idx]) { - /* not a split packet, save and skip */ - pkts[pkt_idx++] = rx_bufs[buf_idx]; - continue; - } - end = start = rx_bufs[buf_idx]; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - rx_bufs[buf_idx]->pkt_len += rxq->crc_len; - } - } - - /* save the partial packet for next time */ - rxq->pkt_first_seg = start; - rxq->pkt_last_seg = end; - memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts))); - return pkt_idx; -} - static __rte_always_inline int i40e_tx_free_bufs(struct i40e_tx_queue *txq) { diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c index e1c5c7041b..159d971796 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_neon.c +++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c @@ -623,8 +623,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, 
&rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index ad560d2b6b..3a8128e014 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -641,8 +641,8 @@ i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/i40e/meson.build b/drivers/net/i40e/meson.build index 5c93493124..0e0b416b8f 100644 --- a/drivers/net/i40e/meson.build +++ b/drivers/net/i40e/meson.build @@ -36,7 +36,7 @@ sources = files( testpmd_sources = files('i40e_testpmd.c') deps += ['hash'] -includes += include_directories('base') +includes += include_directories('base', '..') if arch_subdir == 'x86' sources += files('i40e_rxtx_vec_sse.c') diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index 49d41af953..0baf5045c8 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -1508,8 +1508,8 @@ iavf_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** @@ -1597,8 +1597,8 @@ iavf_recv_scattered_burst_vec_avx2_flex_rxd(void *rx_queue, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } 
/** diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c index d6a861bf80..5a88007096 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c @@ -1685,8 +1685,8 @@ iavf_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** @@ -1761,8 +1761,8 @@ iavf_recv_scattered_burst_vec_avx512_flex_rxd(void *rx_queue, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index 5c5220048d..26b6f07614 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -8,6 +8,7 @@ #include #include +#include <_common_intel/rx.h> #include "iavf.h" #include "iavf_rxtx.h" @@ -15,70 +16,6 @@ #pragma GCC diagnostic ignored "-Wcast-qual" #endif -static __rte_always_inline uint16_t -reassemble_packets(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_bufs, - uint16_t nb_bufs, uint8_t *split_flags) -{ - struct rte_mbuf *pkts[IAVF_VPMD_RX_MAX_BURST]; - struct rte_mbuf *start = rxq->pkt_first_seg; - struct rte_mbuf *end = rxq->pkt_last_seg; - unsigned int pkt_idx, buf_idx; - - for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) { - if (end) { - /* processing a split packet */ - end->next = rx_bufs[buf_idx]; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - - start->nb_segs++; - start->pkt_len += rx_bufs[buf_idx]->data_len; - end = end->next; - - if 
(!split_flags[buf_idx]) { - /* it's the last packet of the set */ - start->hash = end->hash; - start->vlan_tci = end->vlan_tci; - start->ol_flags = end->ol_flags; - /* we need to strip crc for the whole packet */ - start->pkt_len -= rxq->crc_len; - if (end->data_len > rxq->crc_len) { - end->data_len -= rxq->crc_len; - } else { - /* free up last mbuf */ - struct rte_mbuf *secondlast = start; - - start->nb_segs--; - while (secondlast->next != end) - secondlast = secondlast->next; - secondlast->data_len -= (rxq->crc_len - - end->data_len); - secondlast->next = NULL; - rte_pktmbuf_free_seg(end); - } - pkts[pkt_idx++] = start; - start = NULL; - end = NULL; - } - } else { - /* not processing a split packet */ - if (!split_flags[buf_idx]) { - /* not a split packet, save and skip */ - pkts[pkt_idx++] = rx_bufs[buf_idx]; - continue; - } - end = start = rx_bufs[buf_idx]; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - rx_bufs[buf_idx]->pkt_len += rxq->crc_len; - } - } - - /* save the partial packet for next time */ - rxq->pkt_first_seg = start; - rxq->pkt_last_seg = end; - memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts))); - return pkt_idx; -} - static __rte_always_inline int iavf_tx_free_bufs(struct iavf_tx_queue *txq) { diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index 0db6fa8bd4..48b01462ea 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -1238,8 +1238,8 @@ iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** @@ -1307,8 +1307,8 @@ iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], 
nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build index b48bb83438..9106e016ef 100644 --- a/drivers/net/iavf/meson.build +++ b/drivers/net/iavf/meson.build @@ -5,7 +5,7 @@ if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0 subdir_done() endif -includes += include_directories('../../common/iavf') +includes += include_directories('../../common/iavf', '..') testpmd_sources = files('iavf_testpmd.c') diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c index d6e88dbb29..ca247b155c 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx2.c +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c @@ -726,8 +726,8 @@ ice_recv_scattered_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c index add095ef06..1e603d5d8f 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx512.c +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c @@ -763,8 +763,8 @@ ice_recv_scattered_burst_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** @@ -805,8 +805,8 @@ ice_recv_scattered_burst_vec_avx512_offload(void *rx_queue, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - 
&split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 4b73465af5..dd7da4761f 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -5,77 +5,13 @@ #ifndef _ICE_RXTX_VEC_COMMON_H_ #define _ICE_RXTX_VEC_COMMON_H_ +#include <_common_intel/rx.h> #include "ice_rxtx.h" #ifndef __INTEL_COMPILER #pragma GCC diagnostic ignored "-Wcast-qual" #endif -static inline uint16_t -ice_rx_reassemble_packets(struct ice_rx_queue *rxq, struct rte_mbuf **rx_bufs, - uint16_t nb_bufs, uint8_t *split_flags) -{ - struct rte_mbuf *pkts[ICE_VPMD_RX_BURST] = {0}; /*finished pkts*/ - struct rte_mbuf *start = rxq->pkt_first_seg; - struct rte_mbuf *end = rxq->pkt_last_seg; - unsigned int pkt_idx, buf_idx; - - for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) { - if (end) { - /* processing a split packet */ - end->next = rx_bufs[buf_idx]; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - - start->nb_segs++; - start->pkt_len += rx_bufs[buf_idx]->data_len; - end = end->next; - - if (!split_flags[buf_idx]) { - /* it's the last packet of the set */ - start->hash = end->hash; - start->vlan_tci = end->vlan_tci; - start->ol_flags = end->ol_flags; - /* we need to strip crc for the whole packet */ - start->pkt_len -= rxq->crc_len; - if (end->data_len > rxq->crc_len) { - end->data_len -= rxq->crc_len; - } else { - /* free up last mbuf */ - struct rte_mbuf *secondlast = start; - - start->nb_segs--; - while (secondlast->next != end) - secondlast = secondlast->next; - secondlast->data_len -= (rxq->crc_len - - end->data_len); - secondlast->next = NULL; - rte_pktmbuf_free_seg(end); - } - pkts[pkt_idx++] = start; - start = NULL; - end = NULL; - } - } else { - /* not processing a split packet */ - if (!split_flags[buf_idx]) { - /* not a split packet, save and 
skip */ - pkts[pkt_idx++] = rx_bufs[buf_idx]; - continue; - } - start = rx_bufs[buf_idx]; - end = start; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - rx_bufs[buf_idx]->pkt_len += rxq->crc_len; - } - } - - /* save the partial packet for next time */ - rxq->pkt_first_seg = start; - rxq->pkt_last_seg = end; - memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts))); - return pkt_idx; -} - static __rte_always_inline int ice_tx_free_bufs_vec(struct ice_tx_queue *txq) { diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index c01d8ede29..01533454ba 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -640,8 +640,8 @@ ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + ice_rx_reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build index 1c9dc0cc6d..02c028db73 100644 --- a/drivers/net/ice/meson.build +++ b/drivers/net/ice/meson.build @@ -19,7 +19,7 @@ sources = files( testpmd_sources = files('ice_testpmd.c') deps += ['hash', 'net', 'common_iavf'] -includes += include_directories('base', '../../common/iavf') +includes += include_directories('base', '..') if arch_subdir == 'x86' sources += files('ice_rxtx_vec_sse.c') diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h index a4d9ec9b08..2bab17c934 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h @@ -7,71 +7,10 @@ #include #include +#include <_common_intel/rx.h> #include "ixgbe_ethdev.h" #include "ixgbe_rxtx.h" -static inline uint16_t -reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs, - uint16_t nb_bufs, uint8_t *split_flags) -{ 
- struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/ - struct rte_mbuf *start = rxq->pkt_first_seg; - struct rte_mbuf *end = rxq->pkt_last_seg; - unsigned int pkt_idx, buf_idx; - - for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) { - if (end != NULL) { - /* processing a split packet */ - end->next = rx_bufs[buf_idx]; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - - start->nb_segs++; - start->pkt_len += rx_bufs[buf_idx]->data_len; - end = end->next; - - if (!split_flags[buf_idx]) { - /* it's the last packet of the set */ - start->hash = end->hash; - start->ol_flags = end->ol_flags; - /* we need to strip crc for the whole packet */ - start->pkt_len -= rxq->crc_len; - if (end->data_len > rxq->crc_len) - end->data_len -= rxq->crc_len; - else { - /* free up last mbuf */ - struct rte_mbuf *secondlast = start; - - start->nb_segs--; - while (secondlast->next != end) - secondlast = secondlast->next; - secondlast->data_len -= (rxq->crc_len - - end->data_len); - secondlast->next = NULL; - rte_pktmbuf_free_seg(end); - } - pkts[pkt_idx++] = start; - start = end = NULL; - } - } else { - /* not processing a split packet */ - if (!split_flags[buf_idx]) { - /* not a split packet, save and skip */ - pkts[pkt_idx++] = rx_bufs[buf_idx]; - continue; - } - end = start = rx_bufs[buf_idx]; - rx_bufs[buf_idx]->data_len += rxq->crc_len; - rx_bufs[buf_idx]->pkt_len += rxq->crc_len; - } - } - - /* save the partial packet for next time */ - rxq->pkt_first_seg = start; - rxq->pkt_last_seg = end; - memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts))); - return pkt_idx; -} - static __rte_always_inline int ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) { diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index 952b032eb6..7b35093075 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -516,8 +516,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; 
rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index a77370cdb7..a709bf8c7f 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -639,8 +639,8 @@ ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_bufs; rxq->pkt_first_seg = rx_pkts[i]; } - return i + reassemble_packets(rxq, &rx_pkts[i], nb_bufs - i, - &split_flags[i]); + return i + ci_rx_reassemble_packets(&rx_pkts[i], nb_bufs - i, &split_flags[i], + &rxq->pkt_first_seg, &rxq->pkt_last_seg, rxq->crc_len); } /** diff --git a/drivers/net/ixgbe/meson.build b/drivers/net/ixgbe/meson.build index 0ae12dd5ff..a65ff51379 100644 --- a/drivers/net/ixgbe/meson.build +++ b/drivers/net/ixgbe/meson.build @@ -35,6 +35,6 @@ elif arch_subdir == 'arm' sources += files('ixgbe_recycle_mbufs_vec_common.c') endif -includes += include_directories('base') +includes += include_directories('base', '..') headers = files('rte_pmd_ixgbe.h')

From patchwork Tue Dec 3 16:41:08 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148986
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, David Christensen, Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin, Anatoly Burakov
Subject: [PATCH v2 02/22] net/_common_intel: provide common Tx entry structures
Date: Tue, 3 Dec 2024 16:41:08 +0000
Message-ID: <20241203164132.2686558-3-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>
The Tx entry structures, both vector and scalar, are common across Intel drivers, so provide a single definition to be used everywhere.

Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/tx.h | 27 +++++++++++++++++++
 .../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +-
 drivers/net/i40e/i40e_rxtx.c | 18 ++++++-------
 drivers/net/i40e/i40e_rxtx.h | 14 +++-------
 drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c | 2 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 ++---
 drivers/net/i40e/i40e_rxtx_vec_common.h | 4 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c | 2 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c | 2 +-
 drivers/net/iavf/iavf_rxtx.c | 12 ++++-----
 drivers/net/iavf/iavf_rxtx.h | 14 +++-------
 drivers/net/iavf/iavf_rxtx_vec_avx2.c | 2 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 10 +++----
 drivers/net/iavf/iavf_rxtx_vec_common.h | 4 +--
 drivers/net/iavf/iavf_rxtx_vec_sse.c | 2 +-
 drivers/net/ice/ice_dcf_ethdev.c | 2 +-
 drivers/net/ice/ice_rxtx.c | 16 +++++------
 drivers/net/ice/ice_rxtx.h | 13 ++-------
 drivers/net/ice/ice_rxtx_vec_avx2.c | 2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c | 6 ++---
 drivers/net/ice/ice_rxtx_vec_common.h | 6 ++---
 drivers/net/ice/ice_rxtx_vec_sse.c | 2 +-
 .../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +-
 drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++++------
 drivers/net/ixgbe/ixgbe_rxtx.h | 22 +++-------------
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 8 +++---
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +-
 29 files changed, 105 insertions(+), 117 deletions(-)
 create mode 100644 drivers/net/_common_intel/tx.h

diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
new file mode 100644
index 0000000000..384352b9db
--- /dev/null
+++ b/drivers/net/_common_intel/tx.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#ifndef _COMMON_INTEL_TX_H_
+#define _COMMON_INTEL_TX_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ci_tx_entry {
+	struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+	uint16_t next_id; /* Index of next descriptor in ring. */
+	uint16_t last_id; /* Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue in vector Tx.
+ */
+struct ci_tx_entry_vec {
+	struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */
+};
+
+#endif /* _COMMON_INTEL_TX_H_ */

diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
index 14424c9921..260d238ce4 100644
--- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
+++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
@@ -56,7 +56,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
 	struct rte_eth_recycle_rxq_info *recycle_rxq_info)
 {
 	struct i40e_tx_queue *txq = tx_queue;
-	struct i40e_tx_entry *txep;
+	struct ci_tx_entry *txep;
 	struct rte_mbuf **rxep;
 	int i, n;
 	uint16_t nb_recycle_mbufs;

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 839c8a5442..2e1f07d2a1 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -378,7 +378,7 @@ i40e_build_ctob(uint32_t td_cmd,
 static inline int
 i40e_xmit_cleanup(struct i40e_tx_queue *txq)
 {
-	struct i40e_tx_entry *sw_ring = txq->sw_ring;
+	struct ci_tx_entry *sw_ring = txq->sw_ring;
 	volatile struct i40e_tx_desc *txd = txq->tx_ring;
 	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
 	uint16_t nb_tx_desc = txq->nb_tx_desc;
@@ -1081,8 +1081,8 @@ uint16_t
 i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct i40e_tx_queue *txq;
-	struct i40e_tx_entry
*sw_ring; - struct i40e_tx_entry *txe, *txn; + struct ci_tx_entry *sw_ring; + struct ci_tx_entry *txe, *txn; volatile struct i40e_tx_desc *txd; volatile struct i40e_tx_desc *txr; struct rte_mbuf *tx_pkt; @@ -1331,7 +1331,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) static __rte_always_inline int i40e_tx_free_bufs(struct i40e_tx_queue *txq) { - struct i40e_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t tx_rs_thresh = txq->tx_rs_thresh; uint16_t i = 0, j = 0; struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ]; @@ -1418,7 +1418,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq, uint16_t nb_pkts) { volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]); - struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]); + struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP - 1; int mainpart, leftover; @@ -2555,7 +2555,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket("i40e tx sw ring", - sizeof(struct i40e_tx_entry) * nb_desc, + sizeof(struct ci_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (!txq->sw_ring) { @@ -2723,7 +2723,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq) */ #ifdef CC_AVX512_SUPPORT if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) { - struct i40e_vec_tx_entry *swr = (void *)txq->sw_ring; + struct ci_tx_entry_vec *swr = (void *)txq->sw_ring; i = txq->tx_next_dd - txq->tx_rs_thresh + 1; if (txq->tx_tail < i) { @@ -2768,7 +2768,7 @@ static int i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq, uint32_t free_cnt) { - struct i40e_tx_entry *swr_ring = txq->sw_ring; + struct ci_tx_entry *swr_ring = txq->sw_ring; uint16_t i, tx_last, tx_id; uint16_t nb_tx_free_last; uint16_t nb_tx_to_clean; @@ -2874,7 +2874,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt) void i40e_reset_tx_queue(struct i40e_tx_queue *txq) { - struct i40e_tx_entry *txe; + 
struct ci_tx_entry *txe; uint16_t i, prev, size; if (!txq) { diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index 33fc9770d9..0f5d3cb0b7 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ -5,6 +5,8 @@ #ifndef _I40E_RXTX_H_ #define _I40E_RXTX_H_ +#include <_common_intel/tx.h> + #define RTE_PMD_I40E_RX_MAX_BURST 32 #define RTE_PMD_I40E_TX_MAX_BURST 32 @@ -122,16 +124,6 @@ struct i40e_rx_queue { const struct rte_memzone *mz; }; -struct i40e_tx_entry { - struct rte_mbuf *mbuf; - uint16_t next_id; - uint16_t last_id; -}; - -struct i40e_vec_tx_entry { - struct rte_mbuf *mbuf; -}; - /* * Structure associated with each TX queue. */ @@ -139,7 +131,7 @@ struct i40e_tx_queue { uint16_t nb_tx_desc; /**< number of TX descriptors */ uint64_t tx_ring_phys_addr; /**< TX ring DMA address */ volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */ - struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */ + struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */ uint16_t tx_tail; /**< current value of tail register */ volatile uint8_t *qtx_tail; /**< register address of tail */ uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */ diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c index 95829f65d5..ca1038eaa6 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c +++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c @@ -553,7 +553,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c index 6dd6e55d9c..e8441de759 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c +++ 
b/drivers/net/i40e/i40e_rxtx_vec_avx2.c @@ -745,7 +745,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c index 506f1b5878..8b8a16daa8 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c @@ -757,7 +757,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue, static __rte_always_inline int i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) { - struct i40e_vec_tx_entry *txep; + struct ci_tx_entry_vec *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -920,7 +920,7 @@ vtx(volatile struct i40e_tx_desc *txdp, } static __rte_always_inline void -tx_backlog_entry_avx512(struct i40e_vec_tx_entry *txep, +tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -935,7 +935,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_vec_tx_entry *txep; + struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index 1248cecacd..619fb89110 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -19,7 +19,7 @@ static __rte_always_inline int i40e_tx_free_bufs(struct i40e_tx_queue *txq) { - struct i40e_tx_entry *txep; + struct ci_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq) 
} static __rte_always_inline void -tx_backlog_entry(struct i40e_tx_entry *txep, +tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c index 159d971796..9b90a32e28 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_neon.c +++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c @@ -681,7 +681,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index 3a8128e014..e1fa2ed543 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -700,7 +700,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct i40e_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index 6a093c6746..e337f20073 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -284,7 +284,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq) static inline void reset_tx_queue(struct iavf_tx_queue *txq) { - struct iavf_tx_entry *txe; + struct ci_tx_entry *txe; uint32_t i, size; uint16_t prev; @@ -860,7 +860,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket("iavf tx sw ring", - sizeof(struct iavf_tx_entry) * nb_desc, + sizeof(struct ci_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (!txq->sw_ring) { @@ -2379,7 
+2379,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue, static inline int iavf_xmit_cleanup(struct iavf_tx_queue *txq) { - struct iavf_tx_entry *sw_ring = txq->sw_ring; + struct ci_tx_entry *sw_ring = txq->sw_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; uint16_t desc_to_clean_to; @@ -2797,8 +2797,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct iavf_tx_queue *txq = tx_queue; volatile struct iavf_tx_desc *txr = txq->tx_ring; - struct iavf_tx_entry *txe_ring = txq->sw_ring; - struct iavf_tx_entry *txe, *txn; + struct ci_tx_entry *txe_ring = txq->sw_ring; + struct ci_tx_entry *txe, *txn; struct rte_mbuf *mb, *mb_seg; uint64_t buf_dma_addr; uint16_t desc_idx, desc_idx_last; @@ -4268,7 +4268,7 @@ static int iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq, uint32_t free_cnt) { - struct iavf_tx_entry *swr_ring = txq->sw_ring; + struct ci_tx_entry *swr_ring = txq->sw_ring; uint16_t tx_last, tx_id; uint16_t nb_tx_free_last; uint16_t nb_tx_to_clean; diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index 7b56076d32..1a191f2c89 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -5,6 +5,8 @@ #ifndef _IAVF_RXTX_H_ #define _IAVF_RXTX_H_ +#include <_common_intel/tx.h> + /* In QLEN must be whole number of 32 descriptors. */ #define IAVF_ALIGN_RING_DESC 32 #define IAVF_MIN_RING_DESC 64 @@ -271,22 +273,12 @@ struct iavf_rx_queue { uint64_t hw_time_update; }; -struct iavf_tx_entry { - struct rte_mbuf *mbuf; - uint16_t next_id; - uint16_t last_id; -}; - -struct iavf_tx_vec_entry { - struct rte_mbuf *mbuf; -}; - /* Structure associated with each TX queue. 
*/ struct iavf_tx_queue { const struct rte_memzone *mz; /* memzone for Tx ring */ volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */ - struct iavf_tx_entry *sw_ring; /* address array of SW ring */ + struct ci_tx_entry *sw_ring; /* address array of SW ring */ uint16_t nb_tx_desc; /* ring length */ uint16_t tx_tail; /* current value of tail */ volatile uint8_t *qtx_tail; /* register address of tail */ diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index 0baf5045c8..e7d3d52655 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -1736,7 +1736,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, { struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; /* bit2 is reserved and must be set to 1 according to Spec */ uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c index 5a88007096..a899309f94 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c @@ -1847,7 +1847,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue, static __rte_always_inline int iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) { - struct iavf_tx_vec_entry *txep; + struct ci_tx_entry_vec *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -1960,7 +1960,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry_avx512(struct iavf_tx_vec_entry *txep, +tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -2313,7 +2313,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, { struct iavf_tx_queue 
*txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_vec_entry *txep; + struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; /* bit2 is reserved and must be set to 1 according to Spec */ uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; @@ -2380,7 +2380,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, { struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_vec_entry *txep; + struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, nb_mbuf, tx_id; /* bit2 is reserved and must be set to 1 according to Spec */ uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; @@ -2478,7 +2478,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq) const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */ const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */ - struct iavf_tx_vec_entry *swr = (void *)txq->sw_ring; + struct ci_tx_entry_vec *swr = (void *)txq->sw_ring; if (!txq->sw_ring || txq->nb_free == max_desc) return; diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index 26b6f07614..df40857218 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -19,7 +19,7 @@ static __rte_always_inline int iavf_tx_free_bufs(struct iavf_tx_queue *txq) { - struct iavf_tx_entry *txep; + struct ci_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -74,7 +74,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry(struct iavf_tx_entry *txep, +tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index 48b01462ea..0a30b1ef64 100644 --- 
a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -1368,7 +1368,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; - struct iavf_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */ uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 91f4943a11..4b98e4066b 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -389,7 +389,7 @@ reset_rx_queue(struct ice_rx_queue *rxq) static inline void reset_tx_queue(struct ice_tx_queue *txq) { - struct ice_tx_entry *txe; + struct ci_tx_entry *txe; uint32_t i, size; uint16_t prev; diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 0c7106c7e0..d584086a36 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -1028,7 +1028,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq) static void ice_reset_tx_queue(struct ice_tx_queue *txq) { - struct ice_tx_entry *txe; + struct ci_tx_entry *txe; uint16_t i, prev, size; if (!txq) { @@ -1509,7 +1509,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket(NULL, - sizeof(struct ice_tx_entry) * nb_desc, + sizeof(struct ci_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (!txq->sw_ring) { @@ -2837,7 +2837,7 @@ ice_txd_enable_checksum(uint64_t ol_flags, static inline int ice_xmit_cleanup(struct ice_tx_queue *txq) { - struct ice_tx_entry *sw_ring = txq->sw_ring; + struct ci_tx_entry *sw_ring = txq->sw_ring; volatile struct ice_tx_desc *txd = txq->tx_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; @@ -2961,8 +2961,8 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t 
nb_pkts) struct ice_tx_queue *txq; volatile struct ice_tx_desc *tx_ring; volatile struct ice_tx_desc *txd; - struct ice_tx_entry *sw_ring; - struct ice_tx_entry *txe, *txn; + struct ci_tx_entry *sw_ring; + struct ci_tx_entry *txe, *txn; struct rte_mbuf *tx_pkt; struct rte_mbuf *m_seg; uint32_t cd_tunneling_params; @@ -3184,7 +3184,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) static __rte_always_inline int ice_tx_free_bufs(struct ice_tx_queue *txq) { - struct ice_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t i; if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & @@ -3221,7 +3221,7 @@ static int ice_tx_done_cleanup_full(struct ice_tx_queue *txq, uint32_t free_cnt) { - struct ice_tx_entry *swr_ring = txq->sw_ring; + struct ci_tx_entry *swr_ring = txq->sw_ring; uint16_t i, tx_last, tx_id; uint16_t nb_tx_free_last; uint16_t nb_tx_to_clean; @@ -3361,7 +3361,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail]; - struct ice_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; + struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP - 1; int mainpart, leftover; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index 45f25b3609..8d1a1a8676 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -5,6 +5,7 @@ #ifndef _ICE_RXTX_H_ #define _ICE_RXTX_H_ +#include <_common_intel/tx.h> #include "ice_ethdev.h" #define ICE_ALIGN_RING_DESC 32 @@ -144,21 +145,11 @@ struct ice_rx_queue { bool ts_enable; /* if rxq timestamp is enabled */ }; -struct ice_tx_entry { - struct rte_mbuf *mbuf; - uint16_t next_id; - uint16_t last_id; -}; - -struct ice_vec_tx_entry { - struct rte_mbuf *mbuf; -}; - struct ice_tx_queue { uint16_t nb_tx_desc; /* number of TX descriptors */ rte_iova_t tx_ring_dma; /* TX ring DMA address */ volatile struct ice_tx_desc 
*tx_ring; /* TX ring virtual address */ - struct ice_tx_entry *sw_ring; /* virtual address of SW ring */ + struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ uint16_t tx_tail; /* current value of tail register */ volatile uint8_t *qtx_tail; /* register address of tail */ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */ diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c index ca247b155c..cf1862263a 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx2.c +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c @@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; - struct ice_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = ICE_TD_CMD; uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD; diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c index 1e603d5d8f..6b6aa3f1fe 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx512.c +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c @@ -862,7 +862,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue, static __rte_always_inline int ice_tx_free_bufs_avx512(struct ice_tx_queue *txq) { - struct ice_vec_tx_entry *txep; + struct ci_tx_entry_vec *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -1040,7 +1040,7 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt, } static __rte_always_inline void -ice_tx_backlog_entry_avx512(struct ice_vec_tx_entry *txep, +ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -1055,7 +1055,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; - struct ice_vec_tx_entry *txep; + struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = 
ICE_TD_CMD; uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD; diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index dd7da4761f..3dc6061e84 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -15,7 +15,7 @@ static __rte_always_inline int ice_tx_free_bufs_vec(struct ice_tx_queue *txq) { - struct ice_tx_entry *txep; + struct ci_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; @@ -70,7 +70,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq) } static __rte_always_inline void -ice_tx_backlog_entry(struct ice_tx_entry *txep, +ice_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -135,7 +135,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq) if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 || dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) { - struct ice_vec_tx_entry *swr = (void *)txq->sw_ring; + struct ci_tx_entry_vec *swr = (void *)txq->sw_ring; if (txq->tx_tail < i) { for (; i < txq->nb_tx_desc; i++) { diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index 01533454ba..889b754cc1 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; - struct ice_tx_entry *txep; + struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = ICE_TD_CMD; uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD; diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c index d451562269..2241726ad8 100644 --- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c +++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c @@ -52,7 +52,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue, struct rte_eth_recycle_rxq_info 
*recycle_rxq_info) { struct ixgbe_tx_queue *txq = tx_queue; - struct ixgbe_tx_entry *txep; + struct ci_tx_entry *txep; struct rte_mbuf **rxep; int i, n; uint32_t status; diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index 7d16eb9df7..db4b993ebc 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -100,7 +100,7 @@ static __rte_always_inline int ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) { - struct ixgbe_tx_entry *txep; + struct ci_tx_entry *txep; uint32_t status; int i, nb_free = 0; struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ]; @@ -199,7 +199,7 @@ ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]); - struct ixgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]); + struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP-1; int mainpart, leftover; @@ -563,7 +563,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags) static inline int ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) { - struct ixgbe_tx_entry *sw_ring = txq->sw_ring; + struct ci_tx_entry *sw_ring = txq->sw_ring; volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; @@ -624,8 +624,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct ixgbe_tx_queue *txq; - struct ixgbe_tx_entry *sw_ring; - struct ixgbe_tx_entry *txe, *txn; + struct ci_tx_entry *sw_ring; + struct ci_tx_entry *txe, *txn; volatile union ixgbe_adv_tx_desc *txr; volatile union ixgbe_adv_tx_desc *txd, *txp; struct rte_mbuf *tx_pkt; @@ -2352,7 +2352,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq) static int ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt) { - struct ixgbe_tx_entry *swr_ring = txq->sw_ring; + struct ci_tx_entry 
*swr_ring = txq->sw_ring; uint16_t i, tx_last, tx_id; uint16_t nb_tx_free_last; uint16_t nb_tx_to_clean; @@ -2490,7 +2490,7 @@ static void __rte_cold ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq) { static const union ixgbe_adv_tx_desc zeroed_desc = {{0}}; - struct ixgbe_tx_entry *txe = txq->sw_ring; + struct ci_tx_entry *txe = txq->sw_ring; uint16_t prev, i; /* Zero out HW ring memory */ @@ -2795,7 +2795,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket("txq->sw_ring", - sizeof(struct ixgbe_tx_entry) * nb_desc, + sizeof(struct ci_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (txq->sw_ring == NULL) { ixgbe_tx_queue_release(txq); diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 0550c1da60..1647396419 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -5,6 +5,8 @@ #ifndef _IXGBE_RXTX_H_ #define _IXGBE_RXTX_H_ +#include <_common_intel/tx.h> + /* * Rings setup and release. * @@ -75,22 +77,6 @@ struct ixgbe_scattered_rx_entry { struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */ }; -/** - * Structure associated with each descriptor of the TX ring of a TX queue. - */ -struct ixgbe_tx_entry { - struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */ - uint16_t next_id; /**< Index of next descriptor in ring. */ - uint16_t last_id; /**< Index of last scattered descriptor. */ -}; - -/** - * Structure associated with each descriptor of the TX ring of a TX queue. - */ -struct ixgbe_tx_entry_v { - struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */ -}; - /** * Structure associated with each RX queue. */ @@ -202,8 +188,8 @@ struct ixgbe_tx_queue { volatile union ixgbe_adv_tx_desc *tx_ring; uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */ union { - struct ixgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. 
*/ - struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */ + struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */ + struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */ }; volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */ uint16_t nb_tx_desc; /**< number of TX descriptors. */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h index 2bab17c934..e9592c0d08 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h @@ -14,7 +14,7 @@ static __rte_always_inline int ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) { - struct ixgbe_tx_entry_v *txep; + struct ci_tx_entry_vec *txep; uint32_t status; uint32_t n; uint32_t i; @@ -69,7 +69,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry(struct ixgbe_tx_entry_v *txep, +tx_backlog_entry(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -82,7 +82,7 @@ static inline void _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq) { unsigned int i; - struct ixgbe_tx_entry_v *txe; + struct ci_tx_entry_vec *txe; const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc) @@ -149,7 +149,7 @@ static inline void _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq) { static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } }; - struct ixgbe_tx_entry_v *txe = txq->sw_ring_v; + struct ci_tx_entry_vec *txe = txq->sw_ring_v; uint16_t i; /* Zero out HW ring memory */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index 7b35093075..02b53c008e 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -573,7 +573,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ixgbe_tx_queue *txq = (struct 
ixgbe_tx_queue *)tx_queue; volatile union ixgbe_adv_tx_desc *txdp; - struct ixgbe_tx_entry_v *txep; + struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = DCMD_DTYP_FLAGS; uint64_t rs = IXGBE_ADVTXD_DCMD_RS | DCMD_DTYP_FLAGS; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index a709bf8c7f..c8b5377c9f 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -695,7 +695,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; volatile union ixgbe_adv_tx_desc *txdp; - struct ixgbe_tx_entry_v *txep; + struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = DCMD_DTYP_FLAGS; uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS; From patchwork Tue Dec 3 16:41:09 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 148987 X-Patchwork-Delegate: thomas@monjalon.net From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson , David Christensen , Ian Stokes , Konstantin Ananyev , Wathsala Vithanage , Vladimir Medvedkin , Anatoly Burakov Subject: [PATCH v2 03/22] net/_common_intel: add Tx mbuf ring replenish fn Date: Tue, 3 Dec 2024 16:41:09 +0000 Message-ID: <20241203164132.2686558-4-bruce.richardson@intel.com> In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com> References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com> Move the short function used to place mbufs on the SW Tx ring to common code to avoid duplication. 
Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 7 +++++++ drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 ++-- drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 ++-- drivers/net/i40e/i40e_rxtx_vec_common.h | 10 ---------- drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 ++-- drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 ++-- drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 ++-- drivers/net/iavf/iavf_rxtx_vec_common.h | 10 ---------- drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 ++-- drivers/net/ice/ice_rxtx_vec_avx2.c | 4 ++-- drivers/net/ice/ice_rxtx_vec_common.h | 10 ---------- drivers/net/ice/ice_rxtx_vec_sse.c | 4 ++-- 12 files changed, 23 insertions(+), 46 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index 384352b9db..5397007411 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -24,4 +24,11 @@ struct ci_tx_entry_vec { struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */ }; +static __rte_always_inline void +ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + for (uint16_t i = 0; i < nb_pkts; ++i) + txep[i].mbuf = tx_pkts[i]; +} + #endif /* _COMMON_INTEL_TX_H_ */ diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c index ca1038eaa6..80f07a3e10 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c +++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c @@ -575,7 +575,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp) vtx1(txdp, *tx_pkts, flags); @@ -592,7 +592,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txep = &txq->sw_ring[tx_id]; } - tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); vtx(txdp, tx_pkts, nb_commit, flags); 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c index e8441de759..b26bae4757 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c @@ -765,7 +765,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); vtx(txdp, tx_pkts, n - 1, flags); tx_pkts += (n - 1); @@ -783,7 +783,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, txep = &txq->sw_ring[tx_id]; } - tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); vtx(txdp, tx_pkts, nb_commit, flags); diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index 619fb89110..325e99c1a4 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -84,16 +84,6 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq) return txq->tx_rs_thresh; } -static __rte_always_inline void -tx_backlog_entry(struct ci_tx_entry *txep, - struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - int i; - - for (i = 0; i < (int)nb_pkts; ++i) - txep[i].mbuf = tx_pkts[i]; -} - static inline void _i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq) { diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c index 9b90a32e28..26bc345a0a 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_neon.c +++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c @@ -702,7 +702,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp) vtx1(txdp, *tx_pkts, flags); @@ -719,7 +719,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, txep = &txq->sw_ring[tx_id]; } - 
tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); vtx(txdp, tx_pkts, nb_commit, flags); diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index e1fa2ed543..ebc32b0d27 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -721,7 +721,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp) vtx1(txdp, *tx_pkts, flags); @@ -738,7 +738,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txep = &txq->sw_ring[tx_id]; } - tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); vtx(txdp, tx_pkts, nb_commit, flags); diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index e7d3d52655..28885800e0 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -1757,7 +1757,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); iavf_vtx(txdp, tx_pkts, n - 1, flags, offload); tx_pkts += (n - 1); @@ -1775,7 +1775,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, txep = &txq->sw_ring[tx_id]; } - tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload); diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index df40857218..2c118cc059 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -73,16 +73,6 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq) return 
txq->rs_thresh; } -static __rte_always_inline void -tx_backlog_entry(struct ci_tx_entry *txep, - struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - int i; - - for (i = 0; i < (int)nb_pkts; ++i) - txep[i].mbuf = tx_pkts[i]; -} - static inline void _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq) { diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index 0a30b1ef64..bc4b8f14c8 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -1390,7 +1390,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp) vtx1(txdp, *tx_pkts, flags); @@ -1407,7 +1407,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txep = &txq->sw_ring[tx_id]; } - tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); iavf_vtx(txdp, tx_pkts, nb_commit, flags); diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c index cf1862263a..336697e72d 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx2.c +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c @@ -881,7 +881,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - ice_tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); ice_vtx(txdp, tx_pkts, n - 1, flags, offload); tx_pkts += (n - 1); @@ -899,7 +899,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, txep = &txq->sw_ring[tx_id]; } - ice_tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); ice_vtx(txdp, tx_pkts, nb_commit, flags, offload); diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 3dc6061e84..32e4541267 
100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -69,16 +69,6 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq) return txq->tx_rs_thresh; } -static __rte_always_inline void -ice_tx_backlog_entry(struct ci_tx_entry *txep, - struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - int i; - - for (i = 0; i < (int)nb_pkts; ++i) - txep[i].mbuf = tx_pkts[i]; -} - static inline void _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq) { diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index 889b754cc1..debdd8f6a2 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -724,7 +724,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - ice_tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry(txep, tx_pkts, n); for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp) ice_vtx1(txdp, *tx_pkts, flags); @@ -741,7 +741,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txep = &txq->sw_ring[tx_id]; } - ice_tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry(txep, tx_pkts, nb_commit); ice_vtx(txdp, tx_pkts, nb_commit, flags);
From patchwork Tue Dec 3 16:41:10 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148988
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Konstantin Ananyev, Anatoly Burakov, Wathsala Vithanage
Subject: [PATCH v2 04/22] drivers/net: align Tx queue struct field names
Date: Tue, 3 Dec 2024 16:41:10 +0000
Message-ID: <20241203164132.2686558-5-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>
Across the various Intel drivers, fields in the Tx queue structure that serve the same function are sometimes given different names. Rename these fields consistently to make future merging easier.

Signed-off-by: Bruce Richardson --- drivers/net/i40e/i40e_rxtx.c | 6 +-- drivers/net/i40e/i40e_rxtx.h | 2 +- drivers/net/iavf/iavf_rxtx.c | 60 ++++++++++++------------- drivers/net/iavf/iavf_rxtx.h | 14 +++--- drivers/net/iavf/iavf_rxtx_vec_avx2.c | 19 ++++---- drivers/net/iavf/iavf_rxtx_vec_avx512.c | 57 +++++++++++------------ drivers/net/iavf/iavf_rxtx_vec_common.h | 24 +++++----- drivers/net/iavf/iavf_rxtx_vec_sse.c | 18 ++++---- drivers/net/iavf/iavf_vchnl.c | 2 +- drivers/net/ixgbe/base/ixgbe_osdep.h | 2 +- drivers/net/ixgbe/ixgbe_rxtx.c | 16 +++---- drivers/net/ixgbe/ixgbe_rxtx.h | 6 +-- drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 2 +- drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +- 14 files changed, 116 insertions(+), 114 deletions(-) diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 2e1f07d2a1..b0bb20fe9a 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -2549,7 +2549,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->vsi = vsi; txq->tx_deferred_start = tx_conf->tx_deferred_start; - txq->tx_ring_phys_addr = tz->iova; + txq->tx_ring_dma = tz->iova; txq->tx_ring = (struct i40e_tx_desc *)tz->addr; /* Allocate software ring */ @@ -2923,7 +2923,7 @@ i40e_tx_queue_init(struct i40e_tx_queue *txq) /* clear the context structure first */ memset(&tx_ctx, 0, sizeof(tx_ctx)); tx_ctx.new_context = 1; - tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT; + tx_ctx.base = txq->tx_ring_dma / I40E_QUEUE_BASE_ADDR_UNIT; tx_ctx.qlen = txq->nb_tx_desc; #ifdef RTE_LIBRTE_IEEE1588 @@ -3209,7 +3209,7 @@
i40e_fdir_setup_tx_resources(struct i40e_pf *pf) txq->reg_idx = pf->fdir.fdir_vsi->base_queue; txq->vsi = pf->fdir.fdir_vsi; - txq->tx_ring_phys_addr = tz->iova; + txq->tx_ring_dma = tz->iova; txq->tx_ring = (struct i40e_tx_desc *)tz->addr; /* diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index 0f5d3cb0b7..f420c98687 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ -129,7 +129,7 @@ struct i40e_rx_queue { */ struct i40e_tx_queue { uint16_t nb_tx_desc; /**< number of TX descriptors */ - uint64_t tx_ring_phys_addr; /**< TX ring DMA address */ + rte_iova_t tx_ring_dma; /**< TX ring DMA address */ volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */ struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */ uint16_t tx_tail; /**< current value of tail register */ diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index e337f20073..adaaeb4625 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -216,8 +216,8 @@ static inline bool check_tx_vec_allow(struct iavf_tx_queue *txq) { if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) && - txq->rs_thresh >= IAVF_VPMD_TX_MAX_BURST && - txq->rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) { + txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST && + txq->tx_rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) { PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq."); return true; } @@ -309,13 +309,13 @@ reset_tx_queue(struct iavf_tx_queue *txq) } txq->tx_tail = 0; - txq->nb_used = 0; + txq->nb_tx_used = 0; txq->last_desc_cleaned = txq->nb_tx_desc - 1; - txq->nb_free = txq->nb_tx_desc - 1; + txq->nb_tx_free = txq->nb_tx_desc - 1; - txq->next_dd = txq->rs_thresh - 1; - txq->next_rs = txq->rs_thresh - 1; + txq->tx_next_dd = txq->tx_rs_thresh - 1; + txq->tx_next_rs = txq->tx_rs_thresh - 1; } static int @@ -845,8 +845,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, } txq->nb_tx_desc = nb_desc; - txq->rs_thresh = tx_rs_thresh; - 
txq->free_thresh = tx_free_thresh; + txq->tx_rs_thresh = tx_rs_thresh; + txq->tx_free_thresh = tx_free_thresh; txq->queue_id = queue_idx; txq->port_id = dev->data->port_id; txq->offloads = offloads; @@ -881,7 +881,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, rte_free(txq); return -ENOMEM; } - txq->tx_ring_phys_addr = mz->iova; + txq->tx_ring_dma = mz->iova; txq->tx_ring = (struct iavf_tx_desc *)mz->addr; txq->mz = mz; @@ -2387,7 +2387,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq) volatile struct iavf_tx_desc *txd = txq->tx_ring; - desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh); + desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh); if (desc_to_clean_to >= nb_tx_desc) desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc); @@ -2411,7 +2411,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq) txd[desc_to_clean_to].cmd_type_offset_bsz = 0; txq->last_desc_cleaned = desc_to_clean_to; - txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean); return 0; } @@ -2807,7 +2807,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Check if the descriptor ring needs to be cleaned. 
*/ - if (txq->nb_free < txq->free_thresh) + if (txq->nb_tx_free < txq->tx_free_thresh) iavf_xmit_cleanup(txq); desc_idx = txq->tx_tail; @@ -2862,14 +2862,14 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) "port_id=%u queue_id=%u tx_first=%u tx_last=%u", txq->port_id, txq->queue_id, desc_idx, desc_idx_last); - if (nb_desc_required > txq->nb_free) { + if (nb_desc_required > txq->nb_tx_free) { if (iavf_xmit_cleanup(txq)) { if (idx == 0) return 0; goto end_of_tx; } - if (unlikely(nb_desc_required > txq->rs_thresh)) { - while (nb_desc_required > txq->nb_free) { + if (unlikely(nb_desc_required > txq->tx_rs_thresh)) { + while (nb_desc_required > txq->nb_tx_free) { if (iavf_xmit_cleanup(txq)) { if (idx == 0) return 0; @@ -2991,10 +2991,10 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* The last packet data descriptor needs End Of Packet (EOP) */ ddesc_cmd = IAVF_TX_DESC_CMD_EOP; - txq->nb_used = (uint16_t)(txq->nb_used + nb_desc_required); - txq->nb_free = (uint16_t)(txq->nb_free - nb_desc_required); + txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_desc_required); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_desc_required); - if (txq->nb_used >= txq->rs_thresh) { + if (txq->nb_tx_used >= txq->tx_rs_thresh) { PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id=" "%4u (port=%d queue=%d)", desc_idx_last, txq->port_id, txq->queue_id); @@ -3002,7 +3002,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) ddesc_cmd |= IAVF_TX_DESC_CMD_RS; /* Update txq RS bit counters */ - txq->nb_used = 0; + txq->nb_tx_used = 0; } ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd << @@ -4278,11 +4278,11 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq, tx_id = txq->tx_tail; tx_last = tx_id; - if (txq->nb_free == 0 && iavf_xmit_cleanup(txq)) + if (txq->nb_tx_free == 0 && iavf_xmit_cleanup(txq)) return 0; - nb_tx_to_clean = txq->nb_free; - nb_tx_free_last = txq->nb_free; + nb_tx_to_clean = 
txq->nb_tx_free; + nb_tx_free_last = txq->nb_tx_free; if (!free_cnt) free_cnt = txq->nb_tx_desc; @@ -4305,16 +4305,16 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq, tx_id = swr_ring[tx_id].next_id; } while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last); - if (txq->rs_thresh > txq->nb_tx_desc - - txq->nb_free || tx_id == tx_last) + if (txq->tx_rs_thresh > txq->nb_tx_desc - + txq->nb_tx_free || tx_id == tx_last) break; if (pkt_cnt < free_cnt) { if (iavf_xmit_cleanup(txq)) break; - nb_tx_to_clean = txq->nb_free - nb_tx_free_last; - nb_tx_free_last = txq->nb_free; + nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last; + nb_tx_free_last = txq->nb_tx_free; } } @@ -4356,8 +4356,8 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->nb_desc = txq->nb_tx_desc; - qinfo->conf.tx_free_thresh = txq->free_thresh; - qinfo->conf.tx_rs_thresh = txq->rs_thresh; + qinfo->conf.tx_free_thresh = txq->tx_free_thresh; + qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh; qinfo->conf.offloads = txq->offloads; qinfo->conf.tx_deferred_start = txq->tx_deferred_start; } @@ -4432,8 +4432,8 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset) desc = txq->tx_tail + offset; /* go to next desc that has the RS bit */ - desc = ((desc + txq->rs_thresh - 1) / txq->rs_thresh) * - txq->rs_thresh; + desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) * + txq->tx_rs_thresh; if (desc >= txq->nb_tx_desc) { desc -= txq->nb_tx_desc; if (desc >= txq->nb_tx_desc) diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index 1a191f2c89..44e2de731c 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -277,25 +277,25 @@ struct iavf_rx_queue { struct iavf_tx_queue { const struct rte_memzone *mz; /* memzone for Tx ring */ volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */ - uint64_t tx_ring_phys_addr; /* Tx ring DMA address */ + rte_iova_t tx_ring_dma; /* Tx ring DMA address */ struct ci_tx_entry 
*sw_ring; /* address array of SW ring */ uint16_t nb_tx_desc; /* ring length */ uint16_t tx_tail; /* current value of tail */ volatile uint8_t *qtx_tail; /* register address of tail */ /* number of used desc since RS bit set */ - uint16_t nb_used; - uint16_t nb_free; + uint16_t nb_tx_used; + uint16_t nb_tx_free; uint16_t last_desc_cleaned; /* last desc have been cleaned*/ - uint16_t free_thresh; - uint16_t rs_thresh; + uint16_t tx_free_thresh; + uint16_t tx_rs_thresh; uint8_t rel_mbufs_type; struct iavf_vsi *vsi; /**< the VSI this queue belongs to */ uint16_t port_id; uint16_t queue_id; uint64_t offloads; - uint16_t next_dd; /* next to set RS, for VPMD */ - uint16_t next_rs; /* next to check DD, for VPMD */ + uint16_t tx_next_dd; /* next to set RS, for VPMD */ + uint16_t tx_next_rs; /* next to check DD, for VPMD */ uint16_t ipsec_crypto_pkt_md_offset; uint64_t mbuf_errors; diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index 28885800e0..42e09a2adf 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -1742,18 +1742,19 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; - if (txq->nb_free < txq->free_thresh) + if (txq->nb_tx_free < txq->tx_free_thresh) iavf_tx_free_bufs(txq); - nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts); + nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0)) return 0; + nb_commit = nb_pkts; tx_id = txq->tx_tail; txdp = &txq->tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; - txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { @@ -1768,7 +1769,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, nb_commit = (uint16_t)(nb_commit - n); tx_id = 0; - 
txq->next_rs = (uint16_t)(txq->rs_thresh - 1); + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ txdp = &txq->tx_ring[tx_id]; @@ -1780,12 +1781,12 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload); tx_id = (uint16_t)(tx_id + nb_commit); - if (tx_id > txq->next_rs) { - txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |= + if (tx_id > txq->tx_next_rs) { + txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); - txq->next_rs = - (uint16_t)(txq->next_rs + txq->rs_thresh); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); } txq->tx_tail = tx_id; @@ -1806,7 +1807,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t ret, num; /* cross rs_thresh boundary is not allowed */ - num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh); + num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh); ret = iavf_xmit_fixed_burst_vec_avx2(tx_queue, &tx_pkts[nb_tx], num, offload); nb_tx += ret; diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c index a899309f94..dc1fef24f0 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c @@ -1854,18 +1854,18 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF]; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz & + if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) return 0; - n = txq->rs_thresh >> txq->use_ctx; + n = txq->tx_rs_thresh >> txq->use_ctx; /* first buffer to free from S/W ring is at index * tx_next_dd - (tx_rs_thresh-1) */ txep = (void *)txq->sw_ring; - txep += (txq->next_dd >> txq->use_ctx) - (n - 1); + txep += 
(txq->tx_next_dd >> txq->use_ctx) - (n - 1); if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) { struct rte_mempool *mp = txep[0].mbuf->pool; @@ -1951,12 +1951,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) done: /* buffers were freed, update counters */ - txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh); - txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh); - if (txq->next_dd >= txq->nb_tx_desc) - txq->next_dd = (uint16_t)(txq->rs_thresh - 1); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); + if (txq->tx_next_dd >= txq->nb_tx_desc) + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - return txq->rs_thresh; + return txq->tx_rs_thresh; } static __rte_always_inline void @@ -2319,19 +2319,20 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; - if (txq->nb_free < txq->free_thresh) + if (txq->nb_tx_free < txq->tx_free_thresh) iavf_tx_free_bufs_avx512(txq); - nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts); + nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0)) return 0; + nb_commit = nb_pkts; tx_id = txq->tx_tail; txdp = &txq->tx_ring[tx_id]; txep = (void *)txq->sw_ring; txep += tx_id; - txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { @@ -2346,7 +2347,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, nb_commit = (uint16_t)(nb_commit - n); tx_id = 0; - txq->next_rs = (uint16_t)(txq->rs_thresh - 1); + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ txdp = &txq->tx_ring[tx_id]; @@ -2359,12 +2360,12 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf 
**tx_pkts, iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload); tx_id = (uint16_t)(tx_id + nb_commit); - if (tx_id > txq->next_rs) { - txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |= + if (tx_id > txq->tx_next_rs) { + txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); - txq->next_rs = - (uint16_t)(txq->next_rs + txq->rs_thresh); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); } txq->tx_tail = tx_id; @@ -2386,10 +2387,10 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC; uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; - if (txq->nb_free < txq->free_thresh) + if (txq->nb_tx_free < txq->tx_free_thresh) iavf_tx_free_bufs_avx512(txq); - nb_commit = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts << 1); + nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1); nb_commit &= 0xFFFE; if (unlikely(nb_commit == 0)) return 0; @@ -2400,7 +2401,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, txep = (void *)txq->sw_ring; txep += (tx_id >> 1); - txq->nb_free = (uint16_t)(txq->nb_free - nb_commit); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_commit); n = (uint16_t)(txq->nb_tx_desc - tx_id); if (n != 0 && nb_commit >= n) { @@ -2414,7 +2415,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, nb_commit = (uint16_t)(nb_commit - n); - txq->next_rs = (uint16_t)(txq->rs_thresh - 1); + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); tx_id = 0; /* avoid reach the end of ring */ txdp = txq->tx_ring; @@ -2427,12 +2428,12 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, ctx_vtx(txdp, tx_pkts, nb_mbuf, flags, offload, txq->vlan_flag); tx_id = (uint16_t)(tx_id + nb_commit); - if (tx_id > txq->next_rs) { - txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |= + if (tx_id > txq->tx_next_rs) { + 
txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); - txq->next_rs = - (uint16_t)(txq->next_rs + txq->rs_thresh); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); } txq->tx_tail = tx_id; @@ -2452,7 +2453,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t ret, num; /* cross rs_thresh boundary is not allowed */ - num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh); + num = (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh); ret = iavf_xmit_fixed_burst_vec_avx512(tx_queue, &tx_pkts[nb_tx], num, offload); nb_tx += ret; @@ -2480,10 +2481,10 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq) const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */ struct ci_tx_entry_vec *swr = (void *)txq->sw_ring; - if (!txq->sw_ring || txq->nb_free == max_desc) + if (!txq->sw_ring || txq->nb_tx_free == max_desc) return; - i = (txq->next_dd - txq->rs_thresh + 1) >> txq->use_ctx; + i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx; while (i != end_desc) { rte_pktmbuf_free_seg(swr[i].mbuf); swr[i].mbuf = NULL; @@ -2517,7 +2518,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t ret, num; /* cross rs_thresh boundary is not allowed */ - num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->rs_thresh); + num = (uint16_t)RTE_MIN(nb_pkts << 1, txq->tx_rs_thresh); num = num >> 1; ret = iavf_xmit_fixed_burst_vec_avx512_ctx(tx_queue, &tx_pkts[nb_tx], num, offload); diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index 2c118cc059..ff24055c34 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -26,17 +26,17 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq) struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF]; /* check DD bits on threshold descriptor */ - if 
((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz & + if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) return 0; - n = txq->rs_thresh; + n = txq->tx_rs_thresh; /* first buffer to free from S/W ring is at index * tx_next_dd - (tx_rs_thresh-1) */ - txep = &txq->sw_ring[txq->next_dd - (n - 1)]; + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)]; m = rte_pktmbuf_prefree_seg(txep[0].mbuf); if (likely(m != NULL)) { free[0] = m; @@ -65,12 +65,12 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq) } /* buffers were freed, update counters */ - txq->nb_free = (uint16_t)(txq->nb_free + txq->rs_thresh); - txq->next_dd = (uint16_t)(txq->next_dd + txq->rs_thresh); - if (txq->next_dd >= txq->nb_tx_desc) - txq->next_dd = (uint16_t)(txq->rs_thresh - 1); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); + if (txq->tx_next_dd >= txq->nb_tx_desc) + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - return txq->rs_thresh; + return txq->tx_rs_thresh; } static inline void @@ -109,10 +109,10 @@ _iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq) unsigned i; const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); - if (!txq->sw_ring || txq->nb_free == max_desc) + if (!txq->sw_ring || txq->nb_tx_free == max_desc) return; - i = txq->next_dd - txq->rs_thresh + 1; + i = txq->tx_next_dd - txq->tx_rs_thresh + 1; while (i != txq->tx_tail) { rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); txq->sw_ring[i].mbuf = NULL; @@ -169,8 +169,8 @@ iavf_tx_vec_queue_default(struct iavf_tx_queue *txq) if (!txq) return -1; - if (txq->rs_thresh < IAVF_VPMD_TX_MAX_BURST || - txq->rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF) + if (txq->tx_rs_thresh < IAVF_VPMD_TX_MAX_BURST || + txq->tx_rs_thresh > IAVF_VPMD_TX_MAX_FREE_BUF) return -1; if (txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c 
b/drivers/net/iavf/iavf_rxtx_vec_sse.c index bc4b8f14c8..ed8455d669 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -1374,10 +1374,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; int i; - if (txq->nb_free < txq->free_thresh) + if (txq->nb_tx_free < txq->tx_free_thresh) iavf_tx_free_bufs(txq); - nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts); + nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0)) return 0; nb_commit = nb_pkts; @@ -1386,7 +1386,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txdp = &txq->tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; - txq->nb_free = (uint16_t)(txq->nb_free - nb_pkts); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { @@ -1400,7 +1400,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, nb_commit = (uint16_t)(nb_commit - n); tx_id = 0; - txq->next_rs = (uint16_t)(txq->rs_thresh - 1); + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ txdp = &txq->tx_ring[tx_id]; @@ -1412,12 +1412,12 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, iavf_vtx(txdp, tx_pkts, nb_commit, flags); tx_id = (uint16_t)(tx_id + nb_commit); - if (tx_id > txq->next_rs) { - txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |= + if (tx_id > txq->tx_next_rs) { + txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); - txq->next_rs = - (uint16_t)(txq->next_rs + txq->rs_thresh); + txq->tx_next_rs = + (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); } txq->tx_tail = tx_id; @@ -1441,7 +1441,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t ret, num; /* cross rs_thresh boundary is not allowed */ - num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh); + num 
= (uint16_t)RTE_MIN(nb_pkts, txq->tx_rs_thresh); ret = iavf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num); nb_tx += ret; nb_pkts -= ret; diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c index 065ab3594c..0646a2f978 100644 --- a/drivers/net/iavf/iavf_vchnl.c +++ b/drivers/net/iavf/iavf_vchnl.c @@ -1247,7 +1247,7 @@ iavf_configure_queues(struct iavf_adapter *adapter, /* Virtchnnl configure tx queues by pairs */ if (i < adapter->dev_data->nb_tx_queues) { vc_qp->txq.ring_len = txq[i]->nb_tx_desc; - vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_phys_addr; + vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma; } vc_qp->rxq.vsi_id = vf->vsi_res->vsi_id; diff --git a/drivers/net/ixgbe/base/ixgbe_osdep.h b/drivers/net/ixgbe/base/ixgbe_osdep.h index 502f386b56..95dbe2bedd 100644 --- a/drivers/net/ixgbe/base/ixgbe_osdep.h +++ b/drivers/net/ixgbe/base/ixgbe_osdep.h @@ -124,7 +124,7 @@ static inline uint32_t ixgbe_read_addr(volatile void* addr) rte_write32_wc_relaxed((rte_cpu_to_le_32(value)), reg) #define IXGBE_PCI_REG_ADDR(hw, reg) \ - ((volatile uint32_t *)((char *)(hw)->hw_addr + (reg))) + ((volatile void *)((char *)(hw)->hw_addr + (reg))) #define IXGBE_PCI_REG_ARRAY_ADDR(hw, reg, index) \ IXGBE_PCI_REG_ADDR((hw), (reg) + ((index) << 2)) diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index db4b993ebc..0a80b944f0 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -308,7 +308,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, /* update tail pointer */ rte_wmb(); - IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, txq->tx_tail); + IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, txq->tx_tail); return nb_pkts; } @@ -946,7 +946,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u", (unsigned) txq->port_id, (unsigned) txq->queue_id, (unsigned) tx_id, (unsigned) nb_tx); - 
IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, tx_id); + IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->qtx_tail, tx_id); txq->tx_tail = tx_id; return nb_tx; @@ -2786,11 +2786,11 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, hw->mac.type == ixgbe_mac_X550_vf || hw->mac.type == ixgbe_mac_X550EM_x_vf || hw->mac.type == ixgbe_mac_X550EM_a_vf) - txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx)); + txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx)); else - txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx)); + txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx)); - txq->tx_ring_phys_addr = tz->iova; + txq->tx_ring_dma = tz->iova; txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr; /* Allocate software ring */ @@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64, - txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr); + txq->sw_ring, txq->tx_ring, txq->tx_ring_dma); /* set up vector or scalar TX function as appropriate */ ixgbe_set_tx_function(dev, txq); @@ -5303,7 +5303,7 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_tx_queues; i++) { txq = dev->data->tx_queues[i]; - bus_addr = txq->tx_ring_phys_addr; + bus_addr = txq->tx_ring_dma; IXGBE_WRITE_REG(hw, IXGBE_TDBAL(txq->reg_idx), (uint32_t)(bus_addr & 0x00000000ffffffffULL)); IXGBE_WRITE_REG(hw, IXGBE_TDBAH(txq->reg_idx), @@ -5887,7 +5887,7 @@ ixgbevf_dev_tx_init(struct rte_eth_dev *dev) /* Setup the Base and Length of the Tx Descriptor Rings */ for (i = 0; i < dev->data->nb_tx_queues; i++) { txq = dev->data->tx_queues[i]; - bus_addr = txq->tx_ring_phys_addr; + bus_addr = txq->tx_ring_dma; IXGBE_WRITE_REG(hw, IXGBE_VFTDBAL(i), (uint32_t)(bus_addr & 0x00000000ffffffffULL)); IXGBE_WRITE_REG(hw, IXGBE_VFTDBAH(i), diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 1647396419..00e2009b3e 100644 --- 
a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -186,12 +186,12 @@ struct ixgbe_advctx_info { struct ixgbe_tx_queue { /** TX ring virtual address. */ volatile union ixgbe_adv_tx_desc *tx_ring; - uint64_t tx_ring_phys_addr; /**< TX ring DMA address. */ + rte_iova_t tx_ring_dma; /**< TX ring DMA address. */ union { struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */ struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */ }; - volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */ + volatile uint8_t *qtx_tail; /**< Address of TDT register. */ uint16_t nb_tx_desc; /**< number of TX descriptors. */ uint16_t tx_tail; /**< current value of TDT reg. */ /**< Start freeing TX buffers if there are less free descriptors than @@ -218,7 +218,7 @@ struct ixgbe_tx_queue { /** Hardware context0 history. */ struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM]; const struct ixgbe_txq_ops *ops; /**< txq ops */ - uint8_t tx_deferred_start; /**< not in global dev start. */ + bool tx_deferred_start; /**< not in global dev start. 
*/ #ifdef RTE_LIB_SECURITY uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index 02b53c008e..871c1a7cd2 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -628,7 +628,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_tail = tx_id; - IXGBE_PCI_REG_WRITE(txq->tdt_reg_addr, txq->tx_tail); + IXGBE_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail); return nb_pkts; } diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index c8b5377c9f..37f2079519 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -751,7 +751,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_tail = tx_id; - IXGBE_PCI_REG_WC_WRITE(txq->tdt_reg_addr, txq->tx_tail); + IXGBE_PCI_REG_WC_WRITE(txq->qtx_tail, txq->tx_tail); return nb_pkts; } From patchwork Tue Dec 3 16:41:11 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 148989 X-Patchwork-Delegate: thomas@monjalon.net From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson, Ian Stokes, David Christensen, Konstantin Ananyev, Wathsala Vithanage, Vladimir Medvedkin, Anatoly Burakov Subject: [PATCH v2 05/22] drivers/net: add prefix for driver-specific structs Date: Tue, 3 Dec 2024 16:41:11 +0000 Message-ID: <20241203164132.2686558-6-bruce.richardson@intel.com> In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com> References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com> List-Id: DPDK patches and discussions In preparation for
merging the Tx structs for multiple drivers into a single struct, rename the driver-specific pointers in each struct to have a prefix on them, to avoid conflicts. Signed-off-by: Bruce Richardson --- drivers/net/i40e/i40e_fdir.c | 6 +-- .../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +- drivers/net/i40e/i40e_rxtx.c | 30 ++++++------ drivers/net/i40e/i40e_rxtx.h | 4 +- drivers/net/i40e/i40e_rxtx_vec_altivec.c | 6 +-- drivers/net/i40e/i40e_rxtx_vec_avx2.c | 6 +-- drivers/net/i40e/i40e_rxtx_vec_avx512.c | 8 ++-- drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +- drivers/net/i40e/i40e_rxtx_vec_neon.c | 6 +-- drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +-- drivers/net/iavf/iavf_rxtx.c | 24 +++++----- drivers/net/iavf/iavf_rxtx.h | 4 +- drivers/net/iavf/iavf_rxtx_vec_avx2.c | 6 +-- drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++--- drivers/net/iavf/iavf_rxtx_vec_common.h | 2 +- drivers/net/iavf/iavf_rxtx_vec_sse.c | 6 +-- drivers/net/ice/ice_dcf_ethdev.c | 4 +- drivers/net/ice/ice_rxtx.c | 48 +++++++++---------- drivers/net/ice/ice_rxtx.h | 4 +- drivers/net/ice/ice_rxtx_vec_avx2.c | 6 +-- drivers/net/ice/ice_rxtx_vec_avx512.c | 8 ++-- drivers/net/ice/ice_rxtx_vec_common.h | 4 +- drivers/net/ice/ice_rxtx_vec_sse.c | 6 +-- .../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +- drivers/net/ixgbe/ixgbe_rxtx.c | 22 ++++----- drivers/net/ixgbe/ixgbe_rxtx.h | 2 +- drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 6 +-- drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 6 +-- drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 6 +-- 29 files changed, 128 insertions(+), 128 deletions(-) diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c index 47f79ecf11..c600167634 100644 --- a/drivers/net/i40e/i40e_fdir.c +++ b/drivers/net/i40e/i40e_fdir.c @@ -1383,7 +1383,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev) volatile struct i40e_tx_desc *tmp_txdp; tmp_tail = txq->tx_tail; - tmp_txdp = &txq->tx_ring[tmp_tail + 1]; + tmp_txdp = &txq->i40e_tx_ring[tmp_tail + 1]; do { if
((tmp_txdp->cmd_type_offset_bsz & @@ -1640,7 +1640,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf, PMD_DRV_LOG(INFO, "filling filter programming descriptor."); fdirdp = (volatile struct i40e_filter_program_desc *) - (&txq->tx_ring[txq->tx_tail]); + (&txq->i40e_tx_ring[txq->tx_tail]); fdirdp->qindex_flex_ptype_vsi = rte_cpu_to_le_32((fdir_action->rx_queue << @@ -1710,7 +1710,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf, fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id); PMD_DRV_LOG(INFO, "filling transmit descriptor."); - txdp = &txq->tx_ring[txq->tx_tail + 1]; + txdp = &txq->i40e_tx_ring[txq->tx_tail + 1]; txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail >> 1]); td_cmd = I40E_TX_DESC_CMD_EOP | diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c index 260d238ce4..8679e5c1fd 100644 --- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c +++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c @@ -75,7 +75,7 @@ i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue, return 0; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) return 0; diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index b0bb20fe9a..34ef931859 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -379,7 +379,7 @@ static inline int i40e_xmit_cleanup(struct i40e_tx_queue *txq) { struct ci_tx_entry *sw_ring = txq->sw_ring; - volatile struct i40e_tx_desc *txd = txq->tx_ring; + volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; uint16_t desc_to_clean_to; @@ -1103,7 +1103,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) txq = 
tx_queue; sw_ring = txq->sw_ring; - txr = txq->tx_ring; + txr = txq->i40e_tx_ring; tx_id = txq->tx_tail; txe = &sw_ring[tx_id]; @@ -1338,7 +1338,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq) const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh, RTE_I40E_TX_MAX_FREE_BUF_SZ); const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ; - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) return 0; @@ -1417,7 +1417,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { - volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]); + volatile struct i40e_tx_desc *txdp = &txq->i40e_tx_ring[txq->tx_tail]; struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP - 1; @@ -1445,7 +1445,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - volatile struct i40e_tx_desc *txr = txq->tx_ring; + volatile struct i40e_tx_desc *txr = txq->i40e_tx_ring; uint16_t n = 0; /** @@ -1556,7 +1556,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts bool pkt_error = false; const char *reason = NULL; uint16_t good_pkts = nb_pkts; - struct i40e_adapter *adapter = txq->vsi->adapter; + struct i40e_adapter *adapter = txq->i40e_vsi->adapter; for (idx = 0; idx < nb_pkts; idx++) { mb = tx_pkts[idx]; @@ -2329,7 +2329,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) desc -= txq->nb_tx_desc; } - status = &txq->tx_ring[desc].cmd_type_offset_bsz; + status = &txq->i40e_tx_ring[desc].cmd_type_offset_bsz; mask = rte_le_to_cpu_64(I40E_TXD_QW1_DTYPE_MASK); expect = rte_cpu_to_le_64( I40E_TX_DESC_DTYPE_DESC_DONE << I40E_TXD_QW1_DTYPE_SHIFT); @@ -2527,7 +2527,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate TX hardware ring 
descriptors. */ ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC; ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN); - tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, + tz = rte_eth_dma_zone_reserve(dev, "i40e_tx_ring", queue_idx, ring_size, I40E_RING_BASE_ALIGN, socket_id); if (!tz) { i40e_tx_queue_release(txq); @@ -2546,11 +2546,11 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->reg_idx = reg_idx; txq->port_id = dev->data->port_id; txq->offloads = offloads; - txq->vsi = vsi; + txq->i40e_vsi = vsi; txq->tx_deferred_start = tx_conf->tx_deferred_start; txq->tx_ring_dma = tz->iova; - txq->tx_ring = (struct i40e_tx_desc *)tz->addr; + txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr; /* Allocate software ring */ txq->sw_ring = @@ -2885,11 +2885,11 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq) txe = txq->sw_ring; size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc; for (i = 0; i < size; i++) - ((volatile char *)txq->tx_ring)[i] = 0; + ((volatile char *)txq->i40e_tx_ring)[i] = 0; prev = (uint16_t)(txq->nb_tx_desc - 1); for (i = 0; i < txq->nb_tx_desc; i++) { - volatile struct i40e_tx_desc *txd = &txq->tx_ring[i]; + volatile struct i40e_tx_desc *txd = &txq->i40e_tx_ring[i]; txd->cmd_type_offset_bsz = rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE); @@ -2914,7 +2914,7 @@ int i40e_tx_queue_init(struct i40e_tx_queue *txq) { enum i40e_status_code err = I40E_SUCCESS; - struct i40e_vsi *vsi = txq->vsi; + struct i40e_vsi *vsi = txq->i40e_vsi; struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); uint16_t pf_q = txq->reg_idx; struct i40e_hmc_obj_txq tx_ctx; @@ -3207,10 +3207,10 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf) txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC; txq->queue_id = I40E_FDIR_QUEUE_ID; txq->reg_idx = pf->fdir.fdir_vsi->base_queue; - txq->vsi = pf->fdir.fdir_vsi; + txq->i40e_vsi = pf->fdir.fdir_vsi; txq->tx_ring_dma = tz->iova; - txq->tx_ring = (struct i40e_tx_desc *)tz->addr; + txq->i40e_tx_ring = (struct i40e_tx_desc *)tz->addr; /* 
* don't need to allocate software ring and reset for the fdir diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index f420c98687..8315ee2f59 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ -130,7 +130,7 @@ struct i40e_rx_queue { struct i40e_tx_queue { uint16_t nb_tx_desc; /**< number of TX descriptors */ rte_iova_t tx_ring_dma; /**< TX ring DMA address */ - volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */ + volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */ struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */ uint16_t tx_tail; /**< current value of tail register */ volatile uint8_t *qtx_tail; /**< register address of tail */ @@ -150,7 +150,7 @@ struct i40e_tx_queue { uint16_t port_id; /**< Device port identifier. */ uint16_t queue_id; /**< TX queue index. */ uint16_t reg_idx; - struct i40e_vsi *vsi; /**< the VSI this queue belongs to */ + struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */ uint16_t tx_next_dd; uint16_t tx_next_rs; bool q_set; /**< indicate if tx queue has been configured */ diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c index 80f07a3e10..bf0e9ebd71 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c +++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c @@ -568,7 +568,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -588,7 +588,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -598,7 +598,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, 
tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) << I40E_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c index b26bae4757..5042e348db 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c @@ -758,7 +758,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -779,7 +779,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -789,7 +789,7 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) << I40E_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c index 8b8a16daa8..04fbe3b2e3 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c @@ -764,7 +764,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ]; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != 
rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) return 0; @@ -948,7 +948,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = (void *)txq->sw_ring; txep += tx_id; @@ -970,7 +970,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = txq->tx_ring; + txdp = txq->i40e_tx_ring; txep = (void *)txq->sw_ring; } @@ -980,7 +980,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) << I40E_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index 325e99c1a4..e81f958361 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -26,7 +26,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq) struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ]; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) return 0; diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c index 26bc345a0a..05191e4884 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_neon.c +++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c @@ -695,7 +695,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = 
(uint16_t)(txq->nb_tx_free - nb_pkts); @@ -715,7 +715,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -725,7 +725,7 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) << I40E_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index ebc32b0d27..d81b553842 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -714,7 +714,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -734,7 +734,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->i40e_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -744,7 +744,7 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->i40e_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) << I40E_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index adaaeb4625..6eda91e76b 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -296,11 +296,11 @@ 
reset_tx_queue(struct iavf_tx_queue *txq) txe = txq->sw_ring; size = sizeof(struct iavf_tx_desc) * txq->nb_tx_desc; for (i = 0; i < size; i++) - ((volatile char *)txq->tx_ring)[i] = 0; + ((volatile char *)txq->iavf_tx_ring)[i] = 0; prev = (uint16_t)(txq->nb_tx_desc - 1); for (i = 0; i < txq->nb_tx_desc; i++) { - txq->tx_ring[i].cmd_type_offset_bsz = + txq->iavf_tx_ring[i].cmd_type_offset_bsz = rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE); txe[i].mbuf = NULL; txe[i].last_id = i; @@ -851,7 +851,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->port_id = dev->data->port_id; txq->offloads = offloads; txq->tx_deferred_start = tx_conf->tx_deferred_start; - txq->vsi = vsi; + txq->iavf_vsi = vsi; if (iavf_ipsec_crypto_supported(adapter)) txq->ipsec_crypto_pkt_md_offset = @@ -872,7 +872,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate TX hardware ring descriptors. */ ring_size = sizeof(struct iavf_tx_desc) * IAVF_MAX_RING_DESC; ring_size = RTE_ALIGN(ring_size, IAVF_DMA_MEM_ALIGN); - mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, + mz = rte_eth_dma_zone_reserve(dev, "iavf_tx_ring", queue_idx, ring_size, IAVF_RING_BASE_ALIGN, socket_id); if (!mz) { @@ -882,7 +882,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } txq->tx_ring_dma = mz->iova; - txq->tx_ring = (struct iavf_tx_desc *)mz->addr; + txq->iavf_tx_ring = (struct iavf_tx_desc *)mz->addr; txq->mz = mz; reset_tx_queue(txq); @@ -2385,7 +2385,7 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq) uint16_t desc_to_clean_to; uint16_t nb_tx_to_clean; - volatile struct iavf_tx_desc *txd = txq->tx_ring; + volatile struct iavf_tx_desc *txd = txq->iavf_tx_ring; desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh); if (desc_to_clean_to >= nb_tx_desc) @@ -2796,7 +2796,7 @@ uint16_t iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct iavf_tx_queue *txq = tx_queue; - volatile struct iavf_tx_desc *txr = txq->tx_ring; + volatile 
struct iavf_tx_desc *txr = txq->iavf_tx_ring; struct ci_tx_entry *txe_ring = txq->sw_ring; struct ci_tx_entry *txe, *txn; struct rte_mbuf *mb, *mb_seg; @@ -3803,10 +3803,10 @@ iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts, struct iavf_tx_queue *txq = tx_queue; enum iavf_tx_burst_type tx_burst_type; - if (!txq->vsi || txq->vsi->adapter->no_poll) + if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll) return 0; - tx_burst_type = txq->vsi->adapter->tx_burst_type; + tx_burst_type = txq->iavf_vsi->adapter->tx_burst_type; return iavf_tx_pkt_burst_ops[tx_burst_type](tx_queue, tx_pkts, nb_pkts); @@ -3824,9 +3824,9 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, const char *reason = NULL; bool pkt_error = false; struct iavf_tx_queue *txq = tx_queue; - struct iavf_adapter *adapter = txq->vsi->adapter; + struct iavf_adapter *adapter = txq->iavf_vsi->adapter; enum iavf_tx_burst_type tx_burst_type = - txq->vsi->adapter->tx_burst_type; + txq->iavf_vsi->adapter->tx_burst_type; for (idx = 0; idx < nb_pkts; idx++) { mb = tx_pkts[idx]; @@ -4440,7 +4440,7 @@ iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset) desc -= txq->nb_tx_desc; } - status = &txq->tx_ring[desc].cmd_type_offset_bsz; + status = &txq->iavf_tx_ring[desc].cmd_type_offset_bsz; mask = rte_le_to_cpu_64(IAVF_TXD_QW1_DTYPE_MASK); expect = rte_cpu_to_le_64( IAVF_TX_DESC_DTYPE_DESC_DONE << IAVF_TXD_QW1_DTYPE_SHIFT); diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index 44e2de731c..cc1eaaf54c 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -276,7 +276,7 @@ struct iavf_rx_queue { /* Structure associated with each TX queue. 
*/ struct iavf_tx_queue { const struct rte_memzone *mz; /* memzone for Tx ring */ - volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */ + volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */ rte_iova_t tx_ring_dma; /* Tx ring DMA address */ struct ci_tx_entry *sw_ring; /* address array of SW ring */ uint16_t nb_tx_desc; /* ring length */ @@ -289,7 +289,7 @@ struct iavf_tx_queue { uint16_t tx_free_thresh; uint16_t tx_rs_thresh; uint8_t rel_mbufs_type; - struct iavf_vsi *vsi; /**< the VSI this queue belongs to */ + struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */ uint16_t port_id; uint16_t queue_id; diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index 42e09a2adf..f33ceceee1 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -1751,7 +1751,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, nb_commit = nb_pkts; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->iavf_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -1772,7 +1772,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->iavf_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -1782,7 +1782,7 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c index dc1fef24f0..97420a75fd 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c 
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c @@ -1854,7 +1854,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF]; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) return 0; @@ -2328,7 +2328,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, nb_commit = nb_pkts; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->iavf_tx_ring[tx_id]; txep = (void *)txq->sw_ring; txep += tx_id; @@ -2350,7 +2350,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->iavf_tx_ring[tx_id]; txep = (void *)txq->sw_ring; txep += tx_id; } @@ -2361,7 +2361,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = @@ -2397,7 +2397,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, nb_pkts = nb_commit >> 1; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->iavf_tx_ring[tx_id]; txep = (void *)txq->sw_ring; txep += (tx_id >> 1); @@ -2418,7 +2418,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); tx_id = 0; /* avoid reach the end of ring */ - txdp = txq->tx_ring; + txdp = txq->iavf_tx_ring; txep = (void *)txq->sw_ring; } @@ -2429,7 +2429,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, 
struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index ff24055c34..6305c8cdd6 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -26,7 +26,7 @@ iavf_tx_free_bufs(struct iavf_tx_queue *txq) struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF]; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) return 0; diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index ed8455d669..64c3bf0eaa 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -1383,7 +1383,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, nb_commit = nb_pkts; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->iavf_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -1403,7 +1403,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->iavf_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -1413,7 +1413,7 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->iavf_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= 
rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) << IAVF_TXD_QW1_CMD_SHIFT); txq->tx_next_rs = diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 4b98e4066b..4ffd1f5567 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -401,11 +401,11 @@ reset_tx_queue(struct ice_tx_queue *txq) txe = txq->sw_ring; size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc; for (i = 0; i < size; i++) - ((volatile char *)txq->tx_ring)[i] = 0; + ((volatile char *)txq->ice_tx_ring)[i] = 0; prev = (uint16_t)(txq->nb_tx_desc - 1); for (i = 0; i < txq->nb_tx_desc; i++) { - txq->tx_ring[i].cmd_type_offset_bsz = + txq->ice_tx_ring[i].cmd_type_offset_bsz = rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE); txe[i].mbuf = NULL; txe[i].last_id = i; diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index d584086a36..5ec92f6d0c 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -776,7 +776,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) if (!txq_elem) return -ENOMEM; - vsi = txq->vsi; + vsi = txq->ice_vsi; hw = ICE_VSI_TO_HW(vsi); pf = ICE_VSI_TO_PF(vsi); @@ -966,7 +966,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) if (!txq_elem) return -ENOMEM; - vsi = txq->vsi; + vsi = txq->ice_vsi; hw = ICE_VSI_TO_HW(vsi); memset(&tx_ctx, 0, sizeof(tx_ctx)); @@ -1039,11 +1039,11 @@ ice_reset_tx_queue(struct ice_tx_queue *txq) txe = txq->sw_ring; size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc; for (i = 0; i < size; i++) - ((volatile char *)txq->tx_ring)[i] = 0; + ((volatile char *)txq->ice_tx_ring)[i] = 0; prev = (uint16_t)(txq->nb_tx_desc - 1); for (i = 0; i < txq->nb_tx_desc; i++) { - volatile struct ice_tx_desc *txd = &txq->tx_ring[i]; + volatile struct ice_tx_desc *txd = &txq->ice_tx_ring[i]; txd->cmd_type_offset_bsz = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE); @@ -1153,7 +1153,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t 
tx_queue_id) PMD_DRV_LOG(INFO, "TX queue %u not started", tx_queue_id); return 0; } - vsi = txq->vsi; + vsi = txq->ice_vsi; q_ids[0] = txq->reg_idx; q_teids[0] = txq->q_teid; @@ -1479,7 +1479,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate TX hardware ring descriptors. */ ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC; ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN); - tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, + tz = rte_eth_dma_zone_reserve(dev, "ice_tx_ring", queue_idx, ring_size, ICE_RING_BASE_ALIGN, socket_id); if (!tz) { @@ -1500,11 +1500,11 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, txq->reg_idx = vsi->base_queue + queue_idx; txq->port_id = dev->data->port_id; txq->offloads = offloads; - txq->vsi = vsi; + txq->ice_vsi = vsi; txq->tx_deferred_start = tx_conf->tx_deferred_start; txq->tx_ring_dma = tz->iova; - txq->tx_ring = tz->addr; + txq->ice_tx_ring = tz->addr; /* Allocate software ring */ txq->sw_ring = @@ -2372,7 +2372,7 @@ ice_tx_descriptor_status(void *tx_queue, uint16_t offset) desc -= txq->nb_tx_desc; } - status = &txq->tx_ring[desc].cmd_type_offset_bsz; + status = &txq->ice_tx_ring[desc].cmd_type_offset_bsz; mask = rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M); expect = rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE << ICE_TXD_QW1_DTYPE_S); @@ -2452,10 +2452,10 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf) txq->nb_tx_desc = ICE_FDIR_NUM_TX_DESC; txq->queue_id = ICE_FDIR_QUEUE_ID; txq->reg_idx = pf->fdir.fdir_vsi->base_queue; - txq->vsi = pf->fdir.fdir_vsi; + txq->ice_vsi = pf->fdir.fdir_vsi; txq->tx_ring_dma = tz->iova; - txq->tx_ring = (struct ice_tx_desc *)tz->addr; + txq->ice_tx_ring = (struct ice_tx_desc *)tz->addr; /* * don't need to allocate software ring and reset for the fdir * program queue just set the queue has been configured. 
@@ -2838,7 +2838,7 @@ static inline int ice_xmit_cleanup(struct ice_tx_queue *txq) { struct ci_tx_entry *sw_ring = txq->sw_ring; - volatile struct ice_tx_desc *txd = txq->tx_ring; + volatile struct ice_tx_desc *txd = txq->ice_tx_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; uint16_t desc_to_clean_to; @@ -2959,7 +2959,7 @@ uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct ice_tx_queue *txq; - volatile struct ice_tx_desc *tx_ring; + volatile struct ice_tx_desc *ice_tx_ring; volatile struct ice_tx_desc *txd; struct ci_tx_entry *sw_ring; struct ci_tx_entry *txe, *txn; @@ -2981,7 +2981,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) txq = tx_queue; sw_ring = txq->sw_ring; - tx_ring = txq->tx_ring; + ice_tx_ring = txq->ice_tx_ring; tx_id = txq->tx_tail; txe = &sw_ring[tx_id]; @@ -3064,7 +3064,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Setup TX context descriptor if required */ volatile struct ice_tx_ctx_desc *ctx_txd = (volatile struct ice_tx_ctx_desc *) - &tx_ring[tx_id]; + &ice_tx_ring[tx_id]; uint16_t cd_l2tag2 = 0; uint64_t cd_type_cmd_tso_mss = ICE_TX_DESC_DTYPE_CTX; @@ -3082,7 +3082,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) cd_type_cmd_tso_mss |= ((uint64_t)ICE_TX_CTX_DESC_TSYN << ICE_TXD_CTX_QW1_CMD_S) | - (((uint64_t)txq->vsi->adapter->ptp_tx_index << + (((uint64_t)txq->ice_vsi->adapter->ptp_tx_index << ICE_TXD_CTX_QW1_TSYN_S) & ICE_TXD_CTX_QW1_TSYN_M); ctx_txd->tunneling_params = @@ -3106,7 +3106,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) m_seg = tx_pkt; do { - txd = &tx_ring[tx_id]; + txd = &ice_tx_ring[tx_id]; txn = &sw_ring[txe->next_id]; if (txe->mbuf) @@ -3134,7 +3134,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) txe->last_id = tx_last; tx_id = txe->next_id; txe = txn; - txd = 
&tx_ring[tx_id]; + txd = &ice_tx_ring[tx_id]; txn = &sw_ring[txe->next_id]; } @@ -3187,7 +3187,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq) struct ci_tx_entry *txep; uint16_t i; - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) != rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE)) return 0; @@ -3360,7 +3360,7 @@ static inline void ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { - volatile struct ice_tx_desc *txdp = &txq->tx_ring[txq->tx_tail]; + volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail]; struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP - 1; @@ -3393,7 +3393,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - volatile struct ice_tx_desc *txr = txq->tx_ring; + volatile struct ice_tx_desc *txr = txq->ice_tx_ring; uint16_t n = 0; /** @@ -3722,7 +3722,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) bool pkt_error = false; uint16_t good_pkts = nb_pkts; const char *reason = NULL; - struct ice_adapter *adapter = txq->vsi->adapter; + struct ice_adapter *adapter = txq->ice_vsi->adapter; uint64_t ol_flags; for (idx = 0; idx < nb_pkts; idx++) { @@ -4701,11 +4701,11 @@ ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc) uint16_t i; fdirdp = (volatile struct ice_fltr_desc *) - (&txq->tx_ring[txq->tx_tail]); + (&txq->ice_tx_ring[txq->tx_tail]); fdirdp->qidx_compq_space_stat = fdir_desc->qidx_compq_space_stat; fdirdp->dtype_cmd_vsi_fdid = fdir_desc->dtype_cmd_vsi_fdid; - txdp = &txq->tx_ring[txq->tx_tail + 1]; + txdp = &txq->ice_tx_ring[txq->tx_tail + 1]; txdp->buf_addr = rte_cpu_to_le_64(pf->fdir.dma_addr); td_cmd = ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS | diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index 
8d1a1a8676..3257f449f5 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -148,7 +148,7 @@ struct ice_rx_queue { struct ice_tx_queue { uint16_t nb_tx_desc; /* number of TX descriptors */ rte_iova_t tx_ring_dma; /* TX ring DMA address */ - volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */ + volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ uint16_t tx_tail; /* current value of tail register */ volatile uint8_t *qtx_tail; /* register address of tail */ @@ -171,7 +171,7 @@ struct ice_tx_queue { uint32_t q_teid; /* TX schedule node id. */ uint16_t reg_idx; uint64_t offloads; - struct ice_vsi *vsi; /* the VSI this queue belongs to */ + struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */ uint16_t tx_next_dd; uint16_t tx_next_rs; uint64_t mbuf_errors; diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c index 336697e72d..dde07ac99e 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx2.c +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c @@ -874,7 +874,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ice_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -895,7 +895,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ice_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -905,7 +905,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) << ICE_TXD_QW1_CMD_S); 
txq->tx_next_rs = diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c index 6b6aa3f1fe..e4d0270176 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx512.c +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c @@ -869,7 +869,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq) struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ]; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) != rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE)) return 0; @@ -1071,7 +1071,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ice_tx_ring[tx_id]; txep = (void *)txq->sw_ring; txep += tx_id; @@ -1093,7 +1093,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = txq->tx_ring; + txdp = txq->ice_tx_ring; txep = (void *)txq->sw_ring; } @@ -1103,7 +1103,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) << ICE_TXD_QW1_CMD_S); txq->tx_next_rs = diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 32e4541267..7b865b53ad 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -22,7 +22,7 @@ ice_tx_free_bufs_vec(struct ice_tx_queue *txq) struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ]; /* check DD bits on threshold descriptor */ - if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & 
rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) != rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE)) return 0; @@ -121,7 +121,7 @@ _ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq) i = txq->tx_next_dd - txq->tx_rs_thresh + 1; #ifdef __AVX512VL__ - struct rte_eth_dev *dev = &rte_eth_devices[txq->vsi->adapter->pf.dev_data->port_id]; + struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id]; if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 || dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) { diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index debdd8f6a2..364207e8a8 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -717,7 +717,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ice_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -737,7 +737,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ice_tx_ring[tx_id]; txep = &txq->sw_ring[tx_id]; } @@ -747,7 +747,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= + txq->ice_tx_ring[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) << ICE_TXD_QW1_CMD_S); txq->tx_next_rs = diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c index 2241726ad8..a878db3150 100644 --- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c +++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c @@ -72,7 +72,7 @@ ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue, return 0; /* check DD bits 
on threshold descriptor */ - status = txq->tx_ring[txq->tx_next_dd].wb.status; + status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status; if (!(status & IXGBE_ADVTXD_STAT_DD)) return 0; diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index 0a80b944f0..f7ddbba1b6 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -106,7 +106,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ]; /* check DD bit on threshold descriptor */ - status = txq->tx_ring[txq->tx_next_dd].wb.status; + status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status; if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))) return 0; @@ -198,7 +198,7 @@ static inline void ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { - volatile union ixgbe_adv_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]); + volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail]; struct ci_tx_entry *txep = &txq->sw_ring[txq->tx_tail]; const int N_PER_LOOP = 4; const int N_PER_LOOP_MASK = N_PER_LOOP-1; @@ -232,7 +232,7 @@ tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; - volatile union ixgbe_adv_tx_desc *tx_r = txq->tx_ring; + volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring; uint16_t n = 0; /* @@ -564,7 +564,7 @@ static inline int ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) { struct ci_tx_entry *sw_ring = txq->sw_ring; - volatile union ixgbe_adv_tx_desc *txr = txq->tx_ring; + volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; uint16_t desc_to_clean_to; @@ -652,7 +652,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, tx_offload.data[1] = 0; txq = tx_queue; sw_ring = txq->sw_ring; - txr = txq->tx_ring; + txr = txq->ixgbe_tx_ring; tx_id = 
txq->tx_tail; txe = &sw_ring[tx_id]; txp = NULL; @@ -2495,13 +2495,13 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq) /* Zero out HW ring memory */ for (i = 0; i < txq->nb_tx_desc; i++) { - txq->tx_ring[i] = zeroed_desc; + txq->ixgbe_tx_ring[i] = zeroed_desc; } /* Initialize SW ring entries */ prev = (uint16_t) (txq->nb_tx_desc - 1); for (i = 0; i < txq->nb_tx_desc; i++) { - volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i]; + volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i]; txd->wb.status = rte_cpu_to_le_32(IXGBE_TXD_STAT_DD); txe[i].mbuf = NULL; @@ -2751,7 +2751,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, * handle the maximum ring size is allocated in order to allow for * resizing in later calls to the queue setup function. */ - tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, + tz = rte_eth_dma_zone_reserve(dev, "ixgbe_tx_ring", queue_idx, sizeof(union ixgbe_adv_tx_desc) * IXGBE_MAX_RING_DESC, IXGBE_ALIGN, socket_id); if (tz == NULL) { @@ -2791,7 +2791,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->qtx_tail = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx)); txq->tx_ring_dma = tz->iova; - txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr; + txq->ixgbe_tx_ring = (union ixgbe_adv_tx_desc *)tz->addr; /* Allocate software ring */ txq->sw_ring = rte_zmalloc_socket("txq->sw_ring", @@ -2802,7 +2802,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64, - txq->sw_ring, txq->tx_ring, txq->tx_ring_dma); + txq->sw_ring, txq->ixgbe_tx_ring, txq->tx_ring_dma); /* set up vector or scalar TX function as appropriate */ ixgbe_set_tx_function(dev, txq); @@ -3328,7 +3328,7 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) desc -= txq->nb_tx_desc; } - status = &txq->tx_ring[desc].wb.status; + status = &txq->ixgbe_tx_ring[desc].wb.status; if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)) return RTE_ETH_TX_DESC_DONE; diff 
--git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 00e2009b3e..f6bae37cf3 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -185,7 +185,7 @@ struct ixgbe_advctx_info { */ struct ixgbe_tx_queue { /** TX ring virtual address. */ - volatile union ixgbe_adv_tx_desc *tx_ring; + volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring; rte_iova_t tx_ring_dma; /**< TX ring DMA address. */ union { struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h index e9592c0d08..cc51bf6eed 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h @@ -22,7 +22,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) struct rte_mbuf *m, *free[RTE_IXGBE_TX_MAX_FREE_BUF_SZ]; /* check DD bit on threshold descriptor */ - status = txq->tx_ring[txq->tx_next_dd].wb.status; + status = txq->ixgbe_tx_ring[txq->tx_next_dd].wb.status; if (!(status & IXGBE_ADVTXD_STAT_DD)) return 0; @@ -154,11 +154,11 @@ _ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq) /* Zero out HW ring memory */ for (i = 0; i < txq->nb_tx_desc; i++) - txq->tx_ring[i] = zeroed_desc; + txq->ixgbe_tx_ring[i] = zeroed_desc; /* Initialize SW ring entries */ for (i = 0; i < txq->nb_tx_desc; i++) { - volatile union ixgbe_adv_tx_desc *txd = &txq->tx_ring[i]; + volatile union ixgbe_adv_tx_desc *txd = &txq->ixgbe_tx_ring[i]; txd->wb.status = IXGBE_TXD_STAT_DD; txe[i].mbuf = NULL; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index 871c1a7cd2..06be7ec82a 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -590,7 +590,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ixgbe_tx_ring[tx_id]; txep = &txq->sw_ring_v[tx_id]; 
txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -610,7 +610,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ixgbe_tx_ring[tx_id]; txep = &txq->sw_ring_v[tx_id]; } @@ -620,7 +620,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |= + txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |= rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS); txq->tx_next_rs = (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index 37f2079519..a21a57bd55 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -712,7 +712,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, return 0; tx_id = txq->tx_tail; - txdp = &txq->tx_ring[tx_id]; + txdp = &txq->ixgbe_tx_ring[tx_id]; txep = &txq->sw_ring_v[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -733,7 +733,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); /* avoid reach the end of ring */ - txdp = &(txq->tx_ring[tx_id]); + txdp = &txq->ixgbe_tx_ring[tx_id]; txep = &txq->sw_ring_v[tx_id]; } @@ -743,7 +743,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = (uint16_t)(tx_id + nb_commit); if (tx_id > txq->tx_next_rs) { - txq->tx_ring[txq->tx_next_rs].read.cmd_type_len |= + txq->ixgbe_tx_ring[txq->tx_next_rs].read.cmd_type_len |= rte_cpu_to_le_32(IXGBE_ADVTXD_DCMD_RS); txq->tx_next_rs = (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh); From patchwork Tue Dec 3 16:41:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 148990 X-Patchwork-Delegate: thomas@monjalon.net Received: from silpixa00401197coob.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.45]) by orviesa010.jf.intel.com with ESMTP; 03 Dec
2024 08:42:04 -0800 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson , Ian Stokes , David Christensen , Konstantin Ananyev , Wathsala Vithanage , Anatoly Burakov Subject: [PATCH v2 06/22] net/_common_intel: merge ice and i40e Tx queue struct Date: Tue, 3 Dec 2024 16:41:12 +0000 Message-ID: <20241203164132.2686558-7-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com> References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The queue structures of i40e and ice drivers are virtually identical, so merge them into a common struct. This should allow easier function merging in future using that common struct. Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 55 +++++++++++++++++ drivers/net/i40e/i40e_ethdev.c | 4 +- drivers/net/i40e/i40e_ethdev.h | 4 +- drivers/net/i40e/i40e_fdir.c | 4 +- .../net/i40e/i40e_recycle_mbufs_vec_common.c | 2 +- drivers/net/i40e/i40e_rxtx.c | 58 +++++++++--------- drivers/net/i40e/i40e_rxtx.h | 50 ++-------------- drivers/net/i40e/i40e_rxtx_vec_altivec.c | 4 +- drivers/net/i40e/i40e_rxtx_vec_avx2.c | 4 +- drivers/net/i40e/i40e_rxtx_vec_avx512.c | 6 +- drivers/net/i40e/i40e_rxtx_vec_common.h | 2 +- drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +- drivers/net/i40e/i40e_rxtx_vec_sse.c | 4 +- drivers/net/ice/ice_dcf.c | 4 +- drivers/net/ice/ice_dcf_ethdev.c | 10 ++-- drivers/net/ice/ice_diagnose.c | 2 +- drivers/net/ice/ice_ethdev.c | 2 +- drivers/net/ice/ice_ethdev.h | 4 +- drivers/net/ice/ice_rxtx.c | 60 +++++++++---------- drivers/net/ice/ice_rxtx.h | 41 +------------ drivers/net/ice/ice_rxtx_vec_avx2.c | 4 +- drivers/net/ice/ice_rxtx_vec_avx512.c | 8 +-- 
drivers/net/ice/ice_rxtx_vec_common.h | 8 +-- drivers/net/ice/ice_rxtx_vec_sse.c | 6 +- 24 files changed, 165 insertions(+), 185 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index 5397007411..c965f5ee6c 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -8,6 +8,9 @@ #include #include +/* forward declaration of the common intel (ci) queue structure */ +struct ci_tx_queue; + /** * Structure associated with each descriptor of the TX ring of a TX queue. */ @@ -24,6 +27,58 @@ struct ci_tx_entry_vec { struct rte_mbuf *mbuf; /* mbuf associated with TX desc, if any. */ }; +typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq); + +struct ci_tx_queue { + union { /* TX ring virtual address */ + volatile struct ice_tx_desc *ice_tx_ring; + volatile struct i40e_tx_desc *i40e_tx_ring; + }; + volatile uint8_t *qtx_tail; /* register address of tail */ + struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ + rte_iova_t tx_ring_dma; /* TX ring DMA address */ + uint16_t nb_tx_desc; /* number of TX descriptors */ + uint16_t tx_tail; /* current value of tail register */ + uint16_t nb_tx_used; /* number of TX desc used since RS bit set */ + /* index to last TX descriptor to have been cleaned */ + uint16_t last_desc_cleaned; + /* Total number of TX descriptors ready to be allocated. */ + uint16_t nb_tx_free; + /* Start freeing TX buffers if there are less free descriptors than + * this value. + */ + uint16_t tx_free_thresh; + /* Number of TX descriptors to use before RS bit is set. */ + uint16_t tx_rs_thresh; + uint8_t pthresh; /**< Prefetch threshold register. */ + uint8_t hthresh; /**< Host threshold register. */ + uint8_t wthresh; /**< Write-back threshold reg. */ + uint16_t port_id; /* Device port identifier. */ + uint16_t queue_id; /* TX queue index. 
*/ + uint16_t reg_idx; + uint64_t offloads; + uint16_t tx_next_dd; + uint16_t tx_next_rs; + uint64_t mbuf_errors; + bool tx_deferred_start; /* don't start this queue in dev start */ + bool q_set; /* indicate if tx queue has been configured */ + union { /* the VSI this queue belongs to */ + struct ice_vsi *ice_vsi; + struct i40e_vsi *i40e_vsi; + }; + const struct rte_memzone *mz; + + union { + struct { /* ICE driver specific values */ + ice_tx_release_mbufs_t tx_rel_mbufs; + uint32_t q_teid; /* TX schedule node id. */ + }; + struct { /* I40E driver specific values */ + uint8_t dcb_tc; + }; + }; +}; + static __rte_always_inline void ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 30dcdc68a8..bf5560ccc8 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -3685,7 +3685,7 @@ i40e_dev_update_mbuf_stats(struct rte_eth_dev *ethdev, struct i40e_mbuf_stats *mbuf_stats) { uint16_t idx; - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) { txq = ethdev->data->tx_queues[idx]; @@ -6585,7 +6585,7 @@ i40e_dev_tx_init(struct i40e_pf *pf) struct rte_eth_dev_data *data = pf->dev_data; uint16_t i; uint32_t ret = I40E_SUCCESS; - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; for (i = 0; i < data->nb_tx_queues; i++) { txq = data->tx_queues[i]; diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h index 98213948b4..d351193ed9 100644 --- a/drivers/net/i40e/i40e_ethdev.h +++ b/drivers/net/i40e/i40e_ethdev.h @@ -334,7 +334,7 @@ struct i40e_vsi_list { }; struct i40e_rx_queue; -struct i40e_tx_queue; +struct ci_tx_queue; /* Bandwidth limit information */ struct i40e_bw_info { @@ -738,7 +738,7 @@ TAILQ_HEAD(i40e_fdir_filter_list, i40e_fdir_filter); struct i40e_fdir_info { struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */ uint16_t 
match_counter_index; /* Statistic counter index used for fdir*/ - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; struct i40e_rx_queue *rxq; void *prg_pkt[I40E_FDIR_PRG_PKT_CNT]; /* memory for fdir program packet */ uint64_t dma_addr[I40E_FDIR_PRG_PKT_CNT]; /* physic address of packet memory*/ diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c index c600167634..349627a2ed 100644 --- a/drivers/net/i40e/i40e_fdir.c +++ b/drivers/net/i40e/i40e_fdir.c @@ -1372,7 +1372,7 @@ i40e_find_available_buffer(struct rte_eth_dev *dev) { struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct i40e_fdir_info *fdir_info = &pf->fdir; - struct i40e_tx_queue *txq = pf->fdir.txq; + struct ci_tx_queue *txq = pf->fdir.txq; /* no available buffer * search for more available buffers from the current @@ -1628,7 +1628,7 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf, const struct i40e_fdir_filter_conf *filter, bool add, bool wait_status) { - struct i40e_tx_queue *txq = pf->fdir.txq; + struct ci_tx_queue *txq = pf->fdir.txq; struct i40e_rx_queue *rxq = pf->fdir.rxq; const struct i40e_fdir_action *fdir_action = &filter->action; volatile struct i40e_tx_desc *txdp; diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c index 8679e5c1fd..5a65c80d90 100644 --- a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c +++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c @@ -55,7 +55,7 @@ uint16_t i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue, struct rte_eth_recycle_rxq_info *recycle_rxq_info) { - struct i40e_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; struct ci_tx_entry *txep; struct rte_mbuf **rxep; int i, n; diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 34ef931859..305bc53480 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -376,7 +376,7 @@ i40e_build_ctob(uint32_t td_cmd, } static inline int 
-i40e_xmit_cleanup(struct i40e_tx_queue *txq) +i40e_xmit_cleanup(struct ci_tx_queue *txq) { struct ci_tx_entry *sw_ring = txq->sw_ring; volatile struct i40e_tx_desc *txd = txq->i40e_tx_ring; @@ -1080,7 +1080,7 @@ i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt) uint16_t i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; struct ci_tx_entry *sw_ring; struct ci_tx_entry *txe, *txn; volatile struct i40e_tx_desc *txd; @@ -1329,7 +1329,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } static __rte_always_inline int -i40e_tx_free_bufs(struct i40e_tx_queue *txq) +i40e_tx_free_bufs(struct ci_tx_queue *txq) { struct ci_tx_entry *txep; uint16_t tx_rs_thresh = txq->tx_rs_thresh; @@ -1413,7 +1413,7 @@ tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts) /* Fill hardware descriptor ring with mbuf data */ static inline void -i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq, +i40e_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { @@ -1441,7 +1441,7 @@ i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq, } static inline uint16_t -tx_xmit_pkts(struct i40e_tx_queue *txq, +tx_xmit_pkts(struct ci_tx_queue *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -1504,14 +1504,14 @@ i40e_xmit_pkts_simple(void *tx_queue, uint16_t nb_tx = 0; if (likely(nb_pkts <= I40E_TX_MAX_BURST)) - return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue, + return tx_xmit_pkts((struct ci_tx_queue *)tx_queue, tx_pkts, nb_pkts); while (nb_pkts) { uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts, I40E_TX_MAX_BURST); - ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue, + ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue, &tx_pkts[nb_tx], num); nb_tx = (uint16_t)(nb_tx + ret); nb_pkts = (uint16_t)(nb_pkts - ret); @@ -1527,7 +1527,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct i40e_tx_queue *txq = 
(struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; @@ -1549,7 +1549,7 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, static uint16_t i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct i40e_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; uint16_t idx; uint64_t ol_flags; struct rte_mbuf *mb; @@ -1611,7 +1611,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts pkt_error = true; break; } - if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) { + if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) { PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length"); pkt_error = true; break; @@ -1873,7 +1873,7 @@ int i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { int err; - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); PMD_INIT_FUNC_TRACE(); @@ -1907,7 +1907,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; int err; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -2311,7 +2311,7 @@ i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset) int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) { - struct i40e_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; volatile uint64_t *status; uint64_t mask, expect; uint32_t desc; @@ -2341,7 +2341,7 @@ i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) static int i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev, - struct i40e_tx_queue *txq) + struct ci_tx_queue *txq) { struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); @@ -2394,7 +2394,7 @@ i40e_dev_tx_queue_setup(struct 
rte_eth_dev *dev, { struct i40e_vsi *vsi; struct i40e_pf *pf = NULL; - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; const struct rte_memzone *tz; uint32_t ring_size; uint16_t tx_rs_thresh, tx_free_thresh; @@ -2515,7 +2515,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate the TX queue data structure. */ txq = rte_zmalloc_socket("i40e tx queue", - sizeof(struct i40e_tx_queue), + sizeof(struct ci_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); if (!txq) { @@ -2600,7 +2600,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, void i40e_tx_queue_release(void *txq) { - struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq; + struct ci_tx_queue *q = (struct ci_tx_queue *)txq; if (!q) { PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL"); @@ -2705,7 +2705,7 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq) } void -i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq) +i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq) { struct rte_eth_dev *dev; uint16_t i; @@ -2765,7 +2765,7 @@ i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq) } static int -i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq, +i40e_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt) { struct ci_tx_entry *swr_ring = txq->sw_ring; @@ -2824,7 +2824,7 @@ i40e_tx_done_cleanup_full(struct i40e_tx_queue *txq, } static int -i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq, +i40e_tx_done_cleanup_simple(struct ci_tx_queue *txq, uint32_t free_cnt) { int i, n, cnt; @@ -2848,7 +2848,7 @@ i40e_tx_done_cleanup_simple(struct i40e_tx_queue *txq, } static int -i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused, +i40e_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused, uint32_t free_cnt __rte_unused) { return -ENOTSUP; @@ -2856,7 +2856,7 @@ i40e_tx_done_cleanup_vec(struct i40e_tx_queue *txq __rte_unused, int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt) { - struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq; + struct ci_tx_queue *q = (struct ci_tx_queue 
*)txq; struct rte_eth_dev *dev = &rte_eth_devices[q->port_id]; struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); @@ -2872,7 +2872,7 @@ i40e_tx_done_cleanup(void *txq, uint32_t free_cnt) } void -i40e_reset_tx_queue(struct i40e_tx_queue *txq) +i40e_reset_tx_queue(struct ci_tx_queue *txq) { struct ci_tx_entry *txe; uint16_t i, prev, size; @@ -2911,7 +2911,7 @@ i40e_reset_tx_queue(struct i40e_tx_queue *txq) /* Init the TX queue in hardware */ int -i40e_tx_queue_init(struct i40e_tx_queue *txq) +i40e_tx_queue_init(struct ci_tx_queue *txq) { enum i40e_status_code err = I40E_SUCCESS; struct i40e_vsi *vsi = txq->i40e_vsi; @@ -3167,7 +3167,7 @@ i40e_dev_free_queues(struct rte_eth_dev *dev) enum i40e_status_code i40e_fdir_setup_tx_resources(struct i40e_pf *pf) { - struct i40e_tx_queue *txq; + struct ci_tx_queue *txq; const struct rte_memzone *tz = NULL; struct rte_eth_dev *dev; uint32_t ring_size; @@ -3181,7 +3181,7 @@ i40e_fdir_setup_tx_resources(struct i40e_pf *pf) /* Allocate the TX queue data structure. 
 */
 	txq = rte_zmalloc_socket("i40e fdir tx queue",
-				 sizeof(struct i40e_tx_queue),
+				 sizeof(struct ci_tx_queue),
 				 RTE_CACHE_LINE_SIZE,
 				 SOCKET_ID_ANY);
 	if (!txq) {
@@ -3304,7 +3304,7 @@ void
 i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 	struct rte_eth_txq_info *qinfo)
 {
-	struct i40e_tx_queue *txq;
+	struct ci_tx_queue *txq;
 
 	txq = dev->data->tx_queues[queue_id];
@@ -3552,7 +3552,7 @@ i40e_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
 }
 
 void __rte_cold
-i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq)
 {
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
@@ -3592,7 +3592,7 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
 #endif
 		if (ad->tx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_tx_queues; i++) {
-				struct i40e_tx_queue *txq =
+				struct ci_tx_queue *txq =
 					dev->data->tx_queues[i];
 
 				if (txq && i40e_txq_vec_setup(txq)) {
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 8315ee2f59..043d1df912 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -124,44 +124,6 @@ struct i40e_rx_queue {
 	const struct rte_memzone *mz;
 };
 
-/*
- * Structure associated with each TX queue.
- */
-struct i40e_tx_queue {
-	uint16_t nb_tx_desc; /**< number of TX descriptors */
-	rte_iova_t tx_ring_dma; /**< TX ring DMA address */
-	volatile struct i40e_tx_desc *i40e_tx_ring; /**< TX ring virtual address */
-	struct ci_tx_entry *sw_ring; /**< virtual address of SW ring */
-	uint16_t tx_tail; /**< current value of tail register */
-	volatile uint8_t *qtx_tail; /**< register address of tail */
-	uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
-	/**< index to last TX descriptor to have been cleaned */
-	uint16_t last_desc_cleaned;
-	/**< Total number of TX descriptors ready to be allocated. */
-	uint16_t nb_tx_free;
-	/**< Start freeing TX buffers if there are less free descriptors than
-	     this value. */
-	uint16_t tx_free_thresh;
-	/** Number of TX descriptors to use before RS bit is set. */
-	uint16_t tx_rs_thresh;
-	uint8_t pthresh; /**< Prefetch threshold register. */
-	uint8_t hthresh; /**< Host threshold register. */
-	uint8_t wthresh; /**< Write-back threshold reg. */
-	uint16_t port_id; /**< Device port identifier. */
-	uint16_t queue_id; /**< TX queue index. */
-	uint16_t reg_idx;
-	struct i40e_vsi *i40e_vsi; /**< the VSI this queue belongs to */
-	uint16_t tx_next_dd;
-	uint16_t tx_next_rs;
-	bool q_set; /**< indicate if tx queue has been configured */
-	uint64_t mbuf_errors;
-
-	bool tx_deferred_start; /**< don't start this queue in dev start */
-	uint8_t dcb_tc; /**< Traffic class of tx queue */
-	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
-	const struct rte_memzone *mz;
-};
-
 /** Offload features */
 union i40e_tx_offload {
 	uint64_t data;
@@ -209,15 +171,15 @@ uint16_t i40e_simple_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts);
 uint16_t i40e_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			uint16_t nb_pkts);
-int i40e_tx_queue_init(struct i40e_tx_queue *txq);
+int i40e_tx_queue_init(struct ci_tx_queue *txq);
 int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
-void i40e_free_tx_resources(struct i40e_tx_queue *txq);
+void i40e_free_tx_resources(struct ci_tx_queue *txq);
 void i40e_free_rx_resources(struct i40e_rx_queue *rxq);
 void i40e_dev_clear_queues(struct rte_eth_dev *dev);
 void i40e_dev_free_queues(struct rte_eth_dev *dev);
 void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
-void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
+void i40e_reset_tx_queue(struct ci_tx_queue *txq);
+void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq);
 int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int
i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq); void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq); @@ -237,13 +199,13 @@ uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue, uint16_t nb_pkts); int i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev); int i40e_rxq_vec_setup(struct i40e_rx_queue *rxq); -int i40e_txq_vec_setup(struct i40e_tx_queue *txq); +int i40e_txq_vec_setup(struct ci_tx_queue *txq); void i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq); uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); void i40e_set_rx_function(struct rte_eth_dev *dev); void i40e_set_tx_function_flag(struct rte_eth_dev *dev, - struct i40e_tx_queue *txq); + struct ci_tx_queue *txq); void i40e_set_tx_function(struct rte_eth_dev *dev); void i40e_set_default_ptype_table(struct rte_eth_dev *dev); void i40e_set_default_pctype_table(struct rte_eth_dev *dev); diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c index bf0e9ebd71..500bba2cef 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c +++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c @@ -551,7 +551,7 @@ uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -625,7 +625,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq) } int __rte_cold -i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused * txq) +i40e_txq_vec_setup(struct ci_tx_queue __rte_unused * txq) { return 0; } diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c index 5042e348db..29bef64287 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c @@ -743,7 +743,7 @@ static inline uint16_t 
i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -808,7 +808,7 @@ i40e_xmit_pkts_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c index 04fbe3b2e3..a3f6d1667f 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c @@ -755,7 +755,7 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue, } static __rte_always_inline int -i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq) +i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq) { struct ci_tx_entry_vec *txep; uint32_t n; @@ -933,7 +933,7 @@ static inline uint16_t i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; @@ -999,7 +999,7 @@ i40e_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index e81f958361..57d6263ccf 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -17,7 +17,7 @@ #endif static __rte_always_inline 
int -i40e_tx_free_bufs(struct i40e_tx_queue *txq) +i40e_tx_free_bufs(struct ci_tx_queue *txq) { struct ci_tx_entry *txep; uint32_t n; diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c index 05191e4884..c97f337e43 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_neon.c +++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c @@ -679,7 +679,7 @@ uint16_t i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue, struct rte_mbuf **__rte_restrict tx_pkts, uint16_t nb_pkts) { - struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -753,7 +753,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq) } int __rte_cold -i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq) +i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused) { return 0; } diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index d81b553842..2c467e2089 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -698,7 +698,7 @@ uint16_t i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct i40e_tx_queue *txq = (struct i40e_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -771,7 +771,7 @@ i40e_rxq_vec_setup(struct i40e_rx_queue *rxq) } int __rte_cold -i40e_txq_vec_setup(struct i40e_tx_queue __rte_unused *txq) +i40e_txq_vec_setup(struct ci_tx_queue *txq __rte_unused) { return 0; } diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 204d4eadbb..65c18921f4 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -1177,8 +1177,8 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw) { struct ice_rx_queue **rxq = (struct ice_rx_queue 
**)hw->eth_dev->data->rx_queues; - struct ice_tx_queue **txq = - (struct ice_tx_queue **)hw->eth_dev->data->tx_queues; + struct ci_tx_queue **txq = + (struct ci_tx_queue **)hw->eth_dev->data->tx_queues; struct virtchnl_vsi_queue_config_info *vc_config; struct virtchnl_queue_pair_info *vc_qp; struct dcf_virtchnl_cmd args; diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 4ffd1f5567..a0c065d78c 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -387,7 +387,7 @@ reset_rx_queue(struct ice_rx_queue *rxq) } static inline void -reset_tx_queue(struct ice_tx_queue *txq) +reset_tx_queue(struct ci_tx_queue *txq) { struct ci_tx_entry *txe; uint32_t i, size; @@ -454,7 +454,7 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { struct ice_dcf_adapter *ad = dev->data->dev_private; struct iavf_hw *hw = &ad->real_hw.avf; - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int err = 0; if (tx_queue_id >= dev->data->nb_tx_queues) @@ -486,7 +486,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { struct ice_dcf_adapter *ad = dev->data->dev_private; struct ice_dcf_hw *hw = &ad->real_hw; - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int err; if (tx_queue_id >= dev->data->nb_tx_queues) @@ -511,7 +511,7 @@ static int ice_dcf_start_queues(struct rte_eth_dev *dev) { struct ice_rx_queue *rxq; - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int nb_rxq = 0; int nb_txq, i; @@ -638,7 +638,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev) struct ice_dcf_adapter *ad = dev->data->dev_private; struct ice_dcf_hw *hw = &ad->real_hw; struct ice_rx_queue *rxq; - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int ret, i; /* Stop All queues */ diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c index 5bec9d00ad..a50068441a 100644 --- a/drivers/net/ice/ice_diagnose.c +++ b/drivers/net/ice/ice_diagnose.c @@ -605,7 +605,7 @@ void 
print_node(const struct rte_eth_dev_data *ethdata, get_elem_type(data->data.elem_type)); if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) { for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) { - struct ice_tx_queue *q = ethdata->tx_queues[i]; + struct ci_tx_queue *q = ethdata->tx_queues[i]; if (q->q_teid == data->node_teid) { fprintf(stream, "\t\t\t\tTXQ%u\n", i); break; diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 93a6308a86..80eee03204 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -6448,7 +6448,7 @@ ice_update_mbuf_stats(struct rte_eth_dev *ethdev, struct ice_mbuf_stats *mbuf_stats) { uint16_t idx; - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) { txq = ethdev->data->tx_queues[idx]; diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index a5b27fabd2..ba54655499 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -258,7 +258,7 @@ struct ice_vsi_list { }; struct ice_rx_queue; -struct ice_tx_queue; +struct ci_tx_queue; /** * Structure that defines a VSI, associated with a adapter. 
@@ -408,7 +408,7 @@ struct ice_fdir_counter_pool_container { */ struct ice_fdir_info { struct ice_vsi *fdir_vsi; /* pointer to fdir VSI structure */ - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; struct ice_rx_queue *rxq; void *prg_pkt; /* memory for fdir program packet */ uint64_t dma_addr; /* physic address of packet memory*/ diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 5ec92f6d0c..bcc7c7a016 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -743,7 +743,7 @@ ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int err; struct ice_vsi *vsi; struct ice_hw *hw; @@ -944,7 +944,7 @@ int ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int err; struct ice_vsi *vsi; struct ice_hw *hw; @@ -1008,7 +1008,7 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) /* Free all mbufs for descriptors in tx queue */ static void -_ice_tx_queue_release_mbufs(struct ice_tx_queue *txq) +_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq) { uint16_t i; @@ -1026,7 +1026,7 @@ _ice_tx_queue_release_mbufs(struct ice_tx_queue *txq) } static void -ice_reset_tx_queue(struct ice_tx_queue *txq) +ice_reset_tx_queue(struct ci_tx_queue *txq) { struct ci_tx_entry *txe; uint16_t i, prev, size; @@ -1066,7 +1066,7 @@ ice_reset_tx_queue(struct ice_tx_queue *txq) int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct ice_vsi *vsi = pf->main_vsi; @@ -1134,7 +1134,7 @@ ice_fdir_rx_queue_stop(struct rte_eth_dev *dev, uint16_t 
rx_queue_id) int ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct ice_vsi *vsi = pf->main_vsi; @@ -1354,7 +1354,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct ice_vsi *vsi = pf->main_vsi; - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; const struct rte_memzone *tz; uint32_t ring_size; uint16_t tx_rs_thresh, tx_free_thresh; @@ -1467,7 +1467,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate the TX queue data structure. */ txq = rte_zmalloc_socket(NULL, - sizeof(struct ice_tx_queue), + sizeof(struct ci_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); if (!txq) { @@ -1542,7 +1542,7 @@ ice_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) void ice_tx_queue_release(void *txq) { - struct ice_tx_queue *q = (struct ice_tx_queue *)txq; + struct ci_tx_queue *q = (struct ci_tx_queue *)txq; if (!q) { PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL"); @@ -1577,7 +1577,7 @@ void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo) { - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; txq = dev->data->tx_queues[queue_id]; @@ -2354,7 +2354,7 @@ ice_rx_descriptor_status(void *rx_queue, uint16_t offset) int ice_tx_descriptor_status(void *tx_queue, uint16_t offset) { - struct ice_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; volatile uint64_t *status; uint64_t mask, expect; uint32_t desc; @@ -2412,7 +2412,7 @@ ice_free_queues(struct rte_eth_dev *dev) int ice_fdir_setup_tx_resources(struct ice_pf *pf) { - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; const struct rte_memzone *tz = NULL; uint32_t ring_size; struct rte_eth_dev *dev; @@ -2426,7 +2426,7 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf) /* 
Allocate the TX queue data structure. */ txq = rte_zmalloc_socket("ice fdir tx queue", - sizeof(struct ice_tx_queue), + sizeof(struct ci_tx_queue), RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); if (!txq) { @@ -2835,7 +2835,7 @@ ice_txd_enable_checksum(uint64_t ol_flags, } static inline int -ice_xmit_cleanup(struct ice_tx_queue *txq) +ice_xmit_cleanup(struct ci_tx_queue *txq) { struct ci_tx_entry *sw_ring = txq->sw_ring; volatile struct ice_tx_desc *txd = txq->ice_tx_ring; @@ -2958,7 +2958,7 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt) uint16_t ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; volatile struct ice_tx_desc *ice_tx_ring; volatile struct ice_tx_desc *txd; struct ci_tx_entry *sw_ring; @@ -3182,7 +3182,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } static __rte_always_inline int -ice_tx_free_bufs(struct ice_tx_queue *txq) +ice_tx_free_bufs(struct ci_tx_queue *txq) { struct ci_tx_entry *txep; uint16_t i; @@ -3218,7 +3218,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq) } static int -ice_tx_done_cleanup_full(struct ice_tx_queue *txq, +ice_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt) { struct ci_tx_entry *swr_ring = txq->sw_ring; @@ -3278,7 +3278,7 @@ ice_tx_done_cleanup_full(struct ice_tx_queue *txq, #ifdef RTE_ARCH_X86 static int -ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused, +ice_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused, uint32_t free_cnt __rte_unused) { return -ENOTSUP; @@ -3286,7 +3286,7 @@ ice_tx_done_cleanup_vec(struct ice_tx_queue *txq __rte_unused, #endif static int -ice_tx_done_cleanup_simple(struct ice_tx_queue *txq, +ice_tx_done_cleanup_simple(struct ci_tx_queue *txq, uint32_t free_cnt) { int i, n, cnt; @@ -3312,7 +3312,7 @@ ice_tx_done_cleanup_simple(struct ice_tx_queue *txq, int ice_tx_done_cleanup(void *txq, uint32_t free_cnt) { - struct ice_tx_queue *q = (struct ice_tx_queue *)txq; + 
struct ci_tx_queue *q = (struct ci_tx_queue *)txq; struct rte_eth_dev *dev = &rte_eth_devices[q->port_id]; struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); @@ -3357,7 +3357,7 @@ tx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkts) } static inline void -ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts, +ice_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { volatile struct ice_tx_desc *txdp = &txq->ice_tx_ring[txq->tx_tail]; @@ -3389,7 +3389,7 @@ ice_tx_fill_hw_ring(struct ice_tx_queue *txq, struct rte_mbuf **pkts, } static inline uint16_t -tx_xmit_pkts(struct ice_tx_queue *txq, +tx_xmit_pkts(struct ci_tx_queue *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -3452,14 +3452,14 @@ ice_xmit_pkts_simple(void *tx_queue, uint16_t nb_tx = 0; if (likely(nb_pkts <= ICE_TX_MAX_BURST)) - return tx_xmit_pkts((struct ice_tx_queue *)tx_queue, + return tx_xmit_pkts((struct ci_tx_queue *)tx_queue, tx_pkts, nb_pkts); while (nb_pkts) { uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts, ICE_TX_MAX_BURST); - ret = tx_xmit_pkts((struct ice_tx_queue *)tx_queue, + ret = tx_xmit_pkts((struct ci_tx_queue *)tx_queue, &tx_pkts[nb_tx], num); nb_tx = (uint16_t)(nb_tx + ret); nb_pkts = (uint16_t)(nb_pkts - ret); @@ -3667,7 +3667,7 @@ ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id, } void __rte_cold -ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq) +ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ci_tx_queue *txq) { struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); @@ -3716,7 +3716,7 @@ ice_check_empty_mbuf(struct rte_mbuf *tx_pkt) static uint16_t ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct ice_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; uint16_t idx; struct rte_mbuf *mb; bool pkt_error = false; @@ -3778,7 +3778,7 @@ ice_xmit_pkts_check(void 
*tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) pkt_error = true; break; } - if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) { + if (mb->nb_segs > ((struct ci_tx_queue *)tx_queue)->nb_tx_desc) { PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length"); pkt_error = true; break; @@ -3839,7 +3839,7 @@ ice_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, (m->tso_segsz < ICE_MIN_TSO_MSS || m->tso_segsz > ICE_MAX_TSO_MSS || m->nb_segs > - ((struct ice_tx_queue *)tx_queue)->nb_tx_desc || + ((struct ci_tx_queue *)tx_queue)->nb_tx_desc || m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) { /** * MSS outside the range are considered malicious @@ -3881,7 +3881,7 @@ ice_set_tx_function(struct rte_eth_dev *dev) ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); int mbuf_check = ad->devargs.mbuf_check; #ifdef RTE_ARCH_X86 - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int i; int tx_check_ret = -1; @@ -4693,7 +4693,7 @@ ice_check_fdir_programming_status(struct ice_rx_queue *rxq) int ice_fdir_programming(struct ice_pf *pf, struct ice_fltr_desc *fdir_desc) { - struct ice_tx_queue *txq = pf->fdir.txq; + struct ci_tx_queue *txq = pf->fdir.txq; struct ice_rx_queue *rxq = pf->fdir.rxq; volatile struct ice_fltr_desc *fdirdp; volatile struct ice_tx_desc *txdp; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index 3257f449f5..1cae8a9b50 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -79,7 +79,6 @@ extern int ice_timestamp_dynfield_offset; #define ICE_TX_MTU_SEG_MAX 8 typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq); -typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq); typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq, struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp); @@ -145,42 +144,6 @@ struct ice_rx_queue { bool ts_enable; /* if rxq timestamp is enabled */ }; -struct ice_tx_queue { - uint16_t nb_tx_desc; /* number of TX descriptors */ - rte_iova_t 
tx_ring_dma; /* TX ring DMA address */ - volatile struct ice_tx_desc *ice_tx_ring; /* TX ring virtual address */ - struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ - uint16_t tx_tail; /* current value of tail register */ - volatile uint8_t *qtx_tail; /* register address of tail */ - uint16_t nb_tx_used; /* number of TX desc used since RS bit set */ - /* index to last TX descriptor to have been cleaned */ - uint16_t last_desc_cleaned; - /* Total number of TX descriptors ready to be allocated. */ - uint16_t nb_tx_free; - /* Start freeing TX buffers if there are less free descriptors than - * this value. - */ - uint16_t tx_free_thresh; - /* Number of TX descriptors to use before RS bit is set. */ - uint16_t tx_rs_thresh; - uint8_t pthresh; /**< Prefetch threshold register. */ - uint8_t hthresh; /**< Host threshold register. */ - uint8_t wthresh; /**< Write-back threshold reg. */ - uint16_t port_id; /* Device port identifier. */ - uint16_t queue_id; /* TX queue index. */ - uint32_t q_teid; /* TX schedule node id. 
*/ - uint16_t reg_idx; - uint64_t offloads; - struct ice_vsi *ice_vsi; /* the VSI this queue belongs to */ - uint16_t tx_next_dd; - uint16_t tx_next_rs; - uint64_t mbuf_errors; - bool tx_deferred_start; /* don't start this queue in dev start */ - bool q_set; /* indicate if tx queue has been configured */ - ice_tx_release_mbufs_t tx_rel_mbufs; - const struct rte_memzone *mz; -}; - /* Offload features */ union ice_tx_offload { uint64_t data; @@ -268,7 +231,7 @@ void ice_set_rx_function(struct rte_eth_dev *dev); uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); void ice_set_tx_function_flag(struct rte_eth_dev *dev, - struct ice_tx_queue *txq); + struct ci_tx_queue *txq); void ice_set_tx_function(struct rte_eth_dev *dev); uint32_t ice_rx_queue_count(void *rx_queue); void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, @@ -290,7 +253,7 @@ void ice_select_rxd_to_pkt_fields_handler(struct ice_rx_queue *rxq, int ice_rx_vec_dev_check(struct rte_eth_dev *dev); int ice_tx_vec_dev_check(struct rte_eth_dev *dev); int ice_rxq_vec_setup(struct ice_rx_queue *rxq); -int ice_txq_vec_setup(struct ice_tx_queue *txq); +int ice_txq_vec_setup(struct ci_tx_queue *txq); uint16_t ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint16_t ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c index dde07ac99e..12ffa0fa9a 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx2.c +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c @@ -856,7 +856,7 @@ static __rte_always_inline uint16_t ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { - struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -924,7 +924,7 @@ 
ice_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { uint16_t nb_tx = 0; - struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c index e4d0270176..eabd8b04a0 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx512.c +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c @@ -860,7 +860,7 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue, } static __rte_always_inline int -ice_tx_free_bufs_avx512(struct ice_tx_queue *txq) +ice_tx_free_bufs_avx512(struct ci_tx_queue *txq) { struct ci_tx_entry_vec *txep; uint32_t n; @@ -1053,7 +1053,7 @@ static __rte_always_inline uint16_t ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool do_offload) { - struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; @@ -1122,7 +1122,7 @@ ice_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; @@ -1144,7 +1144,7 @@ ice_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 7b865b53ad..b39289ceb5 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -13,7 +13,7 @@ #endif static __rte_always_inline 
int -ice_tx_free_bufs_vec(struct ice_tx_queue *txq) +ice_tx_free_bufs_vec(struct ci_tx_queue *txq) { struct ci_tx_entry *txep; uint32_t n; @@ -105,7 +105,7 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq) } static inline void -_ice_tx_queue_release_mbufs_vec(struct ice_tx_queue *txq) +_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) { uint16_t i; @@ -231,7 +231,7 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq) } static inline int -ice_tx_vec_queue_default(struct ice_tx_queue *txq) +ice_tx_vec_queue_default(struct ci_tx_queue *txq) { if (!txq) return -1; @@ -273,7 +273,7 @@ static inline int ice_tx_vec_dev_check_default(struct rte_eth_dev *dev) { int i; - struct ice_tx_queue *txq; + struct ci_tx_queue *txq; int ret = 0; int result = 0; diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index 364207e8a8..f11528385a 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -697,7 +697,7 @@ static uint16_t ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct ice_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -766,7 +766,7 @@ ice_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct ice_tx_queue *txq = (struct ice_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; @@ -793,7 +793,7 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq) } int __rte_cold -ice_txq_vec_setup(struct ice_tx_queue __rte_unused *txq) +ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused) { if (!txq) return -1; From patchwork Tue Dec 3 16:41:13 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 148991 From: Bruce Richardson To: dev@dpdk.org
Cc: Bruce Richardson , Vladimir Medvedkin , Ian Stokes , Konstantin Ananyev Subject: [PATCH v2 07/22] net/iavf: use common Tx queue structure Date: Tue, 3 Dec 2024 16:41:13 +0000 Message-ID: <20241203164132.2686558-8-bruce.richardson@intel.com> In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com> References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com> Merge in the few additional fields used by the iavf driver and convert it to use the common Tx queue structure as well. Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 15 +++++++- drivers/net/iavf/iavf.h | 2 +- drivers/net/iavf/iavf_ethdev.c | 4 +- drivers/net/iavf/iavf_rxtx.c | 42 ++++++++++----------- drivers/net/iavf/iavf_rxtx.h | 49 +++---------------------- drivers/net/iavf/iavf_rxtx_vec_avx2.c | 4 +- drivers/net/iavf/iavf_rxtx_vec_avx512.c | 14 +++---- drivers/net/iavf/iavf_rxtx_vec_common.h | 8 ++-- drivers/net/iavf/iavf_rxtx_vec_sse.c | 8 ++-- drivers/net/iavf/iavf_vchnl.c | 6 +-- 10 files changed, 62 insertions(+), 90 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index c965f5ee6c..c4a1a0c816 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -31,8 +31,9 @@ typedef void (*ice_tx_release_mbufs_t)(struct ci_tx_queue *txq); struct ci_tx_queue { union { /* TX ring virtual address */ - volatile struct ice_tx_desc *ice_tx_ring; volatile struct i40e_tx_desc *i40e_tx_ring; + volatile struct iavf_tx_desc *iavf_tx_ring; + volatile struct ice_tx_desc *ice_tx_ring; }; volatile uint8_t *qtx_tail; /* register address of tail */ struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ @@ -63,8
+64,9 @@ struct ci_tx_queue { bool tx_deferred_start; /* don't start this queue in dev start */ bool q_set; /* indicate if tx queue has been configured */ union { /* the VSI this queue belongs to */ - struct ice_vsi *ice_vsi; struct i40e_vsi *i40e_vsi; + struct iavf_vsi *iavf_vsi; + struct ice_vsi *ice_vsi; }; const struct rte_memzone *mz; @@ -76,6 +78,15 @@ struct ci_tx_queue { struct { /* I40E driver specific values */ uint8_t dcb_tc; }; + struct { /* iavf driver specific values */ + uint16_t ipsec_crypto_pkt_md_offset; + uint8_t rel_mbufs_type; +#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) +#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) + uint8_t vlan_flag; + uint8_t tc; + bool use_ctx; /* with ctx info, each pkt needs two descriptors */ + }; }; }; diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h index ad526c644c..956c60ef45 100644 --- a/drivers/net/iavf/iavf.h +++ b/drivers/net/iavf/iavf.h @@ -98,7 +98,7 @@ struct iavf_adapter; struct iavf_rx_queue; -struct iavf_tx_queue; +struct ci_tx_queue; struct iavf_ipsec_crypto_stats { diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 7f80cd6258..328c224c93 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -954,7 +954,7 @@ static int iavf_start_queues(struct rte_eth_dev *dev) { struct iavf_rx_queue *rxq; - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; int i; uint16_t nb_txq, nb_rxq; @@ -1885,7 +1885,7 @@ iavf_dev_update_mbuf_stats(struct rte_eth_dev *ethdev, struct iavf_mbuf_stats *mbuf_stats) { uint16_t idx; - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) { txq = ethdev->data->tx_queues[idx]; diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index 6eda91e76b..7e381b2a17 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -213,7 +213,7 @@ check_rx_vec_allow(struct iavf_rx_queue *rxq) } static inline bool 
-check_tx_vec_allow(struct iavf_tx_queue *txq) +check_tx_vec_allow(struct ci_tx_queue *txq) { if (!(txq->offloads & IAVF_TX_NO_VECTOR_FLAGS) && txq->tx_rs_thresh >= IAVF_VPMD_TX_MAX_BURST && @@ -282,7 +282,7 @@ reset_rx_queue(struct iavf_rx_queue *rxq) } static inline void -reset_tx_queue(struct iavf_tx_queue *txq) +reset_tx_queue(struct ci_tx_queue *txq) { struct ci_tx_entry *txe; uint32_t i, size; @@ -388,7 +388,7 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq) } static inline void -release_txq_mbufs(struct iavf_tx_queue *txq) +release_txq_mbufs(struct ci_tx_queue *txq) { uint16_t i; @@ -778,7 +778,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); struct iavf_vsi *vsi = &vf->vsi; - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; const struct rte_memzone *mz; uint32_t ring_size; uint16_t tx_rs_thresh, tx_free_thresh; @@ -814,7 +814,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate the TX queue data structure. 
*/ txq = rte_zmalloc_socket("iavf txq", - sizeof(struct iavf_tx_queue), + sizeof(struct ci_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); if (!txq) { @@ -979,7 +979,7 @@ iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; int err = 0; PMD_DRV_FUNC_TRACE(); @@ -1048,7 +1048,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; int err; PMD_DRV_FUNC_TRACE(); @@ -1092,7 +1092,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid) void iavf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - struct iavf_tx_queue *q = dev->data->tx_queues[qid]; + struct ci_tx_queue *q = dev->data->tx_queues[qid]; if (!q) return; @@ -1107,7 +1107,7 @@ static void iavf_reset_queues(struct rte_eth_dev *dev) { struct iavf_rx_queue *rxq; - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; int i; for (i = 0; i < dev->data->nb_tx_queues; i++) { @@ -2377,7 +2377,7 @@ iavf_recv_pkts_bulk_alloc(void *rx_queue, } static inline int -iavf_xmit_cleanup(struct iavf_tx_queue *txq) +iavf_xmit_cleanup(struct ci_tx_queue *txq) { struct ci_tx_entry *sw_ring = txq->sw_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; @@ -2781,7 +2781,7 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc *desc, static struct iavf_ipsec_crypto_pkt_metadata * -iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq, +iavf_ipsec_crypto_get_pkt_metadata(const struct ci_tx_queue *txq, struct rte_mbuf *m) { if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) @@ -2795,7 +2795,7 @@ 
iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq, uint16_t iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct iavf_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; volatile struct iavf_tx_desc *txr = txq->iavf_tx_ring; struct ci_tx_entry *txe_ring = txq->sw_ring; struct ci_tx_entry *txe, *txn; @@ -3027,7 +3027,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) * correct queue. */ static int -iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m) +iavf_check_vlan_up2tc(struct ci_tx_queue *txq, struct rte_mbuf *m) { struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id]; struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); @@ -3646,7 +3646,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, int i, ret; uint64_t ol_flags; struct rte_mbuf *m; - struct iavf_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; struct rte_eth_dev *dev = &rte_eth_devices[txq->port_id]; struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); struct iavf_adapter *adapter = IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); @@ -3800,7 +3800,7 @@ static uint16_t iavf_xmit_pkts_no_poll(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct iavf_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; enum iavf_tx_burst_type tx_burst_type; if (!txq->iavf_vsi || txq->iavf_vsi->adapter->no_poll) @@ -3823,7 +3823,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t good_pkts = nb_pkts; const char *reason = NULL; bool pkt_error = false; - struct iavf_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; struct iavf_adapter *adapter = txq->iavf_vsi->adapter; enum iavf_tx_burst_type tx_burst_type = txq->iavf_vsi->adapter->tx_burst_type; @@ -4144,7 +4144,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev) int mbuf_check = adapter->devargs.mbuf_check; int 
no_poll_on_link_down = adapter->devargs.no_poll_on_link_down; #ifdef RTE_ARCH_X86 - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; int i; int check_ret; bool use_sse = false; @@ -4265,7 +4265,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev) } static int -iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq, +iavf_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt) { struct ci_tx_entry *swr_ring = txq->sw_ring; @@ -4324,7 +4324,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq, int iavf_dev_tx_done_cleanup(void *txq, uint32_t free_cnt) { - struct iavf_tx_queue *q = (struct iavf_tx_queue *)txq; + struct ci_tx_queue *q = (struct ci_tx_queue *)txq; return iavf_tx_done_cleanup_full(q, free_cnt); } @@ -4350,7 +4350,7 @@ void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo) { - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; txq = dev->data->tx_queues[queue_id]; @@ -4422,7 +4422,7 @@ iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset) int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset) { - struct iavf_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; volatile uint64_t *status; uint64_t mask, expect; uint32_t desc; diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index cc1eaaf54c..c18e01560c 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -211,7 +211,7 @@ struct iavf_rxq_ops { }; struct iavf_txq_ops { - void (*release_mbufs)(struct iavf_tx_queue *txq); + void (*release_mbufs)(struct ci_tx_queue *txq); }; @@ -273,43 +273,6 @@ struct iavf_rx_queue { uint64_t hw_time_update; }; -/* Structure associated with each TX queue. 
*/ -struct iavf_tx_queue { - const struct rte_memzone *mz; /* memzone for Tx ring */ - volatile struct iavf_tx_desc *iavf_tx_ring; /* Tx ring virtual address */ - rte_iova_t tx_ring_dma; /* Tx ring DMA address */ - struct ci_tx_entry *sw_ring; /* address array of SW ring */ - uint16_t nb_tx_desc; /* ring length */ - uint16_t tx_tail; /* current value of tail */ - volatile uint8_t *qtx_tail; /* register address of tail */ - /* number of used desc since RS bit set */ - uint16_t nb_tx_used; - uint16_t nb_tx_free; - uint16_t last_desc_cleaned; /* last desc have been cleaned*/ - uint16_t tx_free_thresh; - uint16_t tx_rs_thresh; - uint8_t rel_mbufs_type; - struct iavf_vsi *iavf_vsi; /**< the VSI this queue belongs to */ - - uint16_t port_id; - uint16_t queue_id; - uint64_t offloads; - uint16_t tx_next_dd; /* next to set RS, for VPMD */ - uint16_t tx_next_rs; /* next to check DD, for VPMD */ - uint16_t ipsec_crypto_pkt_md_offset; - - uint64_t mbuf_errors; - - bool q_set; /* if rx queue has been configured */ - bool tx_deferred_start; /* don't start this queue in dev start */ - const struct iavf_txq_ops *ops; -#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0) -#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1) - uint8_t vlan_flag; - uint8_t tc; - uint8_t use_ctx:1; /* if use the ctx desc, a packet needs two descriptors */ -}; - /* Offload features */ union iavf_tx_offload { uint64_t data; @@ -724,7 +687,7 @@ int iavf_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc); int iavf_rx_vec_dev_check(struct rte_eth_dev *dev); int iavf_tx_vec_dev_check(struct rte_eth_dev *dev); int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq); -int iavf_txq_vec_setup(struct iavf_tx_queue *txq); +int iavf_txq_vec_setup(struct ci_tx_queue *txq); uint16_t iavf_recv_pkts_vec_avx512(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint16_t iavf_recv_pkts_vec_avx512_offload(void *rx_queue, @@ -757,14 +720,14 @@ uint16_t iavf_xmit_pkts_vec_avx512_ctx_offload(void *tx_queue, 
struct rte_mbuf * uint16_t nb_pkts); uint16_t iavf_xmit_pkts_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); -int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq); +int iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq); uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type); void iavf_set_default_ptype_table(struct rte_eth_dev *dev); -void iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq); +void iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq); void iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq); -void iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq); +void iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq); static inline void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq, @@ -791,7 +754,7 @@ void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq, * to print the qwords */ static inline -void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq, +void iavf_dump_tx_descriptor(const struct ci_tx_queue *txq, const volatile void *desc, uint16_t tx_id) { const char *name; diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c index f33ceceee1..fdb98b417a 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c @@ -1734,7 +1734,7 @@ static __rte_always_inline uint16_t iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -1801,7 +1801,7 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { uint16_t nb_tx = 0; - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; diff --git 
a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c index 97420a75fd..9cf7171524 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c @@ -1845,7 +1845,7 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue, } static __rte_always_inline int -iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq) +iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq) { struct ci_tx_entry_vec *txep; uint32_t n; @@ -2311,7 +2311,7 @@ static __rte_always_inline uint16_t iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; @@ -2379,7 +2379,7 @@ static __rte_always_inline uint16_t iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, nb_mbuf, tx_id; @@ -2447,7 +2447,7 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { uint16_t nb_tx = 0; - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; @@ -2473,7 +2473,7 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, } void __rte_cold -iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq) +iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq) { unsigned int i; const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); @@ -2494,7 +2494,7 @@ iavf_tx_queue_release_mbufs_avx512(struct iavf_tx_queue *txq) } int __rte_cold 
-iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq) +iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq) { txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC; return 0; @@ -2512,7 +2512,7 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool offload) { uint16_t nb_tx = 0; - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index 6305c8cdd6..f1bb12c4f4 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -17,7 +17,7 @@ #endif static __rte_always_inline int -iavf_tx_free_bufs(struct iavf_tx_queue *txq) +iavf_tx_free_bufs(struct ci_tx_queue *txq) { struct ci_tx_entry *txep; uint32_t n; @@ -104,7 +104,7 @@ _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq) } static inline void -_iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq) +_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) { unsigned i; const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); @@ -164,7 +164,7 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq) } static inline int -iavf_tx_vec_queue_default(struct iavf_tx_queue *txq) +iavf_tx_vec_queue_default(struct ci_tx_queue *txq) { if (!txq) return -1; @@ -227,7 +227,7 @@ static inline int iavf_tx_vec_dev_check_default(struct rte_eth_dev *dev) { int i; - struct iavf_tx_queue *txq; + struct ci_tx_queue *txq; int ret; int result = 0; diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index 64c3bf0eaa..5c0b2fff46 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -1366,7 +1366,7 @@ uint16_t iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct 
ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct iavf_tx_desc *txdp; struct ci_tx_entry *txep; uint16_t n, nb_commit, tx_id; @@ -1435,7 +1435,7 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; @@ -1459,13 +1459,13 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq) } void __rte_cold -iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq) +iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq) { _iavf_tx_queue_release_mbufs_vec(txq); } int __rte_cold -iavf_txq_vec_setup(struct iavf_tx_queue *txq) +iavf_txq_vec_setup(struct ci_tx_queue *txq) { txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC; return 0; diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c index 0646a2f978..c74466735d 100644 --- a/drivers/net/iavf/iavf_vchnl.c +++ b/drivers/net/iavf/iavf_vchnl.c @@ -1218,10 +1218,8 @@ int iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs, uint16_t index) { - struct iavf_rx_queue **rxq = - (struct iavf_rx_queue **)adapter->dev_data->rx_queues; - struct iavf_tx_queue **txq = - (struct iavf_tx_queue **)adapter->dev_data->tx_queues; + struct iavf_rx_queue **rxq = (struct iavf_rx_queue **)adapter->dev_data->rx_queues; + struct ci_tx_queue **txq = (struct ci_tx_queue **)adapter->dev_data->tx_queues; struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); struct virtchnl_vsi_queue_config_info *vc_config; struct virtchnl_queue_pair_info *vc_qp; From patchwork Tue Dec 3 16:41:14 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 148992 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson , Anatoly Burakov , Vladimir Medvedkin Subject: [PATCH v2 08/22] net/ixgbe: convert Tx queue context cache field to ptr Date: Tue, 3 Dec
2024 16:41:14 +0000 Message-ID: <20241203164132.2686558-9-bruce.richardson@intel.com> In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com> References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com> Rather than having a two-element array of context cache values inside the Tx queue structure, convert it to a pointer to a cache at the end of the structure. This makes future merging of the structure easier, as we don't need the "ixgbe_advctx_info" struct defined when defining a combined queue structure. Signed-off-by: Bruce Richardson --- drivers/net/ixgbe/ixgbe_rxtx.c | 7 ++++--- drivers/net/ixgbe/ixgbe_rxtx.h | 4 ++-- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index f7ddbba1b6..2ca26cd132 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -2522,8 +2522,7 @@ ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq) txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1); txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1); txq->ctx_curr = 0; - memset((void *)&txq->ctx_cache, 0, - IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info)); + memset(txq->ctx_cache, 0, IXGBE_CTX_NUM * sizeof(struct ixgbe_advctx_info)); } static const struct ixgbe_txq_ops def_txq_ops = { @@ -2741,10 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, } /* First allocate the tx queue data structure */ - txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue), + txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) + + sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM, RTE_CACHE_LINE_SIZE, socket_id); if (txq == NULL)
return -ENOMEM; + txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue)); /* * Allocate TX ring hardware descriptors. A memzone large enough to diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index f6bae37cf3..847cacf7b5 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -215,8 +215,8 @@ struct ixgbe_tx_queue { uint8_t wthresh; /**< Write-back threshold reg. */ uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */ uint32_t ctx_curr; /**< Hardware context states. */ - /** Hardware context0 history. */ - struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM]; + /** Hardware context history. */ + struct ixgbe_advctx_info *ctx_cache; const struct ixgbe_txq_ops *ops; /**< txq ops */ bool tx_deferred_start; /**< not in global dev start. */ #ifdef RTE_LIB_SECURITY From patchwork Tue Dec 3 16:41:15 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 148993 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson , Anatoly Burakov , Vladimir Medvedkin , Wathsala Vithanage , Konstantin Ananyev Subject: [PATCH v2 09/22] net/ixgbe: use common Tx queue structure Date: Tue, 3 Dec 2024 16:41:15 +0000 Message-ID: <20241203164132.2686558-10-bruce.richardson@intel.com> In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com> References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com> Merge in additional fields used by the ixgbe driver and then convert it over to use the common Tx queue structure.
Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 14 +++- drivers/net/ixgbe/ixgbe_ethdev.c | 4 +- .../ixgbe/ixgbe_recycle_mbufs_vec_common.c | 2 +- drivers/net/ixgbe/ixgbe_rxtx.c | 64 +++++++++---------- drivers/net/ixgbe/ixgbe_rxtx.h | 56 ++-------------- drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 26 ++++---- drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 14 ++-- drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 14 ++-- 8 files changed, 80 insertions(+), 114 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index c4a1a0c816..51ae3b051d 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -34,9 +34,13 @@ struct ci_tx_queue { volatile struct i40e_tx_desc *i40e_tx_ring; volatile struct iavf_tx_desc *iavf_tx_ring; volatile struct ice_tx_desc *ice_tx_ring; + volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring; }; volatile uint8_t *qtx_tail; /* register address of tail */ - struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ + union { + struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ + struct ci_tx_entry_vec *sw_ring_vec; + }; rte_iova_t tx_ring_dma; /* TX ring DMA address */ uint16_t nb_tx_desc; /* number of TX descriptors */ uint16_t tx_tail; /* current value of tail register */ @@ -87,6 +91,14 @@ struct ci_tx_queue { uint8_t tc; bool use_ctx; /* with ctx info, each pkt needs two descriptors */ }; + struct { /* ixgbe specific values */ + const struct ixgbe_txq_ops *ops; + struct ixgbe_advctx_info *ctx_cache; + uint32_t ctx_curr; +#ifdef RTE_LIB_SECURITY + uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */ +#endif + }; }; }; diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index 8bee97d191..5f18fbaad5 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -1118,7 +1118,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) * RX and TX function. 
*/ if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; /* TX queue function in primary, set by last queue initialized * Tx queue may not initialized by primary process */ @@ -1623,7 +1623,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev) * RX function */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; /* TX queue function in primary, set by last queue initialized * Tx queue may not initialized by primary process */ diff --git a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c index a878db3150..3fd05ed5eb 100644 --- a/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c +++ b/drivers/net/ixgbe/ixgbe_recycle_mbufs_vec_common.c @@ -51,7 +51,7 @@ uint16_t ixgbe_recycle_tx_mbufs_reuse_vec(void *tx_queue, struct rte_eth_recycle_rxq_info *recycle_rxq_info) { - struct ixgbe_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; struct ci_tx_entry *txep; struct rte_mbuf **rxep; int i, n; diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index 2ca26cd132..344ef85685 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -98,7 +98,7 @@ * Return the total number of buffers freed. */ static __rte_always_inline int -ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) +ixgbe_tx_free_bufs(struct ci_tx_queue *txq) { struct ci_tx_entry *txep; uint32_t status; @@ -195,7 +195,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts) * Copy mbuf pointers to the S/W ring. 
*/ static inline void -ixgbe_tx_fill_hw_ring(struct ixgbe_tx_queue *txq, struct rte_mbuf **pkts, +ixgbe_tx_fill_hw_ring(struct ci_tx_queue *txq, struct rte_mbuf **pkts, uint16_t nb_pkts) { volatile union ixgbe_adv_tx_desc *txdp = &txq->ixgbe_tx_ring[txq->tx_tail]; @@ -231,7 +231,7 @@ static inline uint16_t tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile union ixgbe_adv_tx_desc *tx_r = txq->ixgbe_tx_ring; uint16_t n = 0; @@ -344,7 +344,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; while (nb_pkts) { uint16_t ret, num; @@ -362,7 +362,7 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, } static inline void -ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq, +ixgbe_set_xmit_ctx(struct ci_tx_queue *txq, volatile struct ixgbe_adv_tx_context_desc *ctx_txd, uint64_t ol_flags, union ixgbe_tx_offload tx_offload, __rte_unused uint64_t *mdata) @@ -493,7 +493,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq, * or create a new context descriptor. 
*/ static inline uint32_t -what_advctx_update(struct ixgbe_tx_queue *txq, uint64_t flags, +what_advctx_update(struct ci_tx_queue *txq, uint64_t flags, union ixgbe_tx_offload tx_offload) { /* If match with the current used context */ @@ -561,7 +561,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags) /* Reset transmit descriptors after they have been used */ static inline int -ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq) +ixgbe_xmit_cleanup(struct ci_tx_queue *txq) { struct ci_tx_entry *sw_ring = txq->sw_ring; volatile union ixgbe_adv_tx_desc *txr = txq->ixgbe_tx_ring; @@ -623,7 +623,7 @@ uint16_t ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; struct ci_tx_entry *sw_ring; struct ci_tx_entry *txe, *txn; volatile union ixgbe_adv_tx_desc *txr; @@ -963,7 +963,7 @@ ixgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) int i, ret; uint64_t ol_flags; struct rte_mbuf *m; - struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; for (i = 0; i < nb_pkts; i++) { m = tx_pkts[i]; @@ -2335,7 +2335,7 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, **********************************************************************/ static void __rte_cold -ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq) +ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq) { unsigned i; @@ -2350,7 +2350,7 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq) } static int -ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt) +ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt) { struct ci_tx_entry *swr_ring = txq->sw_ring; uint16_t i, tx_last, tx_id; @@ -2408,7 +2408,7 @@ ixgbe_tx_done_cleanup_full(struct ixgbe_tx_queue *txq, uint32_t free_cnt) } static int -ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, +ixgbe_tx_done_cleanup_simple(struct 
ci_tx_queue *txq, uint32_t free_cnt) { int i, n, cnt; @@ -2432,7 +2432,7 @@ ixgbe_tx_done_cleanup_simple(struct ixgbe_tx_queue *txq, } static int -ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused, +ixgbe_tx_done_cleanup_vec(struct ci_tx_queue *txq __rte_unused, uint32_t free_cnt __rte_unused) { return -ENOTSUP; @@ -2441,7 +2441,7 @@ ixgbe_tx_done_cleanup_vec(struct ixgbe_tx_queue *txq __rte_unused, int ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt) { - struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; if (txq->offloads == 0 && #ifdef RTE_LIB_SECURITY !(txq->using_ipsec) && @@ -2450,7 +2450,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt) if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ && rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128 && (rte_eal_process_type() != RTE_PROC_PRIMARY || - txq->sw_ring_v != NULL)) { + txq->sw_ring_vec != NULL)) { return ixgbe_tx_done_cleanup_vec(txq, free_cnt); } else { return ixgbe_tx_done_cleanup_simple(txq, free_cnt); @@ -2461,7 +2461,7 @@ ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt) } static void __rte_cold -ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq) +ixgbe_tx_free_swring(struct ci_tx_queue *txq) { if (txq != NULL && txq->sw_ring != NULL) @@ -2469,7 +2469,7 @@ ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq) } static void __rte_cold -ixgbe_tx_queue_release(struct ixgbe_tx_queue *txq) +ixgbe_tx_queue_release(struct ci_tx_queue *txq) { if (txq != NULL && txq->ops != NULL) { txq->ops->release_mbufs(txq); @@ -2487,7 +2487,7 @@ ixgbe_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) /* (Re)set dynamic ixgbe_tx_queue fields to defaults */ static void __rte_cold -ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq) +ixgbe_reset_tx_queue(struct ci_tx_queue *txq) { static const union ixgbe_adv_tx_desc zeroed_desc = {{0}}; struct ci_tx_entry *txe = txq->sw_ring; @@ -2536,7 +2536,7 
@@ static const struct ixgbe_txq_ops def_txq_ops = { * in dev_init by secondary process when attaching to an existing ethdev. */ void __rte_cold -ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq) +ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq) { /* Use a simple Tx queue (no offloads, no multi segs) if possible */ if ((txq->offloads == 0) && @@ -2618,7 +2618,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, const struct rte_eth_txconf *tx_conf) { const struct rte_memzone *tz; - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; struct ixgbe_hw *hw; uint16_t tx_rs_thresh, tx_free_thresh; uint64_t offloads; @@ -2740,12 +2740,12 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, } /* First allocate the tx queue data structure */ - txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ixgbe_tx_queue) + + txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct ci_tx_queue) + sizeof(struct ixgbe_advctx_info) * IXGBE_CTX_NUM, RTE_CACHE_LINE_SIZE, socket_id); if (txq == NULL) return -ENOMEM; - txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ixgbe_tx_queue)); + txq->ctx_cache = RTE_PTR_ADD(txq, sizeof(struct ci_tx_queue)); /* * Allocate TX ring hardware descriptors. 
A memzone large enough to @@ -3312,7 +3312,7 @@ ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset) int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) { - struct ixgbe_tx_queue *txq = tx_queue; + struct ci_tx_queue *txq = tx_queue; volatile uint32_t *status; uint32_t desc; @@ -3377,7 +3377,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); for (i = 0; i < dev->data->nb_tx_queues; i++) { - struct ixgbe_tx_queue *txq = dev->data->tx_queues[i]; + struct ci_tx_queue *txq = dev->data->tx_queues[i]; if (txq != NULL) { txq->ops->release_mbufs(txq); @@ -5284,7 +5284,7 @@ void __rte_cold ixgbe_dev_tx_init(struct rte_eth_dev *dev) { struct ixgbe_hw *hw; - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; uint64_t bus_addr; uint32_t hlreg0; uint32_t txctrl; @@ -5402,7 +5402,7 @@ int __rte_cold ixgbe_dev_rxtx_start(struct rte_eth_dev *dev) { struct ixgbe_hw *hw; - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; struct ixgbe_rx_queue *rxq; uint32_t txdctl; uint32_t dmatxctl; @@ -5572,7 +5572,7 @@ int __rte_cold ixgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { struct ixgbe_hw *hw; - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; uint32_t txdctl; int poll_ms; @@ -5611,7 +5611,7 @@ int __rte_cold ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { struct ixgbe_hw *hw; - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; uint32_t txdctl; uint32_t txtdh, txtdt; int poll_ms; @@ -5685,7 +5685,7 @@ void ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo) { - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; txq = dev->data->tx_queues[queue_id]; @@ -5877,7 +5877,7 @@ void __rte_cold ixgbevf_dev_tx_init(struct rte_eth_dev *dev) { struct ixgbe_hw *hw; - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; uint64_t bus_addr; uint32_t txctrl; uint16_t i; @@ -5918,7 +5918,7 @@ void __rte_cold 
ixgbevf_dev_rxtx_start(struct rte_eth_dev *dev) { struct ixgbe_hw *hw; - struct ixgbe_tx_queue *txq; + struct ci_tx_queue *txq; struct ixgbe_rx_queue *rxq; uint32_t txdctl; uint32_t rxdctl; @@ -6127,7 +6127,7 @@ ixgbe_xmit_fixed_burst_vec(void __rte_unused *tx_queue, } int -ixgbe_txq_vec_setup(struct ixgbe_tx_queue __rte_unused *txq) +ixgbe_txq_vec_setup(struct ci_tx_queue *txq __rte_unused) { return -1; } diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 847cacf7b5..4333e5bf2f 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -180,56 +180,10 @@ struct ixgbe_advctx_info { union ixgbe_tx_offload tx_offload_mask; }; -/** - * Structure associated with each TX queue. - */ -struct ixgbe_tx_queue { - /** TX ring virtual address. */ - volatile union ixgbe_adv_tx_desc *ixgbe_tx_ring; - rte_iova_t tx_ring_dma; /**< TX ring DMA address. */ - union { - struct ci_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */ - struct ci_tx_entry_vec *sw_ring_v; /**< address of SW ring for vector PMD */ - }; - volatile uint8_t *qtx_tail; /**< Address of TDT register. */ - uint16_t nb_tx_desc; /**< number of TX descriptors. */ - uint16_t tx_tail; /**< current value of TDT reg. */ - /**< Start freeing TX buffers if there are less free descriptors than - this value. */ - uint16_t tx_free_thresh; - /** Number of TX descriptors to use before RS bit is set. */ - uint16_t tx_rs_thresh; - /** Number of TX descriptors used since RS bit was set. */ - uint16_t nb_tx_used; - /** Index to last TX descriptor to have been cleaned. */ - uint16_t last_desc_cleaned; - /** Total number of TX descriptors ready to be allocated. */ - uint16_t nb_tx_free; - uint16_t tx_next_dd; /**< next desc to scan for DD bit */ - uint16_t tx_next_rs; /**< next desc to set RS bit */ - uint16_t queue_id; /**< TX queue index. */ - uint16_t reg_idx; /**< TX queue register index. */ - uint16_t port_id; /**< Device port identifier. 
*/ - uint8_t pthresh; /**< Prefetch threshold register. */ - uint8_t hthresh; /**< Host threshold register. */ - uint8_t wthresh; /**< Write-back threshold reg. */ - uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */ - uint32_t ctx_curr; /**< Hardware context states. */ - /** Hardware context history. */ - struct ixgbe_advctx_info *ctx_cache; - const struct ixgbe_txq_ops *ops; /**< txq ops */ - bool tx_deferred_start; /**< not in global dev start. */ -#ifdef RTE_LIB_SECURITY - uint8_t using_ipsec; - /**< indicates that IPsec TX feature is in use */ -#endif - const struct rte_memzone *mz; -}; - struct ixgbe_txq_ops { - void (*release_mbufs)(struct ixgbe_tx_queue *txq); - void (*free_swring)(struct ixgbe_tx_queue *txq); - void (*reset)(struct ixgbe_tx_queue *txq); + void (*release_mbufs)(struct ci_tx_queue *txq); + void (*free_swring)(struct ci_tx_queue *txq); + void (*reset)(struct ci_tx_queue *txq); }; /* @@ -250,7 +204,7 @@ struct ixgbe_txq_ops { * the queue parameters. Used in tx_queue_setup by primary process and then * in dev_init by secondary process when attaching to an existing ethdev. */ -void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq); +void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ci_tx_queue *txq); /** * Sets the rx_pkt_burst callback in the ixgbe rte_eth_dev instance. 
@@ -287,7 +241,7 @@ void ixgbe_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs); uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); -int ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq); +int ixgbe_txq_vec_setup(struct ci_tx_queue *txq); uint64_t ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev); uint64_t ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev); diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h index cc51bf6eed..81fd8bb64d 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h @@ -12,7 +12,7 @@ #include "ixgbe_rxtx.h" static __rte_always_inline int -ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) +ixgbe_tx_free_bufs(struct ci_tx_queue *txq) { struct ci_tx_entry_vec *txep; uint32_t status; @@ -32,7 +32,7 @@ ixgbe_tx_free_bufs(struct ixgbe_tx_queue *txq) * first buffer to free from S/W ring is at index * tx_next_dd - (tx_rs_thresh-1) */ - txep = &txq->sw_ring_v[txq->tx_next_dd - (n - 1)]; + txep = &txq->sw_ring_vec[txq->tx_next_dd - (n - 1)]; m = rte_pktmbuf_prefree_seg(txep[0].mbuf); if (likely(m != NULL)) { free[0] = m; @@ -79,7 +79,7 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep, } static inline void -_ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq) +_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) { unsigned int i; struct ci_tx_entry_vec *txe; @@ -92,14 +92,14 @@ _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq) for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1); i != txq->tx_tail; i = (i + 1) % txq->nb_tx_desc) { - txe = &txq->sw_ring_v[i]; + txe = &txq->sw_ring_vec[i]; rte_pktmbuf_free_seg(txe->mbuf); } txq->nb_tx_free = max_desc; /* reset tx_entry */ for (i = 0; i < txq->nb_tx_desc; i++) { - txe = &txq->sw_ring_v[i]; + txe = &txq->sw_ring_vec[i]; txe->mbuf = NULL; } } @@ -134,22 +134,22 @@ _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue 
*rxq) } static inline void -_ixgbe_tx_free_swring_vec(struct ixgbe_tx_queue *txq) +_ixgbe_tx_free_swring_vec(struct ci_tx_queue *txq) { if (txq == NULL) return; if (txq->sw_ring != NULL) { - rte_free(txq->sw_ring_v - 1); - txq->sw_ring_v = NULL; + rte_free(txq->sw_ring_vec - 1); + txq->sw_ring_vec = NULL; } } static inline void -_ixgbe_reset_tx_queue_vec(struct ixgbe_tx_queue *txq) +_ixgbe_reset_tx_queue_vec(struct ci_tx_queue *txq) { static const union ixgbe_adv_tx_desc zeroed_desc = { { 0 } }; - struct ci_tx_entry_vec *txe = txq->sw_ring_v; + struct ci_tx_entry_vec *txe = txq->sw_ring_vec; uint16_t i; /* Zero out HW ring memory */ @@ -199,14 +199,14 @@ ixgbe_rxq_vec_setup_default(struct ixgbe_rx_queue *rxq) } static inline int -ixgbe_txq_vec_setup_default(struct ixgbe_tx_queue *txq, +ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq, const struct ixgbe_txq_ops *txq_ops) { - if (txq->sw_ring_v == NULL) + if (txq->sw_ring_vec == NULL) return -1; /* leave the first one for overflow */ - txq->sw_ring_v = txq->sw_ring_v + 1; + txq->sw_ring_vec = txq->sw_ring_vec + 1; txq->ops = txq_ops; return 0; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index 06be7ec82a..cb749a3760 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -571,7 +571,7 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile union ixgbe_adv_tx_desc *txdp; struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; @@ -591,7 +591,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = txq->tx_tail; txdp = &txq->ixgbe_tx_ring[tx_id]; - txep = &txq->sw_ring_v[tx_id]; + txep = &txq->sw_ring_vec[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); @@ -611,7 +611,7 @@ 
ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, /* avoid reach the end of ring */ txdp = &txq->ixgbe_tx_ring[tx_id]; - txep = &txq->sw_ring_v[tx_id]; + txep = &txq->sw_ring_vec[tx_id]; } tx_backlog_entry(txep, tx_pkts, nb_commit); @@ -634,7 +634,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, } static void __rte_cold -ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq) +ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) { _ixgbe_tx_queue_release_mbufs_vec(txq); } @@ -646,13 +646,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq) } static void __rte_cold -ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq) +ixgbe_tx_free_swring(struct ci_tx_queue *txq) { _ixgbe_tx_free_swring_vec(txq); } static void __rte_cold -ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq) +ixgbe_reset_tx_queue(struct ci_tx_queue *txq) { _ixgbe_reset_tx_queue_vec(txq); } @@ -670,7 +670,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq) } int __rte_cold -ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq) +ixgbe_txq_vec_setup(struct ci_tx_queue *txq) { return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops); } diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index a21a57bd55..e46550f76a 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -693,7 +693,7 @@ uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct ixgbe_tx_queue *txq = (struct ixgbe_tx_queue *)tx_queue; + struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile union ixgbe_adv_tx_desc *txdp; struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; @@ -713,7 +713,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = txq->tx_tail; txdp = &txq->ixgbe_tx_ring[tx_id]; - txep = &txq->sw_ring_v[tx_id]; + txep = &txq->sw_ring_vec[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - 
nb_pkts); @@ -734,7 +734,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, /* avoid reach the end of ring */ txdp = &txq->ixgbe_tx_ring[tx_id]; - txep = &txq->sw_ring_v[tx_id]; + txep = &txq->sw_ring_vec[tx_id]; } tx_backlog_entry(txep, tx_pkts, nb_commit); @@ -757,7 +757,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, } static void __rte_cold -ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq) +ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) { _ixgbe_tx_queue_release_mbufs_vec(txq); } @@ -769,13 +769,13 @@ ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq) } static void __rte_cold -ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq) +ixgbe_tx_free_swring(struct ci_tx_queue *txq) { _ixgbe_tx_free_swring_vec(txq); } static void __rte_cold -ixgbe_reset_tx_queue(struct ixgbe_tx_queue *txq) +ixgbe_reset_tx_queue(struct ci_tx_queue *txq) { _ixgbe_reset_tx_queue_vec(txq); } @@ -793,7 +793,7 @@ ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq) } int __rte_cold -ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq) +ixgbe_txq_vec_setup(struct ci_tx_queue *txq) { return ixgbe_txq_vec_setup_default(txq, &vec_txq_ops); } From patchwork Tue Dec 3 16:41:16 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 148994 X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson , Ian Stokes , Anatoly Burakov Subject: [PATCH v2 10/22] net/_common_intel: pack Tx queue structure Date: Tue, 3 Dec 2024 16:41:16 +0000 Message-ID: <20241203164132.2686558-11-bruce.richardson@intel.com>
Move some fields about to better pack the Tx queue structure and make sure all data used by the vector codepaths is on the first cacheline of the structure. Checking with "pahole" on 64-bit build, only one 6-byte hole is left in the structure - on second cacheline - after this patch. As part of the reordering, move the p/h/wthresh values to the ixgbe-specific part of the union. That is the only driver which actually uses those values. i40e and ice drivers just record the values for later return, so we can drop them from the Tx queue structure for those drivers and just report the defaults in all cases. Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 12 +++++------- drivers/net/i40e/i40e_rxtx.c | 9 +++------ drivers/net/ice/ice_rxtx.c | 9 +++------ 3 files changed, 11 insertions(+), 19 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index 51ae3b051d..c372d2838b 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -41,7 +41,6 @@ struct ci_tx_queue { struct ci_tx_entry *sw_ring; /* virtual address of SW ring */ struct ci_tx_entry_vec *sw_ring_vec; }; - rte_iova_t tx_ring_dma; /* TX ring DMA address */ uint16_t nb_tx_desc; /* number of TX descriptors */ uint16_t tx_tail; /* current value of tail register */ uint16_t nb_tx_used; /* number of TX desc used since RS bit set */ @@ -55,16 +54,14 @@ struct ci_tx_queue { uint16_t tx_free_thresh; /* Number of TX descriptors to use before RS bit is set. */ uint16_t tx_rs_thresh; - uint8_t pthresh; /**< Prefetch threshold register. */ - uint8_t hthresh; /**< Host threshold register. */ - uint8_t wthresh; /**< Write-back threshold reg. */ uint16_t port_id; /* Device port identifier. */ uint16_t queue_id; /* TX queue index. 
*/ uint16_t reg_idx; - uint64_t offloads; uint16_t tx_next_dd; uint16_t tx_next_rs; + uint64_t offloads; uint64_t mbuf_errors; + rte_iova_t tx_ring_dma; /* TX ring DMA address */ bool tx_deferred_start; /* don't start this queue in dev start */ bool q_set; /* indicate if tx queue has been configured */ union { /* the VSI this queue belongs to */ @@ -95,9 +92,10 @@ struct ci_tx_queue { const struct ixgbe_txq_ops *ops; struct ixgbe_advctx_info *ctx_cache; uint32_t ctx_curr; -#ifdef RTE_LIB_SECURITY + uint8_t pthresh; /**< Prefetch threshold register. */ + uint8_t hthresh; /**< Host threshold register. */ + uint8_t wthresh; /**< Write-back threshold reg. */ uint8_t using_ipsec; /**< indicates that IPsec TX feature is in use */ -#endif }; }; }; diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 305bc53480..539b170266 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -2539,9 +2539,6 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->nb_tx_desc = nb_desc; txq->tx_rs_thresh = tx_rs_thresh; txq->tx_free_thresh = tx_free_thresh; - txq->pthresh = tx_conf->tx_thresh.pthresh; - txq->hthresh = tx_conf->tx_thresh.hthresh; - txq->wthresh = tx_conf->tx_thresh.wthresh; txq->queue_id = queue_idx; txq->reg_idx = reg_idx; txq->port_id = dev->data->port_id; @@ -3310,9 +3307,9 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->nb_desc = txq->nb_tx_desc; - qinfo->conf.tx_thresh.pthresh = txq->pthresh; - qinfo->conf.tx_thresh.hthresh = txq->hthresh; - qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_thresh.pthresh = I40E_DEFAULT_TX_PTHRESH; + qinfo->conf.tx_thresh.hthresh = I40E_DEFAULT_TX_HTHRESH; + qinfo->conf.tx_thresh.wthresh = I40E_DEFAULT_TX_WTHRESH; qinfo->conf.tx_free_thresh = txq->tx_free_thresh; qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh; diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index bcc7c7a016..e2e147ba3e 100644 --- a/drivers/net/ice/ice_rxtx.c +++ 
b/drivers/net/ice/ice_rxtx.c @@ -1492,9 +1492,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, txq->nb_tx_desc = nb_desc; txq->tx_rs_thresh = tx_rs_thresh; txq->tx_free_thresh = tx_free_thresh; - txq->pthresh = tx_conf->tx_thresh.pthresh; - txq->hthresh = tx_conf->tx_thresh.hthresh; - txq->wthresh = tx_conf->tx_thresh.wthresh; txq->queue_id = queue_idx; txq->reg_idx = vsi->base_queue + queue_idx; @@ -1583,9 +1580,9 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->nb_desc = txq->nb_tx_desc; - qinfo->conf.tx_thresh.pthresh = txq->pthresh; - qinfo->conf.tx_thresh.hthresh = txq->hthresh; - qinfo->conf.tx_thresh.wthresh = txq->wthresh; + qinfo->conf.tx_thresh.pthresh = ICE_DEFAULT_TX_PTHRESH; + qinfo->conf.tx_thresh.hthresh = ICE_DEFAULT_TX_HTHRESH; + qinfo->conf.tx_thresh.wthresh = ICE_DEFAULT_TX_WTHRESH; qinfo->conf.tx_free_thresh = txq->tx_free_thresh; qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;

From patchwork Tue Dec 3 16:41:17 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148995
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
Subject: [PATCH v2 11/22] net/_common_intel: add post-Tx buffer free function
Date: Tue, 3 Dec 2024 16:41:17 +0000
Message-ID: <20241203164132.2686558-12-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

The actions taken for post-Tx buffer free for the SSE and AVX drivers for i40e, iavf and ice drivers are all common, so centralize
those in common/intel_eth driver. Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 71 ++++++++++++++++++++++++ drivers/net/i40e/i40e_rxtx_vec_common.h | 72 ++++--------------------- drivers/net/iavf/iavf_rxtx_vec_common.h | 61 ++++----------------- drivers/net/ice/ice_rxtx_vec_common.h | 61 ++++----------------- 4 files changed, 98 insertions(+), 167 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index c372d2838b..a930309c05 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -7,6 +7,7 @@ #include #include +#include /* forward declaration of the common intel (ci) queue structure */ struct ci_tx_queue; @@ -107,4 +108,74 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_ txep[i].mbuf = tx_pkts[i]; } +#define IETH_VPMD_TX_MAX_FREE_BUF 64 + +typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx); + +static __rte_always_inline int +ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done) +{ + struct ci_tx_entry *txep; + uint32_t n; + uint32_t i; + int nb_free = 0; + struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF]; + + /* check DD bits on threshold descriptor */ + if (!desc_done(txq, txq->tx_next_dd)) + return 0; + + n = txq->tx_rs_thresh; + + /* first buffer to free from S/W ring is at index + * tx_next_dd - (tx_rs_thresh-1) + */ + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)]; + + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + for (i = 0; i < n; i++) { + free[i] = txep[i].mbuf; + /* no need to reset txep[i].mbuf in vector path */ + } + rte_mempool_put_bulk(free[0]->pool, (void **)free, n); + goto done; + } + + m = rte_pktmbuf_prefree_seg(txep[0].mbuf); + if (likely(m != NULL)) { + free[0] = m; + nb_free = 1; + for (i = 1; i < n; i++) { + m = rte_pktmbuf_prefree_seg(txep[i].mbuf); + if (likely(m != NULL)) { + if (likely(m->pool == free[0]->pool)) { + free[nb_free++] = m; + } else { + 
rte_mempool_put_bulk(free[0]->pool, + (void *)free, + nb_free); + free[0] = m; + nb_free = 1; + } + } + } + rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); + } else { + for (i = 1; i < n; i++) { + m = rte_pktmbuf_prefree_seg(txep[i].mbuf); + if (m != NULL) + rte_mempool_put(m->pool, m); + } + } + +done: + /* buffers were freed, update counters */ + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); + if (txq->tx_next_dd >= txq->nb_tx_desc) + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); + + return txq->tx_rs_thresh; +} + #endif /* _COMMON_INTEL_TX_H_ */ diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index 57d6263ccf..907d32dd0b 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -16,72 +16,18 @@ #pragma GCC diagnostic ignored "-Wcast-qual" #endif +static inline int +i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx) +{ + return (txq->i40e_tx_ring[idx].cmd_type_offset_bsz & + rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) == + rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE); +} + static __rte_always_inline int i40e_tx_free_bufs(struct ci_tx_queue *txq) { - struct ci_tx_entry *txep; - uint32_t n; - uint32_t i; - int nb_free = 0; - struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ]; - - /* check DD bits on threshold descriptor */ - if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != - rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) - return 0; - - n = txq->tx_rs_thresh; - - /* first buffer to free from S/W ring is at index - * tx_next_dd - (tx_rs_thresh-1) - */ - txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)]; - - if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { - for (i = 0; i < n; i++) { - free[i] = txep[i].mbuf; - /* no need to reset txep[i].mbuf in vector path */ - } - 
rte_mempool_put_bulk(free[0]->pool, (void **)free, n); - goto done; - } - - m = rte_pktmbuf_prefree_seg(txep[0].mbuf); - if (likely(m != NULL)) { - free[0] = m; - nb_free = 1; - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (likely(m != NULL)) { - if (likely(m->pool == free[0]->pool)) { - free[nb_free++] = m; - } else { - rte_mempool_put_bulk(free[0]->pool, - (void *)free, - nb_free); - free[0] = m; - nb_free = 1; - } - } - } - rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); - } else { - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (m != NULL) - rte_mempool_put(m->pool, m); - } - } - -done: - /* buffers were freed, update counters */ - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); - txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); - if (txq->tx_next_dd >= txq->nb_tx_desc) - txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - - return txq->tx_rs_thresh; + return ci_tx_free_bufs(txq, i40e_tx_desc_done); } static inline void diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index f1bb12c4f4..7130229f23 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -16,61 +16,18 @@ #pragma GCC diagnostic ignored "-Wcast-qual" #endif +static inline int +iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx) +{ + return (txq->iavf_tx_ring[idx].cmd_type_offset_bsz & + rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) == + rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE); +} + static __rte_always_inline int iavf_tx_free_bufs(struct ci_tx_queue *txq) { - struct ci_tx_entry *txep; - uint32_t n; - uint32_t i; - int nb_free = 0; - struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF]; - - /* check DD bits on threshold descriptor */ - if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != - 
rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) - return 0; - - n = txq->tx_rs_thresh; - - /* first buffer to free from S/W ring is at index - * tx_next_dd - (tx_rs_thresh-1) - */ - txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)]; - m = rte_pktmbuf_prefree_seg(txep[0].mbuf); - if (likely(m != NULL)) { - free[0] = m; - nb_free = 1; - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (likely(m != NULL)) { - if (likely(m->pool == free[0]->pool)) { - free[nb_free++] = m; - } else { - rte_mempool_put_bulk(free[0]->pool, - (void *)free, - nb_free); - free[0] = m; - nb_free = 1; - } - } - } - rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); - } else { - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (m) - rte_mempool_put(m->pool, m); - } - } - - /* buffers were freed, update counters */ - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); - txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); - if (txq->tx_next_dd >= txq->nb_tx_desc) - txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - - return txq->tx_rs_thresh; + return ci_tx_free_bufs(txq, iavf_tx_desc_done); } static inline void diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index b39289ceb5..c6c3933299 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -12,61 +12,18 @@ #pragma GCC diagnostic ignored "-Wcast-qual" #endif +static inline int +ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx) +{ + return (txq->ice_tx_ring[idx].cmd_type_offset_bsz & + rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) == + rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE); +} + static __rte_always_inline int ice_tx_free_bufs_vec(struct ci_tx_queue *txq) { - struct ci_tx_entry *txep; - uint32_t n; - uint32_t i; - int nb_free = 0; - struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ]; - - /* check DD bits on threshold descriptor */ - if 
((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) != - rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE)) - return 0; - - n = txq->tx_rs_thresh; - - /* first buffer to free from S/W ring is at index - * tx_next_dd - (tx_rs_thresh-1) - */ - txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)]; - m = rte_pktmbuf_prefree_seg(txep[0].mbuf); - if (likely(m)) { - free[0] = m; - nb_free = 1; - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (likely(m)) { - if (likely(m->pool == free[0]->pool)) { - free[nb_free++] = m; - } else { - rte_mempool_put_bulk(free[0]->pool, - (void *)free, - nb_free); - free[0] = m; - nb_free = 1; - } - } - } - rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); - } else { - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (m) - rte_mempool_put(m->pool, m); - } - } - - /* buffers were freed, update counters */ - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); - txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); - if (txq->tx_next_dd >= txq->nb_tx_desc) - txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - - return txq->tx_rs_thresh; + return ci_tx_free_bufs(txq, ice_tx_desc_done); } static inline void

From patchwork Tue Dec 3 16:41:18 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148996
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes, Anatoly Burakov
Subject: [PATCH v2 12/22] net/_common_intel: add Tx buffer free fn for AVX-512
Date: Tue, 3 Dec 2024 16:41:18 +0000
Message-ID: <20241203164132.2686558-13-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>
X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org AVX-512 code paths for ice and i40e drivers are common, and differ from the regular post-Tx free function in that the SW ring from which the buffers are freed does not contain anything other than the mbuf pointer. Merge these into a common function in intel_common to reduce duplication. Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 92 +++++++++++++++++++ drivers/net/i40e/i40e_rxtx_vec_avx512.c | 114 +---------------------- drivers/net/ice/ice_rxtx_vec_avx512.c | 117 +----------------------- 3 files changed, 94 insertions(+), 229 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index a930309c05..84ff839672 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -178,4 +178,96 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done) return txq->tx_rs_thresh; } +static __rte_always_inline int +ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done) +{ + int nb_free = 0; + struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF]; + struct rte_mbuf *m; + + /* check DD bits on threshold descriptor */ + if (!desc_done(txq, txq->tx_next_dd)) + return 0; + + const uint32_t n = txq->tx_rs_thresh; + + /* first buffer to free from S/W ring is at index + * tx_next_dd - (tx_rs_thresh - 1) + */ + struct ci_tx_entry_vec *txep = txq->sw_ring_vec; + txep += txq->tx_next_dd - (n - 1); + + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) { + struct rte_mempool *mp = txep[0].mbuf->pool; + void **cache_objs; + struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, rte_lcore_id()); + + if (!cache || cache->len == 0) + goto normal; + + cache_objs = &cache->objs[cache->len]; + + if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) { + rte_mempool_ops_enqueue_bulk(mp, (void 
*)txep, n); + goto done; + } + + /* The cache follows the following algorithm + * 1. Add the objects to the cache + * 2. Anything greater than the cache min value (if it + * crosses the cache flush threshold) is flushed to the ring. + */ + /* Add elements back into the cache */ + uint32_t copied = 0; + /* n is multiple of 32 */ + while (copied < n) { + memcpy(&cache_objs[copied], &txep[copied], 32 * sizeof(void *)); + copied += 32; + } + cache->len += n; + + if (cache->len >= cache->flushthresh) { + rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size], + cache->len - cache->size); + cache->len = cache->size; + } + goto done; + } + +normal: + m = rte_pktmbuf_prefree_seg(txep[0].mbuf); + if (likely(m)) { + free[0] = m; + nb_free = 1; + for (uint32_t i = 1; i < n; i++) { + m = rte_pktmbuf_prefree_seg(txep[i].mbuf); + if (likely(m)) { + if (likely(m->pool == free[0]->pool)) { + free[nb_free++] = m; + } else { + rte_mempool_put_bulk(free[0]->pool, (void *)free, nb_free); + free[0] = m; + nb_free = 1; + } + } + } + rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); + } else { + for (uint32_t i = 1; i < n; i++) { + m = rte_pktmbuf_prefree_seg(txep[i].mbuf); + if (m) + rte_mempool_put(m->pool, m); + } + } + +done: + /* buffers were freed, update counters */ + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); + if (txq->tx_next_dd >= txq->nb_tx_desc) + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); + + return txq->tx_rs_thresh; +} + #endif /* _COMMON_INTEL_TX_H_ */ diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c index a3f6d1667f..9bb2a44231 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c @@ -754,118 +754,6 @@ i40e_recv_scattered_pkts_vec_avx512(void *rx_queue, rx_pkts + retval, nb_pkts); } -static __rte_always_inline int -i40e_tx_free_bufs_avx512(struct ci_tx_queue *txq) 
-{ - struct ci_tx_entry_vec *txep; - uint32_t n; - uint32_t i; - int nb_free = 0; - struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ]; - - /* check DD bits on threshold descriptor */ - if ((txq->i40e_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != - rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) - return 0; - - n = txq->tx_rs_thresh; - - /* first buffer to free from S/W ring is at index - * tx_next_dd - (tx_rs_thresh-1) - */ - txep = (void *)txq->sw_ring; - txep += txq->tx_next_dd - (n - 1); - - if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) { - struct rte_mempool *mp = txep[0].mbuf->pool; - void **cache_objs; - struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, - rte_lcore_id()); - - if (!cache || n > RTE_MEMPOOL_CACHE_MAX_SIZE) { - rte_mempool_generic_put(mp, (void *)txep, n, cache); - goto done; - } - - cache_objs = &cache->objs[cache->len]; - - /* The cache follows the following algorithm - * 1. Add the objects to the cache - * 2. Anything greater than the cache min value (if it - * crosses the cache flush threshold) is flushed to the ring. 
- */ - /* Add elements back into the cache */ - uint32_t copied = 0; - /* n is multiple of 32 */ - while (copied < n) { -#ifdef RTE_ARCH_64 - const __m512i a = _mm512_load_si512(&txep[copied]); - const __m512i b = _mm512_load_si512(&txep[copied + 8]); - const __m512i c = _mm512_load_si512(&txep[copied + 16]); - const __m512i d = _mm512_load_si512(&txep[copied + 24]); - - _mm512_storeu_si512(&cache_objs[copied], a); - _mm512_storeu_si512(&cache_objs[copied + 8], b); - _mm512_storeu_si512(&cache_objs[copied + 16], c); - _mm512_storeu_si512(&cache_objs[copied + 24], d); -#else - const __m512i a = _mm512_load_si512(&txep[copied]); - const __m512i b = _mm512_load_si512(&txep[copied + 16]); - _mm512_storeu_si512(&cache_objs[copied], a); - _mm512_storeu_si512(&cache_objs[copied + 16], b); -#endif - copied += 32; - } - cache->len += n; - - if (cache->len >= cache->flushthresh) { - rte_mempool_ops_enqueue_bulk - (mp, &cache->objs[cache->size], - cache->len - cache->size); - cache->len = cache->size; - } - goto done; - } - - m = rte_pktmbuf_prefree_seg(txep[0].mbuf); - if (likely(m)) { - free[0] = m; - nb_free = 1; - for (i = 1; i < n; i++) { - rte_mbuf_prefetch_part2(txep[i + 3].mbuf); - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (likely(m)) { - if (likely(m->pool == free[0]->pool)) { - free[nb_free++] = m; - } else { - rte_mempool_put_bulk(free[0]->pool, - (void *)free, - nb_free); - free[0] = m; - nb_free = 1; - } - } - } - rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); - } else { - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (m) - rte_mempool_put(m->pool, m); - } - } - -done: - /* buffers were freed, update counters */ - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); - txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); - if (txq->tx_next_dd >= txq->nb_tx_desc) - txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - - return txq->tx_rs_thresh; -} - static inline void vtx1(volatile 
struct i40e_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags) { @@ -941,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; if (txq->nb_tx_free < txq->tx_free_thresh) - i40e_tx_free_bufs_avx512(txq); + ci_tx_free_bufs_vec(txq, i40e_tx_desc_done); nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0)) diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c index eabd8b04a0..538be707ef 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx512.c +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c @@ -859,121 +859,6 @@ ice_recv_scattered_pkts_vec_avx512_offload(void *rx_queue, rx_pkts + retval, nb_pkts); } -static __rte_always_inline int -ice_tx_free_bufs_avx512(struct ci_tx_queue *txq) -{ - struct ci_tx_entry_vec *txep; - uint32_t n; - uint32_t i; - int nb_free = 0; - struct rte_mbuf *m, *free[ICE_TX_MAX_FREE_BUF_SZ]; - - /* check DD bits on threshold descriptor */ - if ((txq->ice_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(ICE_TXD_QW1_DTYPE_M)) != - rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE)) - return 0; - - n = txq->tx_rs_thresh; - - /* first buffer to free from S/W ring is at index - * tx_next_dd - (tx_rs_thresh - 1) - */ - txep = (void *)txq->sw_ring; - txep += txq->tx_next_dd - (n - 1); - - if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) { - struct rte_mempool *mp = txep[0].mbuf->pool; - void **cache_objs; - struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, - rte_lcore_id()); - - if (!cache || cache->len == 0) - goto normal; - - cache_objs = &cache->objs[cache->len]; - - if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) { - rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n); - goto done; - } - - /* The cache follows the following algorithm - * 1. Add the objects to the cache - * 2. 
Anything greater than the cache min value (if it - * crosses the cache flush threshold) is flushed to the ring. - */ - /* Add elements back into the cache */ - uint32_t copied = 0; - /* n is multiple of 32 */ - while (copied < n) { -#ifdef RTE_ARCH_64 - const __m512i a = _mm512_loadu_si512(&txep[copied]); - const __m512i b = _mm512_loadu_si512(&txep[copied + 8]); - const __m512i c = _mm512_loadu_si512(&txep[copied + 16]); - const __m512i d = _mm512_loadu_si512(&txep[copied + 24]); - - _mm512_storeu_si512(&cache_objs[copied], a); - _mm512_storeu_si512(&cache_objs[copied + 8], b); - _mm512_storeu_si512(&cache_objs[copied + 16], c); - _mm512_storeu_si512(&cache_objs[copied + 24], d); -#else - const __m512i a = _mm512_loadu_si512(&txep[copied]); - const __m512i b = _mm512_loadu_si512(&txep[copied + 16]); - _mm512_storeu_si512(&cache_objs[copied], a); - _mm512_storeu_si512(&cache_objs[copied + 16], b); -#endif - copied += 32; - } - cache->len += n; - - if (cache->len >= cache->flushthresh) { - rte_mempool_ops_enqueue_bulk - (mp, &cache->objs[cache->size], - cache->len - cache->size); - cache->len = cache->size; - } - goto done; - } - -normal: - m = rte_pktmbuf_prefree_seg(txep[0].mbuf); - if (likely(m)) { - free[0] = m; - nb_free = 1; - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (likely(m)) { - if (likely(m->pool == free[0]->pool)) { - free[nb_free++] = m; - } else { - rte_mempool_put_bulk(free[0]->pool, - (void *)free, - nb_free); - free[0] = m; - nb_free = 1; - } - } - } - rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); - } else { - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (m) - rte_mempool_put(m->pool, m); - } - } - -done: - /* buffers were freed, update counters */ - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); - txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); - if (txq->tx_next_dd >= txq->nb_tx_desc) - txq->tx_next_dd = 
(uint16_t)(txq->tx_rs_thresh - 1); - - return txq->tx_rs_thresh; -} - static __rte_always_inline void ice_vtx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags, bool do_offload) @@ -1064,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh); if (txq->nb_tx_free < txq->tx_free_thresh) - ice_tx_free_bufs_avx512(txq); + ci_tx_free_bufs_vec(txq, ice_tx_desc_done); nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0))

From patchwork Tue Dec 3 16:41:19 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148997
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Konstantin Ananyev, Ian Stokes, Vladimir Medvedkin, Anatoly Burakov
Subject: [PATCH v2 13/22] net/iavf: use common Tx free fn for AVX-512
Date: Tue, 3 Dec 2024 16:41:19 +0000
Message-ID: <20241203164132.2686558-14-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

Switch the iavf driver to use the common Tx free function. This requires one additional parameter to that function, since iavf sometimes uses context descriptors which means that we have double the descriptors per SW ring slot.
Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 6 +- drivers/net/i40e/i40e_rxtx_vec_avx512.c | 2 +- drivers/net/iavf/iavf_rxtx_vec_avx512.c | 119 +----------------------- drivers/net/ice/ice_rxtx_vec_avx512.c | 2 +- 4 files changed, 7 insertions(+), 122 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index 84ff839672..26aef528fa 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -179,7 +179,7 @@ ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done) } static __rte_always_inline int -ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done) +ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs) { int nb_free = 0; struct rte_mbuf *free[IETH_VPMD_TX_MAX_FREE_BUF]; @@ -189,13 +189,13 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done) if (!desc_done(txq, txq->tx_next_dd)) return 0; - const uint32_t n = txq->tx_rs_thresh; + const uint32_t n = txq->tx_rs_thresh >> ctx_descs; /* first buffer to free from S/W ring is at index * tx_next_dd - (tx_rs_thresh - 1) */ struct ci_tx_entry_vec *txep = txq->sw_ring_vec; - txep += txq->tx_next_dd - (n - 1); + txep += (txq->tx_next_dd >> ctx_descs) - (n - 1); if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) { struct rte_mempool *mp = txep[0].mbuf->pool; diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c index 9bb2a44231..c555c3491d 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c +++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c @@ -829,7 +829,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; if (txq->nb_tx_free < txq->tx_free_thresh) - ci_tx_free_bufs_vec(txq, i40e_tx_desc_done); + ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false); nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if 
(unlikely(nb_pkts == 0)) diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c index 9cf7171524..8543490c70 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c +++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c @@ -1844,121 +1844,6 @@ iavf_recv_scattered_pkts_vec_avx512_flex_rxd_offload(void *rx_queue, true); } -static __rte_always_inline int -iavf_tx_free_bufs_avx512(struct ci_tx_queue *txq) -{ - struct ci_tx_entry_vec *txep; - uint32_t n; - uint32_t i; - int nb_free = 0; - struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF]; - - /* check DD bits on threshold descriptor */ - if ((txq->iavf_tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != - rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) - return 0; - - n = txq->tx_rs_thresh >> txq->use_ctx; - - /* first buffer to free from S/W ring is at index - * tx_next_dd - (tx_rs_thresh-1) - */ - txep = (void *)txq->sw_ring; - txep += (txq->tx_next_dd >> txq->use_ctx) - (n - 1); - - if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) { - struct rte_mempool *mp = txep[0].mbuf->pool; - struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, - rte_lcore_id()); - void **cache_objs; - - if (!cache || cache->len == 0) - goto normal; - - cache_objs = &cache->objs[cache->len]; - - if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) { - rte_mempool_ops_enqueue_bulk(mp, (void *)txep, n); - goto done; - } - - /* The cache follows the following algorithm - * 1. Add the objects to the cache - * 2. Anything greater than the cache min value (if it crosses the - * cache flush threshold) is flushed to the ring. 
- */ - /* Add elements back into the cache */ - uint32_t copied = 0; - /* n is multiple of 32 */ - while (copied < n) { -#ifdef RTE_ARCH_64 - const __m512i a = _mm512_loadu_si512(&txep[copied]); - const __m512i b = _mm512_loadu_si512(&txep[copied + 8]); - const __m512i c = _mm512_loadu_si512(&txep[copied + 16]); - const __m512i d = _mm512_loadu_si512(&txep[copied + 24]); - - _mm512_storeu_si512(&cache_objs[copied], a); - _mm512_storeu_si512(&cache_objs[copied + 8], b); - _mm512_storeu_si512(&cache_objs[copied + 16], c); - _mm512_storeu_si512(&cache_objs[copied + 24], d); -#else - const __m512i a = _mm512_loadu_si512(&txep[copied]); - const __m512i b = _mm512_loadu_si512(&txep[copied + 16]); - _mm512_storeu_si512(&cache_objs[copied], a); - _mm512_storeu_si512(&cache_objs[copied + 16], b); -#endif - copied += 32; - } - cache->len += n; - - if (cache->len >= cache->flushthresh) { - rte_mempool_ops_enqueue_bulk(mp, - &cache->objs[cache->size], - cache->len - cache->size); - cache->len = cache->size; - } - goto done; - } - -normal: - m = rte_pktmbuf_prefree_seg(txep[0].mbuf); - if (likely(m)) { - free[0] = m; - nb_free = 1; - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (likely(m)) { - if (likely(m->pool == free[0]->pool)) { - free[nb_free++] = m; - } else { - rte_mempool_put_bulk(free[0]->pool, - (void *)free, - nb_free); - free[0] = m; - nb_free = 1; - } - } - } - rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free); - } else { - for (i = 1; i < n; i++) { - m = rte_pktmbuf_prefree_seg(txep[i].mbuf); - if (m) - rte_mempool_put(m->pool, m); - } - } - -done: - /* buffers were freed, update counters */ - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); - txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); - if (txq->tx_next_dd >= txq->nb_tx_desc) - txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - - return txq->tx_rs_thresh; -} - static __rte_always_inline void tx_backlog_entry_avx512(struct 
ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) @@ -2320,7 +2205,7 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; if (txq->nb_tx_free < txq->tx_free_thresh) - iavf_tx_free_bufs_avx512(txq); + ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false); nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0)) @@ -2388,7 +2273,7 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t rs = IAVF_TX_DESC_CMD_RS | flags; if (txq->nb_tx_free < txq->tx_free_thresh) - iavf_tx_free_bufs_avx512(txq); + ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, true); nb_commit = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts << 1); nb_commit &= 0xFFFE; diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c index 538be707ef..f6ec593f96 100644 --- a/drivers/net/ice/ice_rxtx_vec_avx512.c +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c @@ -949,7 +949,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh); if (txq->nb_tx_free < txq->tx_free_thresh) - ci_tx_free_bufs_vec(txq, ice_tx_desc_done); + ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false); nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0))

From patchwork Tue Dec 3 16:41:20 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148998
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson , Anatoly Burakov , Konstantin Ananyev
Subject: [PATCH v2 14/22] net/ice: move Tx queue mbuf cleanup fn to common
Date: Tue, 3 Dec 2024 16:41:20 +0000
Message-ID: <20241203164132.2686558-15-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com>
<20241203164132.2686558-1-bruce.richardson@intel.com>

The functions to loop over the Tx queue and clean up all the mbufs on it, e.g. for queue shutdown, are not device specific and so can move into the common_intel headers. The only complication is ensuring that the correct ring format, either minimal vector or full structure, is used. The ice driver currently uses two functions and a function pointer to help with this - though one of those functions actually does a further check inside it - so we can simplify this down to just one common function, with a flag set in the appropriate place. This avoids checking for the AVX-512-specific functions, which were the only ones using the smaller struct in this driver. Signed-off-by: Bruce Richardson --- drivers/net/_common_intel/tx.h | 49 ++++++++++++++++++++++++- drivers/net/ice/ice_dcf_ethdev.c | 5 +-- drivers/net/ice/ice_ethdev.h | 3 +- drivers/net/ice/ice_rxtx.c | 33 +++++------------ drivers/net/ice/ice_rxtx_vec_common.h | 51 --------------------------- drivers/net/ice/ice_rxtx_vec_sse.c | 4 --- 6 files changed, 60 insertions(+), 85 deletions(-) diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h index 26aef528fa..1bf2a61b2f 100644 --- a/drivers/net/_common_intel/tx.h +++ b/drivers/net/_common_intel/tx.h @@ -65,6 +65,8 @@ struct ci_tx_queue { rte_iova_t tx_ring_dma; /* TX ring DMA address */ bool tx_deferred_start; /* don't start this queue in dev start */ bool q_set; /* indicate if tx queue has been configured */ + bool vector_tx; /* port is using vector TX */ + bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */ union { /* the VSI this queue belongs to */ struct i40e_vsi *i40e_vsi; struct iavf_vsi *iavf_vsi; @@ -74,7 +76,6 @@ struct ci_tx_queue {
union { struct { /* ICE driver specific values */ - ice_tx_release_mbufs_t tx_rel_mbufs; uint32_t q_teid; /* TX schedule node id. */ }; struct { /* I40E driver specific values */ @@ -270,4 +271,50 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx return txq->tx_rs_thresh; } +#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \ + uint16_t i = start; \ + if (txq->tx_tail < i) { \ + for (; i < txq->nb_tx_desc; i++) { \ + rte_pktmbuf_free_seg(swr[i].mbuf); \ + swr[i].mbuf = NULL; \ + } \ + i = 0; \ + } \ + for (; i < txq->tx_tail; i++) { \ + rte_pktmbuf_free_seg(swr[i].mbuf); \ + swr[i].mbuf = NULL; \ + } \ +} while (0) + +static inline void +ci_txq_release_all_mbufs(struct ci_tx_queue *txq) +{ + if (unlikely(!txq || !txq->sw_ring)) + return; + + if (!txq->vector_tx) { + for (uint16_t i = 0; i < txq->nb_tx_desc; i++) { + if (txq->sw_ring[i].mbuf != NULL) { + rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); + txq->sw_ring[i].mbuf = NULL; + } + } + return; + } + + /** + * vPMD tx will not set sw_ring's mbuf to NULL after free, + * so need to free remains more carefully. 
+ */ + const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1; + + if (txq->vector_sw_ring) { + struct ci_tx_entry_vec *swr = txq->sw_ring_vec; + IETH_FREE_BUFS_LOOP(txq, swr, start); + } else { + struct ci_tx_entry *swr = txq->sw_ring; + IETH_FREE_BUFS_LOOP(txq, swr, start); + } +} + #endif /* _COMMON_INTEL_TX_H_ */ diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index a0c065d78c..c20399cd84 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -24,6 +24,7 @@ #include "ice_generic_flow.h" #include "ice_dcf_ethdev.h" #include "ice_rxtx.h" +#include "_common_intel/tx.h" #define DCF_NUM_MACADDR_MAX 64 @@ -500,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) } txq = dev->data->tx_queues[tx_queue_id]; - txq->tx_rel_mbufs(txq); + ci_txq_release_all_mbufs(txq); reset_tx_queue(txq); dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -650,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev) txq = dev->data->tx_queues[i]; if (!txq) continue; - txq->tx_rel_mbufs(txq); + ci_txq_release_all_mbufs(txq); reset_tx_queue(txq); dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; } diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index ba54655499..afe8dae497 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -621,13 +621,12 @@ struct ice_adapter { /* Set bit if the engine is disabled */ unsigned long disabled_engine_mask; struct ice_parser *psr; -#ifdef RTE_ARCH_X86 + /* used only on X86, zero on other Archs */ bool rx_use_avx2; bool rx_use_avx512; bool tx_use_avx2; bool tx_use_avx512; bool rx_vec_offload_support; -#endif }; struct ice_vsi_vlan_pvid_info { diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index e2e147ba3e..0a890e587c 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -751,6 +751,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, 
uint16_t tx_queue_id) struct ice_aqc_add_tx_qgrp *txq_elem; struct ice_tlan_ctx tx_ctx; int buf_len; + struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); PMD_INIT_FUNC_TRACE(); @@ -822,6 +823,10 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) return -EIO; } + /* record what kind of descriptor cleanup we need on teardown */ + txq->vector_tx = ad->tx_vec_allowed; + txq->vector_sw_ring = ad->tx_use_avx512; + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; rte_free(txq_elem); @@ -1006,25 +1011,6 @@ ice_fdir_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } -/* Free all mbufs for descriptors in tx queue */ -static void -_ice_tx_queue_release_mbufs(struct ci_tx_queue *txq) -{ - uint16_t i; - - if (!txq || !txq->sw_ring) { - PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL"); - return; - } - - for (i = 0; i < txq->nb_tx_desc; i++) { - if (txq->sw_ring[i].mbuf) { - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); - txq->sw_ring[i].mbuf = NULL; - } - } -} - static void ice_reset_tx_queue(struct ci_tx_queue *txq) { @@ -1103,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return -EINVAL; } - txq->tx_rel_mbufs(txq); + ci_txq_release_all_mbufs(txq); ice_reset_tx_queue(txq); dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -1166,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return -EINVAL; } - txq->tx_rel_mbufs(txq); + ci_txq_release_all_mbufs(txq); txq->qtx_tail = NULL; return 0; @@ -1518,7 +1504,6 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, ice_reset_tx_queue(txq); txq->q_set = true; dev->data->tx_queues[queue_idx] = txq; - txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs; ice_set_tx_function_flag(dev, txq); return 0; @@ -1546,8 +1531,7 @@ ice_tx_queue_release(void *txq) return; } - if (q->tx_rel_mbufs != NULL) - q->tx_rel_mbufs(q); + ci_txq_release_all_mbufs(q); rte_free(q->sw_ring); 
rte_memzone_free(q->mz); rte_free(q); @@ -2460,7 +2444,6 @@ ice_fdir_setup_tx_resources(struct ice_pf *pf) txq->q_set = true; pf->fdir.txq = txq; - txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs; return ICE_SUCCESS; } diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index c6c3933299..907828b675 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -61,57 +61,6 @@ _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq) memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc); } -static inline void -_ice_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) -{ - uint16_t i; - - if (unlikely(!txq || !txq->sw_ring)) { - PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL"); - return; - } - - /** - * vPMD tx will not set sw_ring's mbuf to NULL after free, - * so need to free remains more carefully. - */ - i = txq->tx_next_dd - txq->tx_rs_thresh + 1; - -#ifdef __AVX512VL__ - struct rte_eth_dev *dev = &rte_eth_devices[txq->ice_vsi->adapter->pf.dev_data->port_id]; - - if (dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512 || - dev->tx_pkt_burst == ice_xmit_pkts_vec_avx512_offload) { - struct ci_tx_entry_vec *swr = (void *)txq->sw_ring; - - if (txq->tx_tail < i) { - for (; i < txq->nb_tx_desc; i++) { - rte_pktmbuf_free_seg(swr[i].mbuf); - swr[i].mbuf = NULL; - } - i = 0; - } - for (; i < txq->tx_tail; i++) { - rte_pktmbuf_free_seg(swr[i].mbuf); - swr[i].mbuf = NULL; - } - } else -#endif - { - if (txq->tx_tail < i) { - for (; i < txq->nb_tx_desc; i++) { - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); - txq->sw_ring[i].mbuf = NULL; - } - i = 0; - } - for (; i < txq->tx_tail; i++) { - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); - txq->sw_ring[i].mbuf = NULL; - } - } -} - static inline int ice_rxq_vec_setup_default(struct ice_rx_queue *rxq) { diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index f11528385a..bff39c28d8 100644 --- 
a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -795,10 +795,6 @@ ice_rxq_vec_setup(struct ice_rx_queue *rxq) int __rte_cold ice_txq_vec_setup(struct ci_tx_queue *txq __rte_unused) { - if (!txq) - return -1; - - txq->tx_rel_mbufs = _ice_tx_queue_release_mbufs_vec; return 0; }

From patchwork Tue Dec 3 16:41:21 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 148999
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson , Ian Stokes
Subject: [PATCH v2 15/22] net/i40e: use common Tx queue mbuf cleanup fn
Date: Tue, 3 Dec 2024 16:41:21 +0000
Message-ID: <20241203164132.2686558-16-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

Update driver to be similar to the "ice" driver and use the common mbuf ring cleanup code on shutdown of a Tx queue.
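The wrap-around freeing logic these cleanup patches consolidate can be sketched in miniature. The following is an illustrative model, not driver code - `toy_txq` and `toy_release_all_mbufs()` are hypothetical stand-ins, with pointer-NULLing in place of `rte_pktmbuf_free_seg()`. It mirrors the vector path of the common `ci_txq_release_all_mbufs()`: since the vector PMDs do not NULL software-ring entries as they bulk-free mbufs, only the window from `tx_next_dd - tx_rs_thresh + 1` up to `tx_tail` still holds live mbufs, and that window may wrap past the end of the ring.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature of a Tx queue; "freeing" an entry just NULLs it
 * and bumps a counter, standing in for rte_pktmbuf_free_seg().
 */
struct toy_txq {
	void *sw_ring[8];      /* stand-in for the ci_tx_entry mbuf slots */
	uint16_t nb_tx_desc;   /* ring size */
	uint16_t tx_tail;      /* next slot software would write */
	uint16_t tx_next_dd;   /* last descriptor known to be done */
	uint16_t tx_rs_thresh; /* free-batch size */
};

static unsigned int
toy_release_all_mbufs(struct toy_txq *txq)
{
	unsigned int freed = 0;
	/* first entry that may still hold a live mbuf */
	uint16_t i = (uint16_t)(txq->tx_next_dd - txq->tx_rs_thresh + 1);

	if (txq->tx_tail < i) { /* live window wraps past the ring end */
		for (; i < txq->nb_tx_desc; i++) {
			txq->sw_ring[i] = NULL;
			freed++;
		}
		i = 0;
	}
	for (; i < txq->tx_tail; i++) {
		txq->sw_ring[i] = NULL;
		freed++;
	}
	return freed;
}
```

The scalar path of the common function, by contrast, can simply walk all `nb_tx_desc` entries with a NULL check, because the non-vector transmit code does NULL each entry as it frees it.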
Signed-off-by: Bruce Richardson --- drivers/net/i40e/i40e_ethdev.h | 4 +- drivers/net/i40e/i40e_rxtx.c | 70 ++++------------------------------ drivers/net/i40e/i40e_rxtx.h | 1 - 3 files changed, 9 insertions(+), 66 deletions(-) diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h index d351193ed9..ccc8732d7d 100644 --- a/drivers/net/i40e/i40e_ethdev.h +++ b/drivers/net/i40e/i40e_ethdev.h @@ -1260,12 +1260,12 @@ struct i40e_adapter { /* For RSS reta table update */ uint8_t rss_reta_updated; -#ifdef RTE_ARCH_X86 + + /* used only on x86, zero on other architectures */ bool rx_use_avx2; bool rx_use_avx512; bool tx_use_avx2; bool tx_use_avx512; -#endif }; /** diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 539b170266..b70919c5dc 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -1875,6 +1875,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int err; struct ci_tx_queue *txq; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + const struct i40e_adapter *ad = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); PMD_INIT_FUNC_TRACE(); @@ -1889,6 +1890,9 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) PMD_DRV_LOG(WARNING, "TX queue %u is deferred start", tx_queue_id); + txq->vector_tx = ad->tx_vec_allowed; + txq->vector_sw_ring = ad->tx_use_avx512; + /* * tx_queue_id is queue id application refers to, while * rxq->reg_idx is the real queue index. 
@@ -1929,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return err; } - i40e_tx_queue_release_mbufs(txq); + ci_txq_release_all_mbufs(txq); i40e_reset_tx_queue(txq); dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; @@ -2604,7 +2608,7 @@ i40e_tx_queue_release(void *txq) return; } - i40e_tx_queue_release_mbufs(q); + ci_txq_release_all_mbufs(q); rte_free(q->sw_ring); rte_memzone_free(q->mz); rte_free(q); @@ -2701,66 +2705,6 @@ i40e_reset_rx_queue(struct i40e_rx_queue *rxq) rxq->rxrearm_nb = 0; } -void -i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq) -{ - struct rte_eth_dev *dev; - uint16_t i; - - if (!txq || !txq->sw_ring) { - PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL"); - return; - } - - dev = &rte_eth_devices[txq->port_id]; - - /** - * vPMD tx will not set sw_ring's mbuf to NULL after free, - * so need to free remains more carefully. - */ -#ifdef CC_AVX512_SUPPORT - if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx512) { - struct ci_tx_entry_vec *swr = (void *)txq->sw_ring; - - i = txq->tx_next_dd - txq->tx_rs_thresh + 1; - if (txq->tx_tail < i) { - for (; i < txq->nb_tx_desc; i++) { - rte_pktmbuf_free_seg(swr[i].mbuf); - swr[i].mbuf = NULL; - } - i = 0; - } - for (; i < txq->tx_tail; i++) { - rte_pktmbuf_free_seg(swr[i].mbuf); - swr[i].mbuf = NULL; - } - return; - } -#endif - if (dev->tx_pkt_burst == i40e_xmit_pkts_vec_avx2 || - dev->tx_pkt_burst == i40e_xmit_pkts_vec) { - i = txq->tx_next_dd - txq->tx_rs_thresh + 1; - if (txq->tx_tail < i) { - for (; i < txq->nb_tx_desc; i++) { - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); - txq->sw_ring[i].mbuf = NULL; - } - i = 0; - } - for (; i < txq->tx_tail; i++) { - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); - txq->sw_ring[i].mbuf = NULL; - } - } else { - for (i = 0; i < txq->nb_tx_desc; i++) { - if (txq->sw_ring[i].mbuf) { - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); - txq->sw_ring[i].mbuf = NULL; - } - } - } -} - static int 
i40e_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt) @@ -3127,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_tx_queues; i++) { if (!dev->data->tx_queues[i]) continue; - i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]); + ci_txq_release_all_mbufs(dev->data->tx_queues[i]); i40e_reset_tx_queue(dev->data->tx_queues[i]); } diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index 043d1df912..858b8433e9 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ -179,7 +179,6 @@ void i40e_dev_clear_queues(struct rte_eth_dev *dev); void i40e_dev_free_queues(struct rte_eth_dev *dev); void i40e_reset_rx_queue(struct i40e_rx_queue *rxq); void i40e_reset_tx_queue(struct ci_tx_queue *txq); -void i40e_tx_queue_release_mbufs(struct ci_tx_queue *txq); int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt); int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq); void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);

From patchwork Tue Dec 3 16:41:22 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 149000
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson , Anatoly Burakov , Vladimir Medvedkin , Wathsala Vithanage , Konstantin Ananyev
Subject: [PATCH v2 16/22] net/ixgbe: use common Tx queue mbuf cleanup fn
Date: Tue, 3 Dec 2024 16:41:22 +0000
Message-ID: <20241203164132.2686558-17-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

Update driver to use the common cleanup function.
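Several patches in this series thread a boolean through the common helpers for queues that use context descriptors: patch 01 adds `ctx_descs` to `ci_tx_free_bufs_vec()`, and patch 17 later adds `use_ctx` to the release function. A minimal sketch of the index arithmetic follows; `sw_ring_count()` and `sw_ring_first_free_idx()` are hypothetical names invented for illustration, not DPDK functions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* When a queue uses context descriptors, each packet occupies two ring
 * descriptors (context + data), so descriptor-based counts and indices
 * must be halved to obtain software-ring values.  Shifting right by the
 * boolean flag (0 or 1) covers both layouts with one branch-free
 * expression, mirroring `tx_rs_thresh >> ctx_descs` in the common code.
 */
static uint16_t
sw_ring_count(uint16_t tx_rs_thresh, bool ctx_descs)
{
	/* number of sw_ring entries covered by one free batch */
	return tx_rs_thresh >> ctx_descs;
}

static uint16_t
sw_ring_first_free_idx(uint16_t tx_next_dd, uint16_t tx_rs_thresh, bool ctx_descs)
{
	/* first sw_ring entry to free: tx_next_dd - (tx_rs_thresh - 1),
	 * with both values scaled down in context-descriptor mode
	 */
	const uint16_t n = tx_rs_thresh >> ctx_descs;

	return (uint16_t)((tx_next_dd >> ctx_descs) - (n - 1));
}
```

This is why drivers without context descriptors (i40e, ice, ixgbe) can simply pass a constant `false` and compile down to the old arithmetic.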
Signed-off-by: Bruce Richardson --- drivers/net/ixgbe/ixgbe_rxtx.c | 22 +++--------------- drivers/net/ixgbe/ixgbe_rxtx.h | 1 - drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 28 ++--------------------- drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 7 ------ drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 7 ------ 5 files changed, 5 insertions(+), 60 deletions(-) diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index 344ef85685..bf9d461b06 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -2334,21 +2334,6 @@ ixgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, * **********************************************************************/ -static void __rte_cold -ixgbe_tx_queue_release_mbufs(struct ci_tx_queue *txq) -{ - unsigned i; - - if (txq->sw_ring != NULL) { - for (i = 0; i < txq->nb_tx_desc; i++) { - if (txq->sw_ring[i].mbuf != NULL) { - rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); - txq->sw_ring[i].mbuf = NULL; - } - } - } -} - static int ixgbe_tx_done_cleanup_full(struct ci_tx_queue *txq, uint32_t free_cnt) { @@ -2472,7 +2457,7 @@ static void __rte_cold ixgbe_tx_queue_release(struct ci_tx_queue *txq) { if (txq != NULL && txq->ops != NULL) { - txq->ops->release_mbufs(txq); + ci_txq_release_all_mbufs(txq); txq->ops->free_swring(txq); rte_memzone_free(txq->mz); rte_free(txq); @@ -2526,7 +2511,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq) } static const struct ixgbe_txq_ops def_txq_ops = { - .release_mbufs = ixgbe_tx_queue_release_mbufs, .free_swring = ixgbe_tx_free_swring, .reset = ixgbe_reset_tx_queue, }; @@ -3380,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev) struct ci_tx_queue *txq = dev->data->tx_queues[i]; if (txq != NULL) { - txq->ops->release_mbufs(txq); + ci_txq_release_all_mbufs(txq); txq->ops->reset(txq); dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; } @@ -5655,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) } if (txq->ops 
!= NULL) { - txq->ops->release_mbufs(txq); + ci_txq_release_all_mbufs(txq); txq->ops->reset(txq); } dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 4333e5bf2f..11689eb432 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -181,7 +181,6 @@ struct ixgbe_advctx_info { }; struct ixgbe_txq_ops { - void (*release_mbufs)(struct ci_tx_queue *txq); void (*free_swring)(struct ci_tx_queue *txq); void (*reset)(struct ci_tx_queue *txq); }; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h index 81fd8bb64d..65794e45cb 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h @@ -78,32 +78,6 @@ tx_backlog_entry(struct ci_tx_entry_vec *txep, txep[i].mbuf = tx_pkts[i]; } -static inline void -_ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) -{ - unsigned int i; - struct ci_tx_entry_vec *txe; - const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); - - if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc) - return; - - /* release the used mbufs in sw_ring */ - for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1); - i != txq->tx_tail; - i = (i + 1) % txq->nb_tx_desc) { - txe = &txq->sw_ring_vec[i]; - rte_pktmbuf_free_seg(txe->mbuf); - } - txq->nb_tx_free = max_desc; - - /* reset tx_entry */ - for (i = 0; i < txq->nb_tx_desc; i++) { - txe = &txq->sw_ring_vec[i]; - txe->mbuf = NULL; - } -} - static inline void _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq) { @@ -208,6 +182,8 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq, /* leave the first one for overflow */ txq->sw_ring_vec = txq->sw_ring_vec + 1; txq->ops = txq_ops; + txq->vector_tx = 1; + txq->vector_sw_ring = 1; return 0; } diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c index cb749a3760..2ccb399b64 100644 --- 
a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c @@ -633,12 +633,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_pkts; } -static void __rte_cold -ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) -{ - _ixgbe_tx_queue_release_mbufs_vec(txq); -} - void __rte_cold ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq) { @@ -658,7 +652,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq) } static const struct ixgbe_txq_ops vec_txq_ops = { - .release_mbufs = ixgbe_tx_queue_release_mbufs_vec, .free_swring = ixgbe_tx_free_swring, .reset = ixgbe_reset_tx_queue, }; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index e46550f76a..fa26365f06 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -756,12 +756,6 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_pkts; } -static void __rte_cold -ixgbe_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq) -{ - _ixgbe_tx_queue_release_mbufs_vec(txq); -} - void __rte_cold ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq) { @@ -781,7 +775,6 @@ ixgbe_reset_tx_queue(struct ci_tx_queue *txq) } static const struct ixgbe_txq_ops vec_txq_ops = { - .release_mbufs = ixgbe_tx_queue_release_mbufs_vec, .free_swring = ixgbe_tx_free_swring, .reset = ixgbe_reset_tx_queue, };

From patchwork Tue Dec 3 16:41:23 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 149001
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson , Ian Stokes , Vladimir Medvedkin , Konstantin Ananyev , Anatoly Burakov
Subject: [PATCH v2 17/22] net/iavf: use common Tx queue mbuf cleanup fn
Date: Tue, 3 Dec 2024 16:41:23 +0000
Message-ID: <20241203164132.2686558-18-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

Adjust the iavf driver to also use the common mbuf freeing functions on Tx queue release/cleanup. The implementation is slightly complicated by the need to integrate the additional "has_ctx" parameter for the iavf code, but the changes in the other drivers are minimal - just a constant "false" parameter.

Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/tx.h          | 27 +++++++++---------
 drivers/net/i40e/i40e_rxtx.c            |  6 ++--
 drivers/net/iavf/iavf_rxtx.c            | 37 ++-----------------------
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 24 ++--------------
 drivers/net/iavf/iavf_rxtx_vec_common.h | 18 ------------
 drivers/net/iavf/iavf_rxtx_vec_sse.c    |  9 ++----
 drivers/net/ice/ice_dcf_ethdev.c        |  4 +--
 drivers/net/ice/ice_rxtx.c              |  6 ++--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  6 ++--
 9 files changed, 31 insertions(+), 106 deletions(-)

diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 1bf2a61b2f..310b51adcf 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -271,23 +271,23 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
 	return txq->tx_rs_thresh;
 }
 
-#define IETH_FREE_BUFS_LOOP(txq, swr, start) do { \
+#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
 		uint16_t i = start; \
-		if (txq->tx_tail < i) { \
-			for (; i < txq->nb_tx_desc; i++) { \
+		if (end < i) { \
+			for (; i < nb_desc; i++) { \
 				rte_pktmbuf_free_seg(swr[i].mbuf); \
 				swr[i].mbuf = NULL; \
 			} \
 			i = 0; \
 		} \
-		for (; i < txq->tx_tail; i++) { \
+		for (; i < end; i++) { \
 			rte_pktmbuf_free_seg(swr[i].mbuf); \
 			swr[i].mbuf = NULL; \
 		} \
 	} while (0)
 
 static inline void
-ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
+ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 {
 	if (unlikely(!txq || !txq->sw_ring))
 		return;
@@ -306,15 +306,14 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq)
 	 * vPMD tx will not set sw_ring's mbuf to NULL after free,
 	 * so need to free remains more carefully.
 	 */
-	const uint16_t start = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-
-	if (txq->vector_sw_ring) {
-		struct ci_tx_entry_vec *swr = txq->sw_ring_vec;
-		IETH_FREE_BUFS_LOOP(txq, swr, start);
-	} else {
-		struct ci_tx_entry *swr = txq->sw_ring;
-		IETH_FREE_BUFS_LOOP(txq, swr, start);
-	}
+	const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
+	const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
+	const uint16_t end = txq->tx_tail >> use_ctx;
+
+	if (txq->vector_sw_ring)
+		IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
+	else
+		IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
 }
 
 #endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index b70919c5dc..081d743e62 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1933,7 +1933,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		return err;
 	}
 
-	ci_txq_release_all_mbufs(txq);
+	ci_txq_release_all_mbufs(txq, false);
 	i40e_reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -2608,7 +2608,7 @@ i40e_tx_queue_release(void *txq)
 		return;
 	}
 
-	ci_txq_release_all_mbufs(q);
+	ci_txq_release_all_mbufs(q, false);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -3071,7 +3071,7 @@ i40e_dev_clear_queues(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (!dev->data->tx_queues[i])
 			continue;
-		ci_txq_release_all_mbufs(dev->data->tx_queues[i]);
+		ci_txq_release_all_mbufs(dev->data->tx_queues[i], false);
 		i40e_reset_tx_queue(dev->data->tx_queues[i]);
 	}
 
diff --git
a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 7e381b2a17..f0ab881ac5 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -387,24 +387,6 @@ release_rxq_mbufs(struct iavf_rx_queue *rxq)
 	rxq->rx_nb_avail = 0;
 }
 
-static inline void
-release_txq_mbufs(struct ci_tx_queue *txq)
-{
-	uint16_t i;
-
-	if (!txq || !txq->sw_ring) {
-		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
-		return;
-	}
-
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		if (txq->sw_ring[i].mbuf) {
-			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
-			txq->sw_ring[i].mbuf = NULL;
-		}
-	}
-}
-
 static const
 struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
 	[IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_rxq_mbufs,
@@ -413,18 +395,6 @@ struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
 #endif
 };
 
-static const
-struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
-	[IAVF_REL_MBUFS_DEFAULT].release_mbufs = release_txq_mbufs,
-#ifdef RTE_ARCH_X86
-	[IAVF_REL_MBUFS_SSE_VEC].release_mbufs = iavf_tx_queue_release_mbufs_sse,
-#ifdef CC_AVX512_SUPPORT
-	[IAVF_REL_MBUFS_AVX512_VEC].release_mbufs = iavf_tx_queue_release_mbufs_avx512,
-#endif
-#endif
-
-};
-
 static inline void
 iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
 				    struct rte_mbuf *mb,
@@ -889,7 +859,6 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
-	txq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
 
 	if (check_tx_vec_allow(txq) == false) {
 		struct iavf_adapter *ad =
@@ -1068,7 +1037,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+	ci_txq_release_all_mbufs(txq, txq->use_ctx);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -1097,7 +1066,7 @@ iavf_dev_tx_queue_release(struct
rte_eth_dev *dev, uint16_t qid)
 	if (!q)
 		return;
 
-	iavf_txq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
+	ci_txq_release_all_mbufs(q, q->use_ctx);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
@@ -1114,7 +1083,7 @@ iavf_reset_queues(struct rte_eth_dev *dev)
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
+		ci_txq_release_all_mbufs(txq, txq->use_ctx);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 8543490c70..007759e451 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,31 +2357,11 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
 }
 
-void __rte_cold
-iavf_tx_queue_release_mbufs_avx512(struct ci_tx_queue *txq)
-{
-	unsigned int i;
-	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-	const uint16_t end_desc = txq->tx_tail >> txq->use_ctx; /* next empty slot */
-	const uint16_t wrap_point = txq->nb_tx_desc >> txq->use_ctx; /* end of SW ring */
-	struct ci_tx_entry_vec *swr = (void *)txq->sw_ring;
-
-	if (!txq->sw_ring || txq->nb_tx_free == max_desc)
-		return;
-
-	i = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> txq->use_ctx;
-	while (i != end_desc) {
-		rte_pktmbuf_free_seg(swr[i].mbuf);
-		swr[i].mbuf = NULL;
-		if (++i == wrap_point)
-			i = 0;
-	}
-}
-
 int __rte_cold
 iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
 {
-	txq->rel_mbufs_type = IAVF_REL_MBUFS_AVX512_VEC;
+	txq->vector_tx = true;
+	txq->vector_sw_ring = true;
 	return 0;
 }
 
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 7130229f23..6f94587eee 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -60,24 +60,6 @@
_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
 	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
 }
 
-static inline void
-_iavf_tx_queue_release_mbufs_vec(struct ci_tx_queue *txq)
-{
-	unsigned i;
-	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-
-	if (!txq->sw_ring || txq->nb_tx_free == max_desc)
-		return;
-
-	i = txq->tx_next_dd - txq->tx_rs_thresh + 1;
-	while (i != txq->tx_tail) {
-		rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
-		txq->sw_ring[i].mbuf = NULL;
-		if (++i == txq->nb_tx_desc)
-			i = 0;
-	}
-}
-
 static inline int
 iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
 {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 5c0b2fff46..3adf2a59e4 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1458,16 +1458,11 @@ iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
 	_iavf_rx_queue_release_mbufs_vec(rxq);
 }
 
-void __rte_cold
-iavf_tx_queue_release_mbufs_sse(struct ci_tx_queue *txq)
-{
-	_iavf_tx_queue_release_mbufs_vec(txq);
-}
-
 int __rte_cold
 iavf_txq_vec_setup(struct ci_tx_queue *txq)
 {
-	txq->rel_mbufs_type = IAVF_REL_MBUFS_SSE_VEC;
+	txq->vector_tx = true;
+	txq->vector_sw_ring = false;
 	return 0;
 }
 
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c20399cd84..57fe44ebb3 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -501,7 +501,7 @@ ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	txq = dev->data->tx_queues[tx_queue_id];
-	ci_txq_release_all_mbufs(txq);
+	ci_txq_release_all_mbufs(txq, false);
 	reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -651,7 +651,7 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev)
 		txq = dev->data->tx_queues[i];
 		if (!txq)
 			continue;
-		ci_txq_release_all_mbufs(txq);
+		ci_txq_release_all_mbufs(txq, false);
 		reset_tx_queue(txq);
 		dev->data->tx_queue_state[i] =
RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 0a890e587c..ad0ddf6a88 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1089,7 +1089,7 @@ ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		return -EINVAL;
 	}
 
-	ci_txq_release_all_mbufs(txq);
+	ci_txq_release_all_mbufs(txq, false);
 	ice_reset_tx_queue(txq);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -1152,7 +1152,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 		return -EINVAL;
 	}
 
-	ci_txq_release_all_mbufs(txq);
+	ci_txq_release_all_mbufs(txq, false);
 	txq->qtx_tail = NULL;
 
 	return 0;
@@ -1531,7 +1531,7 @@ ice_tx_queue_release(void *txq)
 		return;
 	}
 
-	ci_txq_release_all_mbufs(q);
+	ci_txq_release_all_mbufs(q, false);
 	rte_free(q->sw_ring);
 	rte_memzone_free(q->mz);
 	rte_free(q);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bf9d461b06..3b7a6a6f0e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2457,7 +2457,7 @@ static void __rte_cold
 ixgbe_tx_queue_release(struct ci_tx_queue *txq)
 {
 	if (txq != NULL && txq->ops != NULL) {
-		ci_txq_release_all_mbufs(txq);
+		ci_txq_release_all_mbufs(txq, false);
 		txq->ops->free_swring(txq);
 		rte_memzone_free(txq->mz);
 		rte_free(txq);
@@ -3364,7 +3364,7 @@ ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
 		struct ci_tx_queue *txq = dev->data->tx_queues[i];
 
 		if (txq != NULL) {
-			ci_txq_release_all_mbufs(txq);
+			ci_txq_release_all_mbufs(txq, false);
 			txq->ops->reset(txq);
 			dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 		}
@@ -5639,7 +5639,7 @@ ixgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	if (txq->ops != NULL) {
-		ci_txq_release_all_mbufs(txq);
+		ci_txq_release_all_mbufs(txq, false);
 		txq->ops->reset(txq);
 	}
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;

From patchwork Tue Dec 3 16:41:24 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 149002
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Anatoly Burakov, Konstantin Ananyev
Subject: [PATCH v2 18/22] net/ice: use vector SW ring for all vector paths
Date: Tue, 3 Dec 2024 16:41:24 +0000
Message-ID: <20241203164132.2686558-19-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

The AVX-512 code path used a smaller SW ring structure containing only the mbuf pointer and no other fields. Those other fields are used only in the scalar code path, so update all vector driver code paths to use the smaller, faster structure.
Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/tx.h        |  7 +++++++
 drivers/net/ice/ice_rxtx.c            |  2 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c   | 12 ++++++------
 drivers/net/ice/ice_rxtx_vec_avx512.c | 14 ++------------
 drivers/net/ice/ice_rxtx_vec_common.h |  6 ------
 drivers/net/ice/ice_rxtx_vec_sse.c    | 12 ++++++------
 6 files changed, 22 insertions(+), 31 deletions(-)

diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index 310b51adcf..aa42b9b49f 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -109,6 +109,13 @@ ci_tx_backlog_entry(struct ci_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_
 		txep[i].mbuf = tx_pkts[i];
 }
 
+static __rte_always_inline void
+ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	for (uint16_t i = 0; i < nb_pkts; ++i)
+		txep[i].mbuf = tx_pkts[i];
+}
+
 #define IETH_VPMD_TX_MAX_FREE_BUF 64
 
 typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index ad0ddf6a88..77cb6688a7 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	/* record what kind of descriptor cleanup we need on teardown */
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = ad->tx_use_avx512;
+	txq->vector_sw_ring = txq->vector_tx;
 
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 12ffa0fa9a..98bab322b4 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -858,7 +858,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct ice_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n,
nb_commit, tx_id;
 	uint64_t flags = ICE_TD_CMD;
 	uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
 
@@ -867,7 +867,7 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		ice_tx_free_bufs_vec(txq);
+		ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
 
 	nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	if (unlikely(nb_pkts == 0))
@@ -875,13 +875,13 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->ice_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		ice_vtx(txdp, tx_pkts, n - 1, flags, offload);
 		tx_pkts += (n - 1);
@@ -896,10 +896,10 @@ ice_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* avoid reach the end of ring */
 		txdp = &txq->ice_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	ice_vtx(txdp, tx_pkts, nb_commit, flags, offload);
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index f6ec593f96..481f784e34 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -924,16 +924,6 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
 	}
 }
 
-static __rte_always_inline void
-ice_tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
-			    struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
-	int i;
-
-	for (i = 0; i < (int)nb_pkts; ++i)
-		txep[i].mbuf = tx_pkts[i];
-}
-
 static __rte_always_inline uint16_t
 ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, bool
do_offload)
@@ -964,7 +954,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ice_tx_backlog_entry_avx512(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		ice_vtx(txdp, tx_pkts, n - 1, flags, do_offload);
 		tx_pkts += (n - 1);
@@ -982,7 +972,7 @@ ice_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 		txep = (void *)txq->sw_ring;
 	}
 
-	ice_tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	ice_vtx(txdp, tx_pkts, nb_commit, flags, do_offload);
 
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 907828b675..aa709fb51c 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -20,12 +20,6 @@ ice_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
 			rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE);
 }
 
-static __rte_always_inline int
-ice_tx_free_bufs_vec(struct ci_tx_queue *txq)
-{
-	return ci_tx_free_bufs(txq, ice_tx_desc_done);
-}
-
 static inline void
 _ice_rx_queue_release_mbufs_vec(struct ice_rx_queue *rxq)
 {
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index bff39c28d8..73e3e9eb54 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -699,7 +699,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct ice_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	uint64_t flags = ICE_TD_CMD;
 	uint64_t rs = ICE_TX_DESC_CMD_RS | ICE_TD_CMD;
 
@@ -709,7 +709,7 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 	nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		ice_tx_free_bufs_vec(txq);
+		ci_tx_free_bufs_vec(txq, ice_tx_desc_done, false);
 
 	nb_pkts =
(uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	nb_commit = nb_pkts;
@@ -718,13 +718,13 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->ice_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
 			ice_vtx1(txdp, *tx_pkts, flags);
@@ -738,10 +738,10 @@ ice_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* avoid reach the end of ring */
 		txdp = &txq->ice_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	ice_vtx(txdp, tx_pkts, nb_commit, flags);

From patchwork Tue Dec 3 16:41:25 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 149003
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, David Christensen, Konstantin Ananyev, Wathsala Vithanage
Subject: [PATCH v2 19/22] net/i40e: use vector SW ring for all vector paths
Date: Tue, 3 Dec 2024 16:41:25 +0000
Message-ID: <20241203164132.2686558-20-bruce.richardson@intel.com>
In-Reply-To: <20241203164132.2686558-1-bruce.richardson@intel.com>
References: <20241122125418.2857301-1-bruce.richardson@intel.com> <20241203164132.2686558-1-bruce.richardson@intel.com>

The AVX-512 code path used a smaller SW ring structure containing only the mbuf pointer and no other fields.
The other fields are used only in the scalar code path, so update all vector driver code paths (AVX2, SSE, Neon, Altivec) to use the smaller, faster structure.

Signed-off-by: Bruce Richardson
---
 drivers/net/i40e/i40e_rxtx.c             |  8 +++++---
 drivers/net/i40e/i40e_rxtx_vec_altivec.c | 12 ++++++------
 drivers/net/i40e/i40e_rxtx_vec_avx2.c    | 12 ++++++------
 drivers/net/i40e/i40e_rxtx_vec_avx512.c  | 14 ++------------
 drivers/net/i40e/i40e_rxtx_vec_common.h  |  6 ------
 drivers/net/i40e/i40e_rxtx_vec_neon.c    | 12 ++++++------
 drivers/net/i40e/i40e_rxtx_vec_sse.c     | 12 ++++++------
 7 files changed, 31 insertions(+), 45 deletions(-)

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 081d743e62..745c467912 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			tx_queue_id);
 
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = ad->tx_use_avx512;
+	txq->vector_sw_ring = txq->vector_tx;
 
 	/*
 	 * tx_queue_id is queue id application refers to, while
@@ -3550,9 +3550,11 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
 		}
 	}
 
+	if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
+		ad->tx_vec_allowed = false;
+
 	if (ad->tx_simple_allowed) {
-		if (ad->tx_vec_allowed &&
-		    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		if (ad->tx_vec_allowed) {
 #ifdef RTE_ARCH_X86
 			if (ad->tx_use_avx512) {
 #ifdef CC_AVX512_SUPPORT
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 500bba2cef..b6900a3e15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -553,14 +553,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct i40e_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	uint64_t flags =
I40E_TD_CMD;
 	uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
 	int i;
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		i40e_tx_free_bufs(txq);
+		ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
 
 	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	nb_commit = nb_pkts;
@@ -569,13 +569,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->i40e_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
 			vtx1(txdp, *tx_pkts, flags);
@@ -589,10 +589,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* avoid reach the end of ring */
 		txdp = &txq->i40e_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	vtx(txdp, tx_pkts, nb_commit, flags);
 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 29bef64287..2477573c01 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -745,13 +745,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct i40e_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	uint64_t flags = I40E_TD_CMD;
 	uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		i40e_tx_free_bufs(txq);
+		ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
 
 	nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	if (unlikely(nb_pkts == 0))
@@ -759,13 +759,13 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct
rte_mbuf **tx_pkts,
 	tx_id = txq->tx_tail;
 	txdp = &txq->i40e_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		vtx(txdp, tx_pkts, n - 1, flags);
 		tx_pkts += (n - 1);
@@ -780,10 +780,10 @@ i40e_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* avoid reach the end of ring */
 		txdp = &txq->i40e_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	vtx(txdp, tx_pkts, nb_commit, flags);
 
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index c555c3491d..2497e6a8f0 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -807,16 +807,6 @@ vtx(volatile struct i40e_tx_desc *txdp,
 	}
 }
 
-static __rte_always_inline void
-tx_backlog_entry_avx512(struct ci_tx_entry_vec *txep,
-			struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
-	int i;
-
-	for (i = 0; i < (int)nb_pkts; ++i)
-		txep[i].mbuf = tx_pkts[i];
-}
-
 static inline uint16_t
 i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 				 uint16_t nb_pkts)
@@ -844,7 +834,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		tx_backlog_entry_avx512(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		vtx(txdp, tx_pkts, n - 1, flags);
 		tx_pkts += (n - 1);
@@ -862,7 +852,7 @@ i40e_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 		txep = (void *)txq->sw_ring;
 	}
 
-	tx_backlog_entry_avx512(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	vtx(txdp, tx_pkts, nb_commit, flags);
 
diff --git
a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 907d32dd0b..733dc797cd 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -24,12 +24,6 @@ i40e_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
 			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
 }
 
-static __rte_always_inline int
-i40e_tx_free_bufs(struct ci_tx_queue *txq)
-{
-	return ci_tx_free_bufs(txq, i40e_tx_desc_done);
-}
-
 static inline void
 _i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
 {
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index c97f337e43..b398d66154 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -681,14 +681,14 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct i40e_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	uint64_t flags = I40E_TD_CMD;
 	uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD;
 	int i;
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		i40e_tx_free_bufs(txq);
+		ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false);
 
 	nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	if (unlikely(nb_pkts == 0))
@@ -696,13 +696,13 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->i40e_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
 			vtx1(txdp, *tx_pkts, flags);
@@ -716,10 +716,10 @@ i40e_xmit_fixed_burst_vec(void *__rte_restrict tx_queue,
 
 		/* avoid reach the end of ring */
 		txdp = &txq->i40e_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+
txep = &txq->sw_ring_vec[tx_id]; } - ci_tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit); vtx(txdp, tx_pkts, nb_commit, flags); diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c index 2c467e2089..90c57e59d0 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_sse.c +++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c @@ -700,14 +700,14 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, { struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue; volatile struct i40e_tx_desc *txdp; - struct ci_tx_entry *txep; + struct ci_tx_entry_vec *txep; uint16_t n, nb_commit, tx_id; uint64_t flags = I40E_TD_CMD; uint64_t rs = I40E_TX_DESC_CMD_RS | I40E_TD_CMD; int i; if (txq->nb_tx_free < txq->tx_free_thresh) - i40e_tx_free_bufs(txq); + ci_tx_free_bufs_vec(txq, i40e_tx_desc_done, false); nb_commit = nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts); if (unlikely(nb_pkts == 0)) @@ -715,13 +715,13 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, tx_id = txq->tx_tail; txdp = &txq->i40e_tx_ring[tx_id]; - txep = &txq->sw_ring[tx_id]; + txep = &txq->sw_ring_vec[tx_id]; txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts); n = (uint16_t)(txq->nb_tx_desc - tx_id); if (nb_commit >= n) { - ci_tx_backlog_entry(txep, tx_pkts, n); + ci_tx_backlog_entry_vec(txep, tx_pkts, n); for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp) vtx1(txdp, *tx_pkts, flags); @@ -735,10 +735,10 @@ i40e_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, /* avoid reach the end of ring */ txdp = &txq->i40e_tx_ring[tx_id]; - txep = &txq->sw_ring[tx_id]; + txep = &txq->sw_ring_vec[tx_id]; } - ci_tx_backlog_entry(txep, tx_pkts, nb_commit); + ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit); vtx(txdp, tx_pkts, nb_commit, flags); From patchwork Tue Dec 3 16:41:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson 
X-Patchwork-Id: 149004
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Vladimir Medvedkin, Ian Stokes, Konstantin Ananyev
Subject: [PATCH v2 20/22] net/iavf: use vector SW ring for all vector paths
Date: Tue, 3 Dec 2024 16:41:26 +0000
Message-ID: <20241203164132.2686558-21-bruce.richardson@intel.com>

The AVX-512 code path used a smaller SW ring structure, containing only the
mbuf pointer and no other fields. The other fields are only used in the
scalar code path, so update all vector driver code paths (AVX2, SSE) to use
the smaller, faster structure.

Signed-off-by: Bruce Richardson
---
 drivers/net/iavf/iavf_rxtx.c            |  7 -------
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 12 ++++++------
 drivers/net/iavf/iavf_rxtx_vec_avx512.c |  8 --------
 drivers/net/iavf/iavf_rxtx_vec_common.h |  6 ------
 drivers/net/iavf/iavf_rxtx_vec_sse.c    | 14 +++++++-------
 5 files changed, 13 insertions(+), 34 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f0ab881ac5..6692f6992b 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -4193,14 +4193,7 @@ iavf_set_tx_function(struct rte_eth_dev *dev)
 			txq = dev->data->tx_queues[i];
 			if (!txq)
 				continue;
-#ifdef CC_AVX512_SUPPORT
-			if (use_avx512)
-				iavf_txq_vec_setup_avx512(txq);
-			else
-				iavf_txq_vec_setup(txq);
-#else
 			iavf_txq_vec_setup(txq);
-#endif
 		}
 
 		if (no_poll_on_link_down) {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index fdb98b417a..b847886081 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -1736,14 +1736,14 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct iavf_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	/* bit2 is reserved and must be set to 1 according to Spec */
 	uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
 	uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		iavf_tx_free_bufs(txq);
+		ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
 
 	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	if (unlikely(nb_pkts == 0))
@@ -1752,13 +1752,13 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->iavf_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		iavf_vtx(txdp, tx_pkts, n - 1, flags, offload);
 		tx_pkts += (n - 1);
@@ -1773,10 +1773,10 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* avoid reach the end of ring */
 		txdp = &txq->iavf_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	iavf_vtx(txdp, tx_pkts, nb_commit, flags, offload);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 007759e451..641f3311eb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -2357,14 +2357,6 @@ iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return iavf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts, false);
 }
 
-int __rte_cold
-iavf_txq_vec_setup_avx512(struct ci_tx_queue *txq)
-{
-	txq->vector_tx = true;
-	txq->vector_sw_ring = true;
-	return 0;
-}
-
 uint16_t
 iavf_xmit_pkts_vec_avx512_offload(void *tx_queue, struct rte_mbuf **tx_pkts,
 				  uint16_t nb_pkts)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 6f94587eee..c69399a173 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -24,12 +24,6 @@ iavf_tx_desc_done(struct ci_tx_queue *txq, uint16_t idx)
 		rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
 }
 
-static __rte_always_inline int
-iavf_tx_free_bufs(struct ci_tx_queue *txq)
-{
-	return ci_tx_free_bufs(txq, iavf_tx_desc_done);
-}
-
 static inline void
 _iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq)
 {
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 3adf2a59e4..9f7db80bfd 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1368,14 +1368,14 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct ci_tx_queue *txq = (struct ci_tx_queue *)tx_queue;
 	volatile struct iavf_tx_desc *txdp;
-	struct ci_tx_entry *txep;
+	struct ci_tx_entry_vec *txep;
 	uint16_t n, nb_commit, tx_id;
 	uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04; /* bit 2 must be set */
 	uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
 	int i;
 
 	if (txq->nb_tx_free < txq->tx_free_thresh)
-		iavf_tx_free_bufs(txq);
+		ci_tx_free_bufs_vec(txq, iavf_tx_desc_done, false);
 
 	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
 	if (unlikely(nb_pkts == 0))
@@ -1384,13 +1384,13 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	tx_id = txq->tx_tail;
 	txdp = &txq->iavf_tx_ring[tx_id];
-	txep = &txq->sw_ring[tx_id];
+	txep = &txq->sw_ring_vec[tx_id];
 
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		ci_tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
 			vtx1(txdp, *tx_pkts, flags);
@@ -1404,10 +1404,10 @@ iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* avoid reach the end of ring */
 		txdp = &txq->iavf_tx_ring[tx_id];
-		txep = &txq->sw_ring[tx_id];
+		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	ci_tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	iavf_vtx(txdp, tx_pkts, nb_commit, flags);
 
@@ -1462,7 +1462,7 @@ int __rte_cold
 iavf_txq_vec_setup(struct ci_tx_queue *txq)
 {
 	txq->vector_tx = true;
-	txq->vector_sw_ring = false;
+	txq->vector_sw_ring = txq->vector_tx;
 	return 0;
 }

From patchwork Tue Dec 3 16:41:27 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 149005
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Ian Stokes, Konstantin Ananyev, Vladimir Medvedkin, Anatoly Burakov
Subject: [PATCH v2 21/22] net/_common_intel: remove unneeded code
Date: Tue, 3 Dec 2024 16:41:27 +0000
Message-ID: <20241203164132.2686558-22-bruce.richardson@intel.com>

With all drivers using the common Tx structure updated so that their vector
paths all use the simplified Tx mbuf ring format, it's no longer necessary to
have one flag for the ring format and another for use of a vector driver.
Remove the former flag and base all decisions off the vector flag. With that
done, there are only two paths to consider when releasing all mbufs in the
ring, rather than three.
That allows further simplification of the "ci_txq_release_all_mbufs" function.
The separate function to free buffers from a vector driver not using the
simplified ring format can similarly be removed, as it is no longer necessary.

Signed-off-by: Bruce Richardson
---
 drivers/net/_common_intel/tx.h            | 97 +++--------------------
 drivers/net/i40e/i40e_rxtx.c              |  1 -
 drivers/net/iavf/iavf_rxtx_vec_sse.c      |  1 -
 drivers/net/ice/ice_rxtx.c                |  1 -
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h |  1 -
 5 files changed, 10 insertions(+), 91 deletions(-)

diff --git a/drivers/net/_common_intel/tx.h b/drivers/net/_common_intel/tx.h
index aa42b9b49f..d9cf4474fc 100644
--- a/drivers/net/_common_intel/tx.h
+++ b/drivers/net/_common_intel/tx.h
@@ -66,7 +66,6 @@ struct ci_tx_queue {
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool q_set; /* indicate if tx queue has been configured */
 	bool vector_tx; /* port is using vector TX */
-	bool vector_sw_ring; /* port is using vectorized SW ring (ieth_tx_entry_vec) */
 	union { /* the VSI this queue belongs to */
 		struct i40e_vsi *i40e_vsi;
 		struct iavf_vsi *iavf_vsi;
@@ -120,72 +119,6 @@ ci_tx_backlog_entry_vec(struct ci_tx_entry_vec *txep, struct rte_mbuf **tx_pkts,
 
 typedef int (*ci_desc_done_fn)(struct ci_tx_queue *txq, uint16_t idx);
 
-static __rte_always_inline int
-ci_tx_free_bufs(struct ci_tx_queue *txq, ci_desc_done_fn desc_done)
-{
-	struct ci_tx_entry *txep;
-	uint32_t n;
-	uint32_t i;
-	int nb_free = 0;
-	struct rte_mbuf *m, *free[IETH_VPMD_TX_MAX_FREE_BUF];
-
-	/* check DD bits on threshold descriptor */
-	if (!desc_done(txq, txq->tx_next_dd))
-		return 0;
-
-	n = txq->tx_rs_thresh;
-
-	/* first buffer to free from S/W ring is at index
-	 * tx_next_dd - (tx_rs_thresh-1)
-	 */
-	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
-
-	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
-		for (i = 0; i < n; i++) {
-			free[i] = txep[i].mbuf;
-			/* no need to reset txep[i].mbuf in vector path */
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
-		goto done;
-	}
-
-	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
-	if (likely(m != NULL)) {
-		free[0] = m;
-		nb_free = 1;
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (likely(m != NULL)) {
-				if (likely(m->pool == free[0]->pool)) {
-					free[nb_free++] = m;
-				} else {
-					rte_mempool_put_bulk(free[0]->pool,
-							     (void *)free,
-							     nb_free);
-					free[0] = m;
-					nb_free = 1;
-				}
-			}
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
-	} else {
-		for (i = 1; i < n; i++) {
-			m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
-			if (m != NULL)
-				rte_mempool_put(m->pool, m);
-		}
-	}
-
-done:
-	/* buffers were freed, update counters */
-	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
-	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
-	if (txq->tx_next_dd >= txq->nb_tx_desc)
-		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-
-	return txq->tx_rs_thresh;
-}
-
 static __rte_always_inline int
 ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx_descs)
 {
@@ -278,21 +211,6 @@ ci_tx_free_bufs_vec(struct ci_tx_queue *txq, ci_desc_done_fn desc_done, bool ctx
 	return txq->tx_rs_thresh;
 }
 
-#define IETH_FREE_BUFS_LOOP(swr, nb_desc, start, end) do { \
-	uint16_t i = start; \
-	if (end < i) { \
-		for (; i < nb_desc; i++) { \
-			rte_pktmbuf_free_seg(swr[i].mbuf); \
-			swr[i].mbuf = NULL; \
-		} \
-		i = 0; \
-	} \
-	for (; i < end; i++) { \
-		rte_pktmbuf_free_seg(swr[i].mbuf); \
-		swr[i].mbuf = NULL; \
-	} \
-} while (0)
-
 static inline void
 ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 {
@@ -311,16 +229,21 @@ ci_txq_release_all_mbufs(struct ci_tx_queue *txq, bool use_ctx)
 
 	/**
 	 * vPMD tx will not set sw_ring's mbuf to NULL after free,
-	 * so need to free remains more carefully.
+	 * so determining buffers to free is a little more complex.
 	 */
 	const uint16_t start = (txq->tx_next_dd - txq->tx_rs_thresh + 1) >> use_ctx;
 	const uint16_t nb_desc = txq->nb_tx_desc >> use_ctx;
 	const uint16_t end = txq->tx_tail >> use_ctx;
 
-	if (txq->vector_sw_ring)
-		IETH_FREE_BUFS_LOOP(txq->sw_ring_vec, nb_desc, start, end);
-	else
-		IETH_FREE_BUFS_LOOP(txq->sw_ring, nb_desc, start, end);
+	uint16_t i = start;
+	if (end < i) {
+		for (; i < nb_desc; i++)
+			rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+		i = 0;
+	}
+	for (; i < end; i++)
+		rte_pktmbuf_free_seg(txq->sw_ring_vec[i].mbuf);
+	memset(txq->sw_ring_vec, 0, sizeof(txq->sw_ring_vec[0]) * nb_desc);
 }
 
 #endif /* _COMMON_INTEL_TX_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 745c467912..c3ff2e05c3 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1891,7 +1891,6 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			tx_queue_id);
 
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	/*
 	 * tx_queue_id is queue id application refers to, while
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 9f7db80bfd..21d5bfd309 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -1462,7 +1462,6 @@ int __rte_cold
 iavf_txq_vec_setup(struct ci_tx_queue *txq)
 {
 	txq->vector_tx = true;
-	txq->vector_sw_ring = txq->vector_tx;
 	return 0;
 }
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 77cb6688a7..dcfa409813 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -825,7 +825,6 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	/* record what kind of descriptor cleanup we need on teardown */
 	txq->vector_tx = ad->tx_vec_allowed;
-	txq->vector_sw_ring = txq->vector_tx;
 
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 65794e45cb..3d4840c3b7 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -183,7 +183,6 @@ ixgbe_txq_vec_setup_default(struct ci_tx_queue *txq,
 	txq->sw_ring_vec = txq->sw_ring_vec + 1;
 	txq->ops = txq_ops;
 	txq->vector_tx = 1;
-	txq->vector_sw_ring = 1;
 	return 0;
 }

From patchwork Tue Dec 3 16:41:28 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 149006
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson, Anatoly Burakov, Vladimir Medvedkin, Wathsala Vithanage, Konstantin Ananyev
Subject: [PATCH v2 22/22] net/ixgbe: use common Tx backlog entry fn
Date: Tue, 3 Dec 2024 16:41:28 +0000
Message-ID: <20241203164132.2686558-23-bruce.richardson@intel.com>

Remove the custom vector Tx backlog entry function and use the standard
intel_common one, now that all vector drivers are using the same, smaller
ring structure.
Signed-off-by: Bruce Richardson
---
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 10 ----------
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c   |  4 ++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c    |  4 ++--
 3 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index 3d4840c3b7..7316fc6c3b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -68,16 +68,6 @@ ixgbe_tx_free_bufs(struct ci_tx_queue *txq)
 	return txq->tx_rs_thresh;
 }
 
-static __rte_always_inline void
-tx_backlog_entry(struct ci_tx_entry_vec *txep,
-		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-{
-	int i;
-
-	for (i = 0; i < (int)nb_pkts; ++i)
-		txep[i].mbuf = tx_pkts[i];
-}
-
 static inline void
 _ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index 2ccb399b64..f879f6fa9a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -597,7 +597,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
 			vtx1(txdp, *tx_pkts, flags);
@@ -614,7 +614,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	vtx(txdp, tx_pkts, nb_commit, flags);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index fa26365f06..915358e16b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -720,7 +720,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	n = (uint16_t)(txq->nb_tx_desc - tx_id);
 	if (nb_commit >= n) {
-		tx_backlog_entry(txep, tx_pkts, n);
+		ci_tx_backlog_entry_vec(txep, tx_pkts, n);
 
 		for (i = 0; i < n - 1; ++i, ++tx_pkts, ++txdp)
 			vtx1(txdp, *tx_pkts, flags);
@@ -737,7 +737,7 @@ ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 		txep = &txq->sw_ring_vec[tx_id];
 	}
 
-	tx_backlog_entry(txep, tx_pkts, nb_commit);
+	ci_tx_backlog_entry_vec(txep, tx_pkts, nb_commit);
 
 	vtx(txdp, tx_pkts, nb_commit, flags);