From patchwork Mon Oct 4 13:55:57 2021
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 100439
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: xiaoyun.li@intel.com, anoobj@marvell.com, jerinj@marvell.com, ndabilpuram@marvell.com, adwivedi@marvell.com, shepard.siegel@atomicrules.com, ed.czeck@atomicrules.com, john.miller@atomicrules.com, irusskikh@marvell.com, ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com, rahul.lakkireddy@chelsio.com, hemant.agrawal@nxp.com, sachin.saxena@oss.nxp.com, haiyue.wang@intel.com, johndale@cisco.com, hyonkim@cisco.com, qi.z.zhang@intel.com, xiao.w.wang@intel.com, humin29@huawei.com, yisen.zhuang@huawei.com, oulijun@huawei.com, beilei.xing@intel.com, jingjing.wu@intel.com, qiming.yang@intel.com, matan@nvidia.com, viacheslavo@nvidia.com,
sthemmin@microsoft.com, longli@microsoft.com, heinrich.kuhn@corigine.com, kirankumark@marvell.com, andrew.rybchenko@oktetlabs.ru, mczekaj@marvell.com, jiawenwu@trustnetic.com, jianwang@trustnetic.com, maxime.coquelin@redhat.com, chenbo.xia@intel.com, thomas@monjalon.net, ferruh.yigit@intel.com, mdr@ashroe.eu, jay.jayatheerthan@intel.com, Konstantin Ananyev
Date: Mon, 4 Oct 2021 14:55:57 +0100
Message-Id: <20211004135603.20593-2-konstantin.ananyev@intel.com>
In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com>
References: <20211001140255.5726-1-konstantin.ananyev@intel.com> <20211004135603.20593-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 1/7] ethdev: allocate max space for internal queue array

At the queue configure stage, always allocate space for the maximum possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers. That allows 'fast' inline functions (eth_rx_burst, etc.) to refer to internal queue data without extra checks on the current number of configured queues. In the future, that will help to hide rte_eth_dev and related structures.
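The scheme can be sketched outside of DPDK as follows. This is a minimal illustration, not the actual rte_ethdev code: port_data, rx_queue_config and MAX_QUEUES_PER_PORT are hypothetical stand-ins for dev->data, eth_dev_rx_queue_config() and RTE_MAX_QUEUES_PER_PORT, and plain calloc()/free() stand in for rte_zmalloc() and the driver's queue_release op.

```c
#include <stdlib.h>

#define MAX_QUEUES_PER_PORT 1024	/* stands in for RTE_MAX_QUEUES_PER_PORT */

struct port_data {			/* stands in for rte_eth_dev_data */
	void **rx_queues;
	unsigned int nb_rx_queues;
};

/*
 * First-time configuration reserves slots for the maximum number of
 * queues up front. Later reconfiguration never needs rte_realloc():
 * shrinking just releases and NULLs the trailing slots in place, so
 * fast-path code can always index the array safely.
 */
static int
rx_queue_config(struct port_data *data, unsigned int nb_queues)
{
	unsigned int i;

	if (data->rx_queues == NULL && nb_queues != 0) {
		/* first time configuration: allocate max space once */
		data->rx_queues = calloc(MAX_QUEUES_PER_PORT,
				sizeof(data->rx_queues[0]));
		if (data->rx_queues == NULL)
			return -1;
	} else if (data->rx_queues != NULL) {
		/* release queues beyond the new count; no realloc needed */
		for (i = nb_queues; i < data->nb_rx_queues; i++) {
			free(data->rx_queues[i]);	/* driver release op in real code */
			data->rx_queues[i] = NULL;
		}
	}
	data->nb_rx_queues = nb_queues;
	return 0;
}
```

The cost is a fixed array of RTE_MAX_QUEUES_PER_PORT pointers per port instead of an exactly-sized one; the gain is a stable array address, which is what lets the inline fast-path functions stop re-checking the configured queue count.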
Signed-off-by: Konstantin Ananyev
---
 lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
 1 file changed, 9 insertions(+), 27 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..424bc260fa 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -898,7 +898,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	if (dev->data->rx_queues == NULL && nb_queues != 0) {
 		/* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) * nb_queues,
+				sizeof(dev->data->rx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
 				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
@@ -909,21 +910,11 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		rxq = dev->data->rx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (rxq == NULL)
-			return -(ENOMEM);
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(rxq + old_nb_queues, 0,
-				sizeof(rxq[0]) * new_qs);
+			rxq[i] = NULL;
 		}
-
-		dev->data->rx_queues = rxq;
-
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
 					-ENOTSUP);
@@ -1138,8 +1129,9 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 	if (dev->data->tx_queues == NULL && nb_queues != 0) {
 		/* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-				sizeof(dev->data->tx_queues[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
+				sizeof(dev->data->tx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -1149,21 +1141,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		txq = dev->data->tx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (txq == NULL)
-			return -ENOMEM;
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(txq + old_nb_queues, 0,
-				sizeof(txq[0]) * new_qs);
+			txq[i] = NULL;
 		}
-
-		dev->data->tx_queues = txq;
-
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
 					-ENOTSUP);

From patchwork Mon Oct 4 13:55:58 2021
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 100433
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: xiaoyun.li@intel.com,
anoobj@marvell.com, jerinj@marvell.com, ndabilpuram@marvell.com, adwivedi@marvell.com, shepard.siegel@atomicrules.com, ed.czeck@atomicrules.com, john.miller@atomicrules.com, irusskikh@marvell.com, ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com, rahul.lakkireddy@chelsio.com, hemant.agrawal@nxp.com, sachin.saxena@oss.nxp.com, haiyue.wang@intel.com, johndale@cisco.com, hyonkim@cisco.com, qi.z.zhang@intel.com, xiao.w.wang@intel.com, humin29@huawei.com, yisen.zhuang@huawei.com, oulijun@huawei.com, beilei.xing@intel.com, jingjing.wu@intel.com, qiming.yang@intel.com, matan@nvidia.com, viacheslavo@nvidia.com, sthemmin@microsoft.com, longli@microsoft.com, heinrich.kuhn@corigine.com, kirankumark@marvell.com, andrew.rybchenko@oktetlabs.ru, mczekaj@marvell.com, jiawenwu@trustnetic.com, jianwang@trustnetic.com, maxime.coquelin@redhat.com, chenbo.xia@intel.com, thomas@monjalon.net, ferruh.yigit@intel.com, mdr@ashroe.eu, jay.jayatheerthan@intel.com, Konstantin Ananyev
Date: Mon, 4 Oct 2021 14:55:58 +0100
Message-Id: <20211004135603.20593-3-konstantin.ananyev@intel.com>
In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com>
References: <20211001140255.5726-1-konstantin.ananyev@intel.com> <20211004135603.20593-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count

Currently, the majority of 'fast' ethdev ops take pointers to internal queue data structures as an input parameter, while eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue index. For future work to hide rte_eth_devices[] and friends, it would be preferable to unify the parameter lists of all 'fast' ethdev ops.
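A minimal sketch of the unified calling convention, using hypothetical demo_* types in place of the real rte_eth_dev and driver structures: the driver callback receives the internal queue pointer directly, and the public inline wrapper resolves queue_id to that pointer once, so only the wrapper ever needs the device structure.

```c
#include <stdint.h>
#include <stddef.h>

/* New-style callback: takes the internal queue pointer directly,
 * matching the other 'fast' ops such as rx_pkt_burst. The old style
 * was uint32_t (*)(struct rte_eth_dev *dev, uint16_t rx_queue_id). */
typedef uint32_t (*rx_queue_count_t)(void *rx_queue);

struct demo_rxq {		/* hypothetical driver Rx queue */
	uint32_t prod;
	uint32_t cons;
};

/* Driver implementation no longer indexes dev->data->rx_queues[]. */
static uint32_t
demo_rx_queue_count(void *rx_queue)
{
	struct demo_rxq *q = rx_queue;

	return q->prod - q->cons;	/* used descriptors, mod arith */
}

struct demo_dev {		/* stands in for rte_eth_dev */
	rx_queue_count_t rx_queue_count;
	void *rx_queues[8];
	uint16_t nb_rx_queues;
};

/* The public inline wrapper resolves queue_id -> queue pointer once
 * and validates it, keeping the per-driver fast path minimal. */
static inline uint32_t
demo_eth_rx_queue_count(struct demo_dev *dev, uint16_t queue_id)
{
	if (queue_id >= dev->nb_rx_queues ||
			dev->rx_queues[queue_id] == NULL)
		return 0;
	return dev->rx_queue_count(dev->rx_queues[queue_id]);
}
```

This is also why the change is binary-incompatible despite being source-transparent: the callback type is baked into the public inline rte_eth_rx_queue_count(), so applications compiled against the old layout would call drivers with the wrong argument.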
This patch changes eth_rx_queue_count() to accept a pointer to internal queue data as its input parameter. While this change is transparent to the user, it still counts as an ABI change, as eth_rx_queue_count_t is used by the public ethdev inline function rte_eth_rx_queue_count().

Signed-off-by: Konstantin Ananyev
---
 doc/guides/rel_notes/release_21_11.rst | 6 ++++++
 drivers/net/ark/ark_ethdev_rx.c | 4 ++--
 drivers/net/ark/ark_ethdev_rx.h | 3 +--
 drivers/net/atlantic/atl_ethdev.h | 2 +-
 drivers/net/atlantic/atl_rxtx.c | 9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c | 8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c | 9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c | 9 ++++-----
 drivers/net/e1000/e1000_ethdev.h | 6 ++----
 drivers/net/e1000/em_rxtx.c | 4 ++--
 drivers/net/e1000/igb_rxtx.c | 4 ++--
 drivers/net/enic/enic_ethdev.c | 12 ++++++------
 drivers/net/fm10k/fm10k.h | 2 +-
 drivers/net/fm10k/fm10k_rxtx.c | 4 ++--
 drivers/net/hns3/hns3_rxtx.c | 7 +++++--
 drivers/net/hns3/hns3_rxtx.h | 2 +-
 drivers/net/i40e/i40e_rxtx.c | 4 ++--
 drivers/net/i40e/i40e_rxtx.h | 3 +--
 drivers/net/iavf/iavf_rxtx.c | 4 ++--
 drivers/net/iavf/iavf_rxtx.h | 2 +-
 drivers/net/ice/ice_rxtx.c | 4 ++--
 drivers/net/ice/ice_rxtx.h | 2 +-
 drivers/net/igc/igc_txrx.c | 5 ++---
 drivers/net/igc/igc_txrx.h | 3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h | 3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c | 4 ++--
 drivers/net/mlx5/mlx5_rx.c | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h | 2 +-
 drivers/net/netvsc/hn_rxtx.c | 4 ++--
 drivers/net/netvsc/hn_var.h | 2 +-
 drivers/net/nfp/nfp_rxtx.c | 4 ++--
 drivers/net/nfp/nfp_rxtx.h | 3 +--
 drivers/net/octeontx2/otx2_ethdev.h | 2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c | 8 ++++----
 drivers/net/sfc/sfc_ethdev.c | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c | 3 +--
 drivers/net/thunderx/nicvf_rxtx.c | 4 ++--
 drivers/net/thunderx/nicvf_rxtx.h | 2 +-
 drivers/net/txgbe/txgbe_ethdev.h | 3 +--
 drivers/net/txgbe/txgbe_rxtx.c | 4 ++--
 drivers/net/vhost/rte_eth_vhost.c | 4 ++--
lib/ethdev/rte_ethdev.h | 2 +- lib/ethdev/rte_ethdev_core.h | 3 +-- 43 files changed, 103 insertions(+), 110 deletions(-) diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 37dc1a7786..fd80538b6c 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -213,6 +213,12 @@ ABI Changes ``rte_security_ipsec_xform`` to allow applications to configure SA soft and hard expiry limits. Limits can be either in number of packets or bytes. +* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed. + Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer + to internal queue data as input parameter. While this change is transparent + to user, it still counts as an ABI change, as ``eth_rx_queue_count_t`` + is used by public inline function ``rte_eth_rx_queue_count``. + Known Issues ------------ diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c index d255f0177b..98658ce621 100644 --- a/drivers/net/ark/ark_ethdev_rx.c +++ b/drivers/net/ark/ark_ethdev_rx.c @@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue) } uint32_t -eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id) +eth_ark_dev_rx_queue_count(void *rx_queue) { struct ark_rx_queue *queue; - queue = dev->data->rx_queues[queue_id]; + queue = rx_queue; return (queue->prod_index - queue->cons_index); /* mod arith */ } diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h index c8dc340a8a..859fcf1e6f 100644 --- a/drivers/net/ark/ark_ethdev_rx.h +++ b/drivers/net/ark/ark_ethdev_rx.h @@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp); -uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +uint32_t eth_ark_dev_rx_queue_count(void *rx_queue); int eth_ark_rx_stop_queue(struct 
rte_eth_dev *dev, uint16_t queue_id); int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id); uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts, diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h index f547571b5c..e808460520 100644 --- a/drivers/net/atlantic/atl_ethdev.h +++ b/drivers/net/atlantic/atl_ethdev.h @@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); -uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id); +uint32_t atl_rx_queue_count(void *rx_queue); int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c index 7d367c9306..35bb13044e 100644 --- a/drivers/net/atlantic/atl_rxtx.c +++ b/drivers/net/atlantic/atl_rxtx.c @@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, /* Return Rx queue avail count */ uint32_t -atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +atl_rx_queue_count(void *rx_queue) { struct atl_rx_queue *rxq; PMD_INIT_FUNC_TRACE(); - if (rx_queue_id >= dev->data->nb_rx_queues) { - PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id); - return 0; - } - - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; if (rxq == NULL) return 0; diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 097dd10de9..e07242e961 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev) } static uint32_t -bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id) +bnxt_rx_queue_count_op(void *rx_queue) { - struct bnxt *bp = (struct bnxt *)dev->data->dev_private; + struct bnxt *bp; struct bnxt_cp_ring_info *cpr; 
uint32_t desc = 0, raw_cons, cp_ring_size; struct bnxt_rx_queue *rxq; struct rx_pkt_cmpl *rxcmp; int rc; + rxq = rx_queue; + bp = rxq->bp; + rc = is_bnxt_in_error(bp); if (rc) return rc; - rxq = dev->data->rx_queues[rx_queue_id]; cpr = rxq->cp_ring; raw_cons = cpr->cp_raw_cons; cp_ring_size = cpr->cp_ring_struct->ring_size; diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 36d8f9249d..b5589300c9 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused) } static uint32_t -dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +dpaa_dev_rx_queue_count(void *rx_queue) { - struct dpaa_if *dpaa_intf = dev->data->dev_private; - struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id]; + struct qman_fq *rxq = rx_queue; u32 frm_cnt = 0; PMD_INIT_FUNC_TRACE(); if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) { - DPAA_PMD_DEBUG("RX frame count for q(%d) is %u", - rx_queue_id, frm_cnt); + DPAA_PMD_DEBUG("RX frame count for q(%p) is %u", + rx_queue, frm_cnt); } return frm_cnt; } diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index c12169578e..b295af2a57 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused) } static uint32_t -dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +dpaa2_dev_rx_queue_count(void *rx_queue) { int32_t ret; - struct dpaa2_dev_priv *priv = dev->data->dev_private; struct dpaa2_queue *dpaa2_q; struct qbman_swp *swp; struct qbman_fq_query_np_rslt state; @@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) } swp = DPAA2_PER_LCORE_PORTAL; - dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id]; + dpaa2_q = rx_queue; if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) { frame_cnt = 
qbman_fq_state_frame_count(&state); - DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u", - rx_queue_id, frame_cnt); + DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u", + rx_queue, frame_cnt); } return frame_cnt; } diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h index 3b4d9c3ee6..460e130a83 100644 --- a/drivers/net/e1000/e1000_ethdev.h +++ b/drivers/net/e1000/e1000_ethdev.h @@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool); -uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +uint32_t eth_igb_rx_queue_count(void *rx_queue); int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset); @@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool); -uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +uint32_t eth_em_rx_queue_count(void *rx_queue); int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset); diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c index dfd8f2fd00..40de36cb20 100644 --- a/drivers/net/e1000/em_rxtx.c +++ b/drivers/net/e1000/em_rxtx.c @@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev, } uint32_t -eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +eth_em_rx_queue_count(void *rx_queue) { #define EM_RXQ_SCAN_INTERVAL 4 volatile struct e1000_rx_desc *rxdp; struct em_rx_queue *rxq; uint32_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &(rxq->rx_ring[rxq->rx_tail]); while ((desc < rxq->nb_rx_desc) && diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c index 278d5d2712..3210a0e008 100644 --- a/drivers/net/e1000/igb_rxtx.c +++ b/drivers/net/e1000/igb_rxtx.c @@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev, } 
uint32_t -eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +eth_igb_rx_queue_count(void *rx_queue) { #define IGB_RXQ_SCAN_INTERVAL 4 volatile union e1000_adv_rx_desc *rxdp; struct igb_rx_queue *rxq; uint32_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &(rxq->rx_ring[rxq->rx_tail]); while ((desc < rxq->nb_rx_desc) && diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c index 8d5797523b..5b2d60ad9c 100644 --- a/drivers/net/enic/enic_ethdev.c +++ b/drivers/net/enic/enic_ethdev.c @@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq) enic_free_rq(rxq); } -static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id) +static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue) { - struct enic *enic = pmd_priv(dev); + struct enic *enic; + struct vnic_rq *sop_rq; uint32_t queue_count = 0; struct vnic_cq *cq; uint32_t cq_tail; uint16_t cq_idx; - int rq_num; - rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id); - cq = &enic->cq[enic_cq_rq(enic, rq_num)]; + sop_rq = rx_queue; + enic = vnic_dev_priv(sop_rq->vdev); + cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)]; cq_idx = cq->to_clean; cq_tail = ioread32(&cq->ctrl->cq_tail); diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h index 916b856acc..648d12a1b4 100644 --- a/drivers/net/fm10k/fm10k.h +++ b/drivers/net/fm10k/fm10k.h @@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint32_t -fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id); +fm10k_dev_rx_queue_count(void *rx_queue); int fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset); diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c index 0a9a27aa5a..eab798e52c 100644 --- a/drivers/net/fm10k/fm10k_rxtx.c +++ b/drivers/net/fm10k/fm10k_rxtx.c @@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct 
rte_mbuf **rx_pkts, } uint32_t -fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +fm10k_dev_rx_queue_count(void *rx_queue) { #define FM10K_RXQ_SCAN_INTERVAL 4 volatile union fm10k_rx_desc *rxdp; struct fm10k_rx_queue *rxq; uint16_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &rxq->hw_ring[rxq->next_dd]; while ((desc < rxq->nb_desc) && rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) { diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 481872e395..04791ae7d0 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) } uint32_t -hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +hns3_rx_queue_count(void *rx_queue) { /* * Number of BDs that have been processed by the driver @@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) */ uint32_t driver_hold_bd_num; struct hns3_rx_queue *rxq; + const struct rte_eth_dev *dev; uint32_t fbd_num; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; + dev = &rte_eth_devices[rxq->port_id]; + fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG); if (dev->rx_pkt_burst == hns3_recv_pkts_vec || dev->rx_pkt_burst == hns3_recv_pkts_vec_sve) diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h index cd7c21c1d0..34a028701f 100644 --- a/drivers/net/hns3/hns3_rxtx.h +++ b/drivers/net/hns3/hns3_rxtx.h @@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc, struct rte_mempool *mp); int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc, unsigned int socket, const struct rte_eth_txconf *conf); -uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id); +uint32_t hns3_rx_queue_count(void *rx_queue); int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); int 
hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 3eb82578b0..5493ae6bba 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq) } uint32_t -i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +i40e_dev_rx_queue_count(void *rx_queue) { #define I40E_RXQ_SCAN_INTERVAL 4 volatile union i40e_rx_desc *rxdp; struct i40e_rx_queue *rxq; uint16_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &(rxq->rx_ring[rxq->rx_tail]); while ((desc < rxq->nb_rx_desc) && ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) & diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index 5ccf5773e8..a08b80f020 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt); int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq); void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq); -uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +uint32_t i40e_dev_rx_queue_count(void *rx_queue); int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset); int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index 87afc0b4cb..3dc1f04380 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, /* Get the number of used descriptors of a rx queue */ uint32_t -iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id) +iavf_dev_rxq_count(void *rx_queue) { #define IAVF_RXQ_SCAN_INTERVAL 4 
volatile union iavf_rx_desc *rxdp; struct iavf_rx_queue *rxq; uint16_t desc = 0; - rxq = dev->data->rx_queues[queue_id]; + rxq = rx_queue; rxdp = &rxq->rx_ring[rxq->rx_tail]; while ((desc < rxq->nb_rx_desc) && diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index e210b913d6..2f7bec2b63 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo); -uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id); +uint32_t iavf_dev_rxq_count(void *rx_queue); int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset); int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 5d7ab4f047..61936b0ab1 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, } uint32_t -ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +ice_rx_queue_count(void *rx_queue) { #define ICE_RXQ_SCAN_INTERVAL 4 volatile union ice_rx_flex_desc *rxdp; struct ice_rx_queue *rxq; uint16_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &rxq->rx_ring[rxq->rx_tail]; while ((desc < rxq->nb_rx_desc) && rte_le_to_cpu_16(rxdp->wb.status_error0) & diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index b10db0874d..b45abec91a 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, void ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq); void ice_set_tx_function(struct rte_eth_dev *dev); -uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t 
rx_queue_id); +uint32_t ice_rx_queue_count(void *rx_queue); void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c index b5489eedd2..437992ecdf 100644 --- a/drivers/net/igc/igc_txrx.c +++ b/drivers/net/igc/igc_txrx.c @@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq) igc_rx_queue_release(rxq); } -uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id) +uint32_t eth_igc_rx_queue_count(void *rx_queue) { /** * Check the DD bit of a rx descriptor of each 4 in a group, @@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev, struct igc_rx_queue *rxq; uint16_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &rxq->rx_ring[rxq->rx_tail]; while (desc < rxq->nb_rx_desc - rxq->rx_tail) { diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h index f2b2d75bbc..b0c4b3ebd9 100644 --- a/drivers/net/igc/igc_txrx.h +++ b/drivers/net/igc/igc_txrx.h @@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool); -uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +uint32_t eth_igc_rx_queue_count(void *rx_queue); int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset); diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h index a0ce18ca24..c5027be1dc 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.h +++ b/drivers/net/ixgbe/ixgbe_ethdev.h @@ -602,8 +602,7 @@ int ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); -uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +uint32_t ixgbe_dev_rx_queue_count(void *rx_queue); int 
ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset); diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index bfdfd5e755..1f802851e3 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, } uint32_t -ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +ixgbe_dev_rx_queue_count(void *rx_queue) { #define IXGBE_RXQ_SCAN_INTERVAL 4 volatile union ixgbe_adv_rx_desc *rxdp; struct ixgbe_rx_queue *rxq; uint32_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &(rxq->rx_ring[rxq->rx_tail]); while ((desc < rxq->nb_rx_desc) && diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index e3b1051ba4..1a9eb35acc 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, /** * DPDK callback to get the number of used descriptors in a RX queue. * - * @param dev - * Pointer to the device structure. - * - * @param rx_queue_id - * The Rx queue. + * @param rx_queue + * The Rx queue pointer. * * @return * The number of used rx descriptor. 
* -EINVAL if the queue is invalid */ uint32_t -mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +mlx5_rx_queue_count(void *rx_queue) { - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq; + struct mlx5_rxq_data *rxq = rx_queue; + struct rte_eth_dev *dev; + + if (!rxq) { + rte_errno = EINVAL; + return -rte_errno; + } + + dev = &rte_eth_devices[rxq->port_id]; if (dev->rx_pkt_burst == NULL || dev->rx_pkt_burst == removed_rx_burst) { rte_errno = ENOTSUP; return -rte_errno; } - rxq = (*priv->rxqs)[rx_queue_id]; - if (!rxq) { - rte_errno = EINVAL; - return -rte_errno; - } + return rx_queue_count(rxq); } diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 3f2b99fb65..5e4ac7324d 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n); int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset); -uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id); +uint32_t mlx5_rx_queue_count(void *rx_queue); void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c index c6bf7cc132..30aac371c8 100644 --- a/drivers/net/netvsc/hn_rxtx.c +++ b/drivers/net/netvsc/hn_rxtx.c @@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg) * For this device that means how many packets are pending in the ring. 
*/ uint32_t -hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id) +hn_dev_rx_queue_count(void *rx_queue) { - struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id]; + struct hn_rx_queue *rxq = rx_queue; return rte_ring_count(rxq->rx_ring); } diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h index 43642408bc..2a2bac9338 100644 --- a/drivers/net/netvsc/hn_var.h +++ b/drivers/net/netvsc/hn_var.h @@ -215,7 +215,7 @@ int hn_dev_rx_queue_setup(struct rte_eth_dev *dev, void hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); void hn_dev_rx_queue_release(void *arg); -uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id); +uint32_t hn_dev_rx_queue_count(void *rx_queue); int hn_dev_rx_queue_status(void *rxq, uint16_t offset); void hn_dev_free_queues(struct rte_eth_dev *dev); diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 1402c5f84a..4b2ac4cc43 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) } uint32_t -nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx) +nfp_net_rx_queue_count(void *rx_queue) { struct nfp_net_rxq *rxq; struct nfp_net_rx_desc *rxds; uint32_t idx; uint32_t count; - rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx]; + rxq = rx_queue; idx = rxq->rd_p; diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index b0a8bf81b0..0fd50a6c22 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -275,8 +275,7 @@ struct nfp_net_rxq { } __rte_aligned(64); int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev); -uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev, - uint16_t queue_idx); +uint32_t nfp_net_rx_queue_count(void *rx_queue); uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); void nfp_net_rx_queue_release(void *rxq); diff --git 
a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 7871e3d30b..6696db6f6f 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_burst_mode *mode); int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_burst_mode *mode); -uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx); +uint32_t otx2_nix_rx_queue_count(void *rx_queue); int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt); int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset); int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 552e6bd43d..e6f8e5bfc1 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev, } uint32_t -otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx) +otx2_nix_rx_queue_count(void *rx_queue) { - struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx]; - struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_eth_rxq *rxq = rx_queue; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev); uint32_t head, tail; - nix_rx_head_tail_get(dev, &head, &tail, queue_idx); + nix_rx_head_tail_get(dev, &head, &tail, rxq->rq); return (tail - head) % rxq->qlen; } diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c index 2db0d000c3..4b5713f3ec 100644 --- a/drivers/net/sfc/sfc_ethdev.c +++ b/drivers/net/sfc/sfc_ethdev.c @@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid, * use any process-local pointers from the adapter data. 
*/ static uint32_t -sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid) +sfc_rx_queue_count(void *rx_queue) { - const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev); - struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev); - sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid; + struct sfc_dp_rxq *dp_rxq = rx_queue; + const struct sfc_dp_rx *dp_rx; struct sfc_rxq_info *rxq_info; - rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid); + dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq); + rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq); if ((rxq_info->state & SFC_RXQ_STARTED) == 0) return 0; - return sap->dp_rx->qdesc_npending(rxq_info->dp); + return dp_rx->qdesc_npending(dp_rxq); } /* diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c index 561a98fc81..0e87620e42 100644 --- a/drivers/net/thunderx/nicvf_ethdev.c +++ b/drivers/net/thunderx/nicvf_ethdev.c @@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq) if (dev->rx_pkt_burst == NULL) return; - while ((rxq_cnt = nicvf_dev_rx_queue_count(dev, - nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) { + while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) { nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts, NICVF_MAX_RX_FREE_THRESH); PMD_DRV_LOG(INFO, "nb_pkts=%d rxq_cnt=%d", nb_pkts, rxq_cnt); diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c index 91e09ff8d5..0d4f4ae87e 100644 --- a/drivers/net/thunderx/nicvf_rxtx.c +++ b/drivers/net/thunderx/nicvf_rxtx.c @@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue, } uint32_t -nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx) +nicvf_dev_rx_queue_count(void *rx_queue) { struct nicvf_rxq *rxq; - rxq = dev->data->rx_queues[queue_idx]; + rxq = rx_queue; return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK; } diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h index 
d6ed660b4e..271f329dc4 100644 --- a/drivers/net/thunderx/nicvf_rxtx.h +++ b/drivers/net/thunderx/nicvf_rxtx.h @@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init, *(uint64_t *)(&pkt->rearm_data) = init.value; } -uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx); +uint32_t nicvf_dev_rx_queue_count(void *rx_queue); uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx); uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts, diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h index 3021933965..569cd6a48f 100644 --- a/drivers/net/txgbe/txgbe_ethdev.h +++ b/drivers/net/txgbe/txgbe_ethdev.h @@ -446,8 +446,7 @@ int txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); -uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +uint32_t txgbe_dev_rx_queue_count(void *rx_queue); int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c index 1a261287d1..2a7cfdeedb 100644 --- a/drivers/net/txgbe/txgbe_rxtx.c +++ b/drivers/net/txgbe/txgbe_rxtx.c @@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, } uint32_t -txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +txgbe_dev_rx_queue_count(void *rx_queue) { #define TXGBE_RXQ_SCAN_INTERVAL 4 volatile struct txgbe_rx_desc *rxdp; struct txgbe_rx_queue *rxq; uint32_t desc = 0; - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = rx_queue; rxdp = &rxq->rx_ring[rxq->rx_tail]; while ((desc < rxq->nb_rx_desc) && diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c index a202931e9a..f2b3f142d8 100644 --- a/drivers/net/vhost/rte_eth_vhost.c +++ 
b/drivers/net/vhost/rte_eth_vhost.c @@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused, } static uint32_t -eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +eth_rx_queue_count(void *rx_queue) { struct vhost_queue *vq; - vq = dev->data->rx_queues[rx_queue_id]; + vq = rx_queue; if (vq == NULL) return 0; diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index afdc53b674..9642b7c00f 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) dev->data->rx_queues[queue_id] == NULL) return -EINVAL; - return (int)(*dev->rx_queue_count)(dev, queue_id); + return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]); } /** diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h index d2c9ec42c7..948c0b71c1 100644 --- a/lib/ethdev/rte_ethdev_core.h +++ b/lib/ethdev/rte_ethdev_core.h @@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq, /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */ -typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev, - uint16_t rx_queue_id); +typedef uint32_t (*eth_rx_queue_count_t)(void *rxq); /**< @internal Get number of used descriptors on a receive queue. 
 */ typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);

From patchwork Mon Oct 4 13:55:59 2021 X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 100447 From: Konstantin Ananyev To: dev@dpdk.org Date: Mon, 4 Oct 2021 14:55:59 +0100 Message-Id: <20211004135603.20593-4-konstantin.ananyev@intel.com> In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure

Copy the public function pointers (rx_pkt_burst(), etc.) and related pointers to internal data from the rte_eth_dev structure into a separate flat array. That array will remain in a public header. The intention here is to make rte_eth_dev and related structures internal. That should allow possible future changes to core eth_dev structures to be transparent to the user and help to avoid ABI/API breakages. The plan is to keep a minimal part of the data from rte_eth_dev public, so that we can still use inline functions for 'fast' calls (like rte_eth_rx_burst(), etc.) to avoid or minimize slowdown.
Signed-off-by: Konstantin Ananyev --- lib/ethdev/ethdev_private.c | 52 ++++++++++++++++++++++++++++++++++++ lib/ethdev/ethdev_private.h | 7 +++++ lib/ethdev/rte_ethdev.c | 27 +++++++++++++++++++ lib/ethdev/rte_ethdev_core.h | 45 +++++++++++++++++++++++++++++++ 4 files changed, 131 insertions(+) diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 012cf73ca2..3eeda6e9f9 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str); return str == NULL ? -1 : 0; } + +static uint16_t +dummy_eth_rx_burst(__rte_unused void *rxq, + __rte_unused struct rte_mbuf **rx_pkts, + __rte_unused uint16_t nb_pkts) +{ + RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n"); + rte_errno = ENOTSUP; + return 0; +} + +static uint16_t +dummy_eth_tx_burst(__rte_unused void *txq, + __rte_unused struct rte_mbuf **tx_pkts, + __rte_unused uint16_t nb_pkts) +{ + RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n"); + rte_errno = ENOTSUP; + return 0; +} + +void +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo) +{ + static void *dummy_data[RTE_MAX_QUEUES_PER_PORT]; + static const struct rte_eth_fp_ops dummy_ops = { + .rx_pkt_burst = dummy_eth_rx_burst, + .tx_pkt_burst = dummy_eth_tx_burst, + .rxq = {.data = dummy_data, .clbk = dummy_data,}, + .txq = {.data = dummy_data, .clbk = dummy_data,}, + }; + + *fpo = dummy_ops; +} + +void +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo, + const struct rte_eth_dev *dev) +{ + fpo->rx_pkt_burst = dev->rx_pkt_burst; + fpo->tx_pkt_burst = dev->tx_pkt_burst; + fpo->tx_pkt_prepare = dev->tx_pkt_prepare; + fpo->rx_queue_count = dev->rx_queue_count; + fpo->rx_descriptor_status = dev->rx_descriptor_status; + fpo->tx_descriptor_status = dev->tx_descriptor_status; + + fpo->rxq.data = dev->data->rx_queues; + fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs; + + 
fpo->txq.data = dev->data->tx_queues; + fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs; +} diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h index 3724429577..40333e7651 100644 --- a/lib/ethdev/ethdev_private.h +++ b/lib/ethdev/ethdev_private.h @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp, /* Parse devargs value for representor parameter. */ int rte_eth_devargs_parse_representor_ports(char *str, void *data); +/* reset eth 'fast' API to dummy values */ +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo); + +/* setup eth 'fast' API to ethdev values */ +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo, + const struct rte_eth_dev *dev); + #endif /* _ETH_PRIVATE_H_ */ diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 424bc260fa..036c82cbfb 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -44,6 +44,9 @@ static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data"; struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS]; +/* public 'fast' API */ +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS]; + /* spinlock for eth device callbacks */ static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER; @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev) rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_DESTROY, NULL); + eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id); + rte_spinlock_lock(ð_dev_shared_data->ownership_lock); eth_dev->state = RTE_ETH_DEV_UNUSED; @@ -1788,6 +1793,9 @@ rte_eth_dev_start(uint16_t port_id) (*dev->dev_ops->link_update)(dev, 0); } + /* expose selection of PMD rx/tx function */ + eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev); + rte_ethdev_trace_start(port_id); return 0; } @@ -1810,6 +1818,9 @@ rte_eth_dev_stop(uint16_t port_id) return 0; } + /* point rx/tx functions to dummy ones */ + eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id); + dev->data->dev_started = 0; ret = 
(*dev->dev_ops->dev_stop)(dev); rte_ethdev_trace_stop(port_id, ret); @@ -4568,6 +4579,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id) return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id)); } +RTE_INIT(eth_dev_init_fp_ops) +{ + uint32_t i; + + for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++) + eth_dev_fp_ops_reset(rte_eth_fp_ops + i); +} + RTE_INIT(eth_dev_init_cb_lists) { uint16_t i; @@ -4736,6 +4755,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev) if (dev == NULL) return; + /* + * for a secondary process, at this point we expect the device + * to be already 'usable', so shared data and all function pointers + * for 'fast' devops have to be set up properly inside rte_eth_dev. + */ + if (rte_eal_process_type() == RTE_PROC_SECONDARY) + eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev); + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL); dev->state = RTE_ETH_DEV_ATTACHED; diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h index 948c0b71c1..fe47a660c7 100644 --- a/lib/ethdev/rte_ethdev_core.h +++ b/lib/ethdev/rte_ethdev_core.h @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset); typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset); /**< @internal Check the status of a Tx descriptor */ +/** + * @internal + * Structure used to hold opaque pointers to internal ethdev Rx/Tx + * queues data. + * The main purpose of exposing these pointers at all is to allow the + * compiler to fetch this data for 'fast' ethdev inline functions in advance. + */ +struct rte_ethdev_qdata { + void **data; + /**< points to array of internal queue data pointers */ + void **clbk; + /**< points to array of queue callback data pointers */ +}; + +/** + * @internal + * 'fast' ethdev functions and related data are held in a flat array, + * one entry per ethdev. 
+ */ +struct rte_eth_fp_ops { + + /** first 64B line */ + eth_rx_burst_t rx_pkt_burst; + /**< PMD receive function. */ + eth_tx_burst_t tx_pkt_burst; + /**< PMD transmit function. */ + eth_tx_prep_t tx_pkt_prepare; + /**< PMD transmit prepare function. */ + eth_rx_queue_count_t rx_queue_count; + /**< Get the number of used RX descriptors. */ + eth_rx_descriptor_status_t rx_descriptor_status; + /**< Check the status of a Rx descriptor. */ + eth_tx_descriptor_status_t tx_descriptor_status; + /**< Check the status of a Tx descriptor. */ + uintptr_t reserved[2]; + + /** second 64B line */ + struct rte_ethdev_qdata rxq; + struct rte_ethdev_qdata txq; + uintptr_t reserved2[4]; + +} __rte_cache_aligned; + +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS]; + /** * @internal

From patchwork Mon Oct 4 13:56:00 2021 X-Patchwork-Submitter: "Ananyev, Konstantin" X-Patchwork-Id: 100435 From: Konstantin Ananyev To: dev@dpdk.org Date: Mon, 4 Oct 2021 14:56:00 +0100 Message-Id: <20211004135603.20593-5-konstantin.ananyev@intel.com> In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com> Subject: [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array

Rework 'fast' burst functions to use rte_eth_fp_ops[]. 
While it is an API/ABI breakage, this change is intended to be transparent for both users (no changes in the user app are required) and PMD developers (no changes in the PMD are required). One extra thing to note: with these changes, RX/TX callback invocation will cause an extra function call. That might cause some insignificant slowdown for code paths where RX/TX callbacks are heavily involved. Signed-off-by: Konstantin Ananyev --- lib/ethdev/ethdev_private.c | 31 +++++ lib/ethdev/rte_ethdev.h | 242 ++++++++++++++++++++++++++---------- lib/ethdev/version.map | 5 + 3 files changed, 210 insertions(+), 68 deletions(-) diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 3eeda6e9f9..27d29b2ac6 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo, fpo->txq.data = dev->data->tx_queues; fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs; } + +uint16_t +__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id, + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts, + void *opaque) +{ + const struct rte_eth_rxtx_callback *cb = opaque; + + while (cb != NULL) { + nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx, + nb_pkts, cb->param); + cb = cb->next; + } + + return nb_rx; +} + +uint16_t +__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id, + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque) +{ + const struct rte_eth_rxtx_callback *cb = opaque; + + while (cb != NULL) { + nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts, + cb->param); + cb = cb->next; + } + + return nb_pkts; +} diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 9642b7c00f..7f68be406e 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id, #include +/** + * @internal + * Helper routine for eth driver rx_burst API. 
+ * Should be called at exit from the PMD's rte_eth_rx_burst implementation. + * Does necessary post-processing - invokes RX callbacks if any, etc. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * The index of the receive queue from which to retrieve input packets. + * @param rx_pkts + * The address of an array of pointers to *rte_mbuf* structures that + * have been retrieved from the device. + * @param nb_rx + * The number of packets that were retrieved from the device. + * @param nb_pkts + * The number of elements in *rx_pkts* array. + * @param opaque + * Opaque pointer of RX queue callback related data. + * + * @return + * The number of packets effectively supplied to the *rx_pkts* array. + */ +uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id, + struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts, + void *opaque); + /** * * Retrieve a burst of input packets from a receive queue of an Ethernet @@ -4995,23 +5022,37 @@ static inline uint16_t rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **rx_pkts, const uint16_t nb_pkts) { - struct rte_eth_dev *dev = &rte_eth_devices[port_id]; uint16_t nb_rx; + struct rte_eth_fp_ops *p; + void *cb, *qd; + +#ifdef RTE_ETHDEV_DEBUG_RX + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); + return 0; + } +#endif + + /* fetch pointer to queue data */ + p = &rte_eth_fp_ops[port_id]; + qd = p->rxq.data[queue_id]; #ifdef RTE_ETHDEV_DEBUG_RX RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0); - if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id); + if (qd == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n", + queue_id, port_id); return 0; } #endif - nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id], - rx_pkts, 
nb_pkts); + + nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts); #ifdef RTE_ETHDEV_RXTX_CALLBACKS - struct rte_eth_rxtx_callback *cb; /* __ATOMIC_RELEASE memory order was used when the * call back was inserted into the list. @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is * not required. */ - cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id], - __ATOMIC_RELAXED); - - if (unlikely(cb != NULL)) { - do { - nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx, - nb_pkts, cb->param); - cb = cb->next; - } while (cb != NULL); - } + cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED); + if (unlikely(cb != NULL)) + nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx, + nb_pkts, cb); #endif rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx); @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, static inline int rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) { - struct rte_eth_dev *dev; + struct rte_eth_fp_ops *p; + void *qd; + + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); + return -EINVAL; + } + + /* fetch pointer to queue data */ + p = &rte_eth_fp_ops[port_id]; + qd = p->rxq.data[queue_id]; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); - dev = &rte_eth_devices[port_id]; - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP); - if (queue_id >= dev->data->nb_rx_queues || - dev->data->rx_queues[queue_id] == NULL) + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP); + if (qd == NULL) return -EINVAL; - return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]); + return (int)(*p->rx_queue_count)(qd); } /** @@ -5133,21 +5179,30 @@ static inline int rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id, uint16_t offset) { - struct rte_eth_dev *dev; - void *rxq; + 
struct rte_eth_fp_ops *p; + void *qd; #ifdef RTE_ETHDEV_DEBUG_RX - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); + return -EINVAL; + } #endif - dev = &rte_eth_devices[port_id]; + + /* fetch pointer to queue data */ + p = &rte_eth_fp_ops[port_id]; + qd = p->rxq.data[queue_id]; + #ifdef RTE_ETHDEV_DEBUG_RX - if (queue_id >= dev->data->nb_rx_queues) + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + if (qd == NULL) return -ENODEV; #endif - RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP); - rxq = dev->data->rx_queues[queue_id]; - - return (*dev->rx_descriptor_status)(rxq, offset); + RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP); + return (*p->rx_descriptor_status)(qd, offset); } /**@{@name Tx hardware descriptor states @@ -5194,23 +5249,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id, static inline int rte_eth_tx_descriptor_status(uint16_t port_id, uint16_t queue_id, uint16_t offset) { - struct rte_eth_dev *dev; - void *txq; + struct rte_eth_fp_ops *p; + void *qd; #ifdef RTE_ETHDEV_DEBUG_TX - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); + return -EINVAL; + } #endif - dev = &rte_eth_devices[port_id]; + + /* fetch pointer to queue data */ + p = &rte_eth_fp_ops[port_id]; + qd = p->txq.data[queue_id]; + #ifdef RTE_ETHDEV_DEBUG_TX - if (queue_id >= dev->data->nb_tx_queues) + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + if (qd == NULL) return -ENODEV; #endif - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP); - txq = dev->data->tx_queues[queue_id]; - - return (*dev->tx_descriptor_status)(txq, offset); + RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP); + 
return (*p->tx_descriptor_status)(qd, offset); } +/** + * @internal + * Helper routine for eth driver tx_burst API. + * Should be called before entering the PMD's rte_eth_tx_burst implementation. + * Does necessary pre-processing - invokes TX callbacks if any, etc. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * The index of the transmit queue through which output packets must be + * sent. + * @param tx_pkts + * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures + * which contain the output packets. + * @param nb_pkts + * The maximum number of packets to transmit. + * @param opaque + * Opaque pointer of TX queue callback related data. + * + * @return + * The number of output packets to transmit. + */ +uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id, + struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque); + /** * Send a burst of output packets on a transmit queue of an Ethernet device. * @@ -5281,20 +5367,34 @@ static inline uint16_t rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + struct rte_eth_fp_ops *p; + void *cb, *qd; + +#ifdef RTE_ETHDEV_DEBUG_TX + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); + return 0; + } +#endif + + /* fetch pointer to queue data */ + p = &rte_eth_fp_ops[port_id]; + qd = p->txq.data[queue_id]; #ifdef RTE_ETHDEV_DEBUG_TX RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); - RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0); - if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id); + if (qd == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n", + queue_id, port_id); return 0; } #endif #ifdef RTE_ETHDEV_RXTX_CALLBACKS - struct rte_eth_rxtx_callback *cb; /* __ATOMIC_RELEASE memory order was used when the * call back was inserted into the list. 
@@ -5302,21 +5402,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is * not required. */ - cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id], - __ATOMIC_RELAXED); - - if (unlikely(cb != NULL)) { - do { - nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts, - cb->param); - cb = cb->next; - } while (cb != NULL); - } + cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED); + if (unlikely(cb != NULL)) + nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts, + nb_pkts, cb); #endif - rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, - nb_pkts); - return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts); + nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts); + + rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts); + return nb_pkts; } /** @@ -5379,31 +5474,42 @@ static inline uint16_t rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - struct rte_eth_dev *dev; + struct rte_eth_fp_ops *p; + void *qd; #ifdef RTE_ETHDEV_DEBUG_TX - if (!rte_eth_dev_is_valid_port(port_id)) { - RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id); + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); rte_errno = ENODEV; return 0; } #endif - dev = &rte_eth_devices[port_id]; + /* fetch pointer to queue data */ + p = &rte_eth_fp_ops[port_id]; + qd = p->txq.data[queue_id]; #ifdef RTE_ETHDEV_DEBUG_TX - if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id); + if (!rte_eth_dev_is_valid_port(port_id)) { + RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id); + rte_errno = ENODEV; + return 0; + } + if (qd == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n", + queue_id, port_id); rte_errno = EINVAL; return 0; } #endif - if 
(!dev->tx_pkt_prepare) + if (!p->tx_pkt_prepare) return nb_pkts; - return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id], - tx_pkts, nb_pkts); + return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts); } #else diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 904bce6ea1..2348ec3c3c 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -1,6 +1,10 @@ DPDK_22 { global: + # internal functions called by public inline ones + __rte_eth_rx_epilog; + __rte_eth_tx_prolog; + rte_eth_add_first_rx_callback; rte_eth_add_rx_callback; rte_eth_add_tx_callback; @@ -76,6 +80,7 @@ DPDK_22 { rte_eth_find_next_of; rte_eth_find_next_owned_by; rte_eth_find_next_sibling; + rte_eth_fp_ops; rte_eth_iterator_cleanup; rte_eth_iterator_init; rte_eth_iterator_next;

From patchwork Mon Oct 4 13:56:01 2021
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 100437
From: Konstantin Ananyev
To: dev@dpdk.org
Date: Mon, 4 Oct 2021 14:56:01 +0100
Message-Id: <20211004135603.20593-6-konstantin.ananyev@intel.com>
In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com>
References: <20211001140255.5726-1-konstantin.ananyev@intel.com> <20211004135603.20593-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 5/7] ethdev: add API to retrieve multiple ethernet addresses

Introduce rte_eth_macaddrs_get() to allow the user to retrieve all Ethernet addresses assigned to a given port. Change testpmd to use this new function and avoid referencing rte_eth_devices[] directly.
Signed-off-by: Konstantin Ananyev --- app/test-pmd/config.c | 23 +++++++++++------------ doc/guides/rel_notes/release_21_11.rst | 5 +++++ lib/ethdev/rte_ethdev.c | 25 +++++++++++++++++++++++++ lib/ethdev/rte_ethdev.h | 19 +++++++++++++++++++ lib/ethdev/version.map | 3 +++ 5 files changed, 63 insertions(+), 12 deletions(-) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 9c66329e96..7221644230 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -5215,20 +5215,20 @@ show_macs(portid_t port_id) { char buf[RTE_ETHER_ADDR_FMT_SIZE]; struct rte_eth_dev_info dev_info; - struct rte_ether_addr *addr; - uint32_t i, num_macs = 0; - struct rte_eth_dev *dev; - - dev = &rte_eth_devices[port_id]; + int32_t i, rc, num_macs = 0; if (eth_dev_info_get_print_err(port_id, &dev_info)) return; - for (i = 0; i < dev_info.max_mac_addrs; i++) { - addr = &dev->data->mac_addrs[i]; + struct rte_ether_addr addr[dev_info.max_mac_addrs]; + rc = rte_eth_macaddrs_get(port_id, addr, dev_info.max_mac_addrs); + if (rc < 0) + return; + + for (i = 0; i < rc; i++) { /* skip zero address */ - if (rte_is_zero_ether_addr(addr)) + if (rte_is_zero_ether_addr(&addr[i])) continue; num_macs++; @@ -5236,14 +5236,13 @@ show_macs(portid_t port_id) printf("Number of MAC address added: %d\n", num_macs); - for (i = 0; i < dev_info.max_mac_addrs; i++) { - addr = &dev->data->mac_addrs[i]; + for (i = 0; i < rc; i++) { /* skip zero address */ - if (rte_is_zero_ether_addr(addr)) + if (rte_is_zero_ether_addr(&addr[i])) continue; - rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, addr); + rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, &addr[i]); printf(" %s\n", buf); } } diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index fd80538b6c..91c392c14e 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -125,6 +125,11 @@ New Features * Added tests to validate packets hard expiry. 
* Added tests to verify tunnel header verification in IPsec inbound. +* **Added new function to ethdev lib.** + + * Added ``rte_eth_macaddrs_get`` to allow user to retrieve all Ethernet + addresses assigned to a given Ethernet port. + Removed Items ------------- diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 036c82cbfb..b051eff70e 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -3574,6 +3574,31 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask, return ret; } +int +rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], uint32_t num) +{ + int32_t ret; + struct rte_eth_dev *dev; + struct rte_eth_dev_info dev_info; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + ret = rte_eth_dev_info_get(port_id, &dev_info); + if (ret != 0) + return ret; + + if (ma == NULL) { + RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__); + return -EINVAL; + } + + num = RTE_MIN(dev_info.max_mac_addrs, num); + memcpy(ma, dev->data->mac_addrs, num * sizeof(ma[0])); + + return num; +} + int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr) { diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 7f68be406e..047f7c9c5a 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -3037,6 +3037,25 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, */ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr); +/** + * Retrieve the Ethernet addresses of an Ethernet device. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param ma + * A pointer to an array of structures of type *ether_addr* to be filled with + * the Ethernet addresses of the Ethernet device. + * @param num + * Number of elements in the *ma* array. + * @return + * - number of retrieved addresses if successful + * - (-ENODEV) if *port_id* invalid. + * - (-EINVAL) if bad parameter.
+ */ +__rte_experimental +int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr ma[], + uint32_t num); + /** * Retrieve the contextual information of an Ethernet device. * diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 2348ec3c3c..0881202381 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -252,6 +252,9 @@ EXPERIMENTAL { rte_mtr_meter_policy_delete; rte_mtr_meter_policy_update; rte_mtr_meter_policy_validate; + + # added in 21.11 + rte_eth_macaddrs_get; }; INTERNAL {

From patchwork Mon Oct 4 13:56:02 2021
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 100438
From: Konstantin Ananyev
To: dev@dpdk.org
Date: Mon, 4 Oct 2021 14:56:02 +0100
Message-Id: <20211004135603.20593-7-konstantin.ananyev@intel.com>
In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com>
References: <20211001140255.5726-1-konstantin.ananyev@intel.com> <20211004135603.20593-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 6/7] ethdev: remove legacy Rx descriptor done API

The legacy rte_eth_rx_descriptor_done() API is removed; rte_eth_rx_descriptor_status() should be used as a replacement.
Signed-off-by: Andrew Rybchenko Reviewed-by: Ferruh Yigit Acked-by: Konstantin Ananyev --- doc/guides/nics/features.rst | 6 +----- doc/guides/rel_notes/deprecation.rst | 5 ----- doc/guides/rel_notes/release_21_11.rst | 4 ++++ drivers/net/e1000/e1000_ethdev.h | 4 ---- drivers/net/e1000/em_ethdev.c | 1 - drivers/net/e1000/em_rxtx.c | 17 ---------------- drivers/net/e1000/igb_ethdev.c | 2 -- drivers/net/e1000/igb_rxtx.c | 17 ---------------- drivers/net/fm10k/fm10k.h | 3 --- drivers/net/fm10k/fm10k_ethdev.c | 1 - drivers/net/fm10k/fm10k_rxtx.c | 25 ------------------------ drivers/net/i40e/i40e_ethdev.c | 1 - drivers/net/i40e/i40e_ethdev_vf.c | 1 - drivers/net/i40e/i40e_rxtx.c | 26 ------------------------- drivers/net/i40e/i40e_rxtx.h | 1 - drivers/net/igc/igc_ethdev.c | 1 - drivers/net/igc/igc_txrx.c | 18 ----------------- drivers/net/igc/igc_txrx.h | 2 -- drivers/net/ixgbe/ixgbe_ethdev.c | 2 -- drivers/net/ixgbe/ixgbe_ethdev.h | 2 -- drivers/net/ixgbe/ixgbe_rxtx.c | 18 ----------------- drivers/net/octeontx2/otx2_ethdev.c | 1 - drivers/net/octeontx2/otx2_ethdev.h | 1 - drivers/net/octeontx2/otx2_ethdev_ops.c | 12 ------------ drivers/net/sfc/sfc_ethdev.c | 17 ---------------- drivers/net/virtio/virtio_ethdev.c | 1 - lib/ethdev/rte_ethdev.c | 1 - lib/ethdev/rte_ethdev.h | 25 ------------------------ lib/ethdev/rte_ethdev_core.h | 4 ---- 29 files changed, 5 insertions(+), 214 deletions(-) diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index 4fce8cd1c9..a02ef25409 100644 --- a/doc/guides/nics/features.rst +++ b/doc/guides/nics/features.rst @@ -662,14 +662,10 @@ Rx descriptor status -------------------- Supports check the status of a Rx descriptor. When ``rx_descriptor_status`` is -used, status can be "Available", "Done" or "Unavailable". When -``rx_descriptor_done`` is used, status can be "DD bit is set" or "DD bit is -not set". +used, status can be "Available", "Done" or "Unavailable". 
* **[implements] rte_eth_dev**: ``rx_descriptor_status``. * **[related] API**: ``rte_eth_rx_descriptor_status()``. -* **[implements] rte_eth_dev**: ``rx_descriptor_done``. -* **[related] API**: ``rte_eth_rx_descriptor_done()``. .. _nic_features_tx_descriptor_status: diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 05fc2fdee7..82e843a0b3 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -106,11 +106,6 @@ Deprecation Notices the device packet overhead can be calculated as: ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu`` -* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done`` - will be removed in 21.11. - Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status`` - APIs can be used as replacement. - * ethdev: The port mirroring API can be replaced with a more fine grain flow API. The structs ``rte_eth_mirror_conf``, ``rte_eth_vlan_mirror`` and the functions ``rte_eth_mirror_rule_set``, ``rte_eth_mirror_rule_reset`` will be marked diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 91c392c14e..6055551443 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -150,6 +150,10 @@ Removed Items blacklist/whitelist are removed. Users must use the new block/allow list arguments. +* ethdev: Removed ``rx_descriptor_done`` dev_ops and + ``rte_eth_rx_descriptor_done``. Existing ``rte_eth_rx_descriptor_status`` + APIs can be used as a replacement. 
+ API Changes ----------- diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h index 460e130a83..fff52958df 100644 --- a/drivers/net/e1000/e1000_ethdev.h +++ b/drivers/net/e1000/e1000_ethdev.h @@ -401,8 +401,6 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, uint32_t eth_igb_rx_queue_count(void *rx_queue); -int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset); - int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset); int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset); @@ -477,8 +475,6 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, uint32_t eth_em_rx_queue_count(void *rx_queue); -int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset); - int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset); int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index a0ca371b02..9e157e4ffe 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -247,7 +247,6 @@ eth_em_dev_init(struct rte_eth_dev *eth_dev) eth_dev->dev_ops = ð_em_ops; eth_dev->rx_queue_count = eth_em_rx_queue_count; - eth_dev->rx_descriptor_done = eth_em_rx_descriptor_done; eth_dev->rx_descriptor_status = eth_em_rx_descriptor_status; eth_dev->tx_descriptor_status = eth_em_tx_descriptor_status; eth_dev->rx_pkt_burst = (eth_rx_burst_t)ð_em_recv_pkts; diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c index 40de36cb20..13ea3a77f4 100644 --- a/drivers/net/e1000/em_rxtx.c +++ b/drivers/net/e1000/em_rxtx.c @@ -1511,23 +1511,6 @@ eth_em_rx_queue_count(void *rx_queue) return desc; } -int -eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset) -{ - volatile struct e1000_rx_desc *rxdp; - struct em_rx_queue *rxq = rx_queue; - uint32_t desc; - - if (unlikely(offset >= rxq->nb_rx_desc)) - return 0; - desc = rxq->rx_tail + offset; - if (desc >= 
rxq->nb_rx_desc) - desc -= rxq->nb_rx_desc; - - rxdp = &rxq->rx_ring[desc]; - return !!(rxdp->status & E1000_RXD_STAT_DD); -} - int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset) { diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index d80fad01e3..e1bc3852fc 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -726,7 +726,6 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev) eth_dev->dev_ops = ð_igb_ops; eth_dev->rx_queue_count = eth_igb_rx_queue_count; - eth_dev->rx_descriptor_done = eth_igb_rx_descriptor_done; eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status; eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status; eth_dev->rx_pkt_burst = ð_igb_recv_pkts; @@ -920,7 +919,6 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev) PMD_INIT_FUNC_TRACE(); eth_dev->dev_ops = &igbvf_eth_dev_ops; - eth_dev->rx_descriptor_done = eth_igb_rx_descriptor_done; eth_dev->rx_descriptor_status = eth_igb_rx_descriptor_status; eth_dev->tx_descriptor_status = eth_igb_tx_descriptor_status; eth_dev->rx_pkt_burst = ð_igb_recv_pkts; diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c index 3210a0e008..0ee1b8d48d 100644 --- a/drivers/net/e1000/igb_rxtx.c +++ b/drivers/net/e1000/igb_rxtx.c @@ -1791,23 +1791,6 @@ eth_igb_rx_queue_count(void *rx_queue) return desc; } -int -eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset) -{ - volatile union e1000_adv_rx_desc *rxdp; - struct igb_rx_queue *rxq = rx_queue; - uint32_t desc; - - if (unlikely(offset >= rxq->nb_rx_desc)) - return 0; - desc = rxq->rx_tail + offset; - if (desc >= rxq->nb_rx_desc) - desc -= rxq->nb_rx_desc; - - rxdp = &rxq->rx_ring[desc]; - return !!(rxdp->wb.upper.status_error & E1000_RXD_STAT_DD); -} - int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset) { diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h index 648d12a1b4..17c73c4dc5 100644 --- a/drivers/net/fm10k/fm10k.h +++ 
b/drivers/net/fm10k/fm10k.h @@ -326,9 +326,6 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue, uint32_t fm10k_dev_rx_queue_count(void *rx_queue); -int -fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset); - int fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c index 3236290e40..f9c287a6ba 100644 --- a/drivers/net/fm10k/fm10k_ethdev.c +++ b/drivers/net/fm10k/fm10k_ethdev.c @@ -3062,7 +3062,6 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev) dev->dev_ops = &fm10k_eth_dev_ops; dev->rx_queue_count = fm10k_dev_rx_queue_count; - dev->rx_descriptor_done = fm10k_dev_rx_descriptor_done; dev->rx_descriptor_status = fm10k_dev_rx_descriptor_status; dev->tx_descriptor_status = fm10k_dev_tx_descriptor_status; dev->rx_pkt_burst = &fm10k_recv_pkts; diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c index eab798e52c..b3515ae96a 100644 --- a/drivers/net/fm10k/fm10k_rxtx.c +++ b/drivers/net/fm10k/fm10k_rxtx.c @@ -393,31 +393,6 @@ fm10k_dev_rx_queue_count(void *rx_queue) return desc; } -int -fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset) -{ - volatile union fm10k_rx_desc *rxdp; - struct fm10k_rx_queue *rxq = rx_queue; - uint16_t desc; - int ret; - - if (unlikely(offset >= rxq->nb_desc)) { - PMD_DRV_LOG(ERR, "Invalid RX descriptor offset %u", offset); - return 0; - } - - desc = rxq->next_dd + offset; - if (desc >= rxq->nb_desc) - desc -= rxq->nb_desc; - - rxdp = &rxq->hw_ring[desc]; - - ret = !!(rxdp->w.status & - rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)); - - return ret; -} - int fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset) { diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index bd97d93dd7..e5e26783bf 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -1434,7 +1434,6 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused) dev->dev_ops = 
&i40e_eth_dev_ops; dev->rx_queue_count = i40e_dev_rx_queue_count; - dev->rx_descriptor_done = i40e_dev_rx_descriptor_done; dev->rx_descriptor_status = i40e_dev_rx_descriptor_status; dev->tx_descriptor_status = i40e_dev_tx_descriptor_status; dev->rx_pkt_burst = i40e_recv_pkts; diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c index e8dd6d1dab..d669ffd250 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -1571,7 +1571,6 @@ i40evf_dev_init(struct rte_eth_dev *eth_dev) /* assign ops func pointer */ eth_dev->dev_ops = &i40evf_eth_dev_ops; eth_dev->rx_queue_count = i40e_dev_rx_queue_count; - eth_dev->rx_descriptor_done = i40e_dev_rx_descriptor_done; eth_dev->rx_descriptor_status = i40e_dev_rx_descriptor_status; eth_dev->tx_descriptor_status = i40e_dev_tx_descriptor_status; eth_dev->rx_pkt_burst = &i40e_recv_pkts; diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 5493ae6bba..fad432d1bd 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -2145,32 +2145,6 @@ i40e_dev_rx_queue_count(void *rx_queue) return desc; } -int -i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset) -{ - volatile union i40e_rx_desc *rxdp; - struct i40e_rx_queue *rxq = rx_queue; - uint16_t desc; - int ret; - - if (unlikely(offset >= rxq->nb_rx_desc)) { - PMD_DRV_LOG(ERR, "Invalid RX descriptor id %u", offset); - return 0; - } - - desc = rxq->rx_tail + offset; - if (desc >= rxq->nb_rx_desc) - desc -= rxq->nb_rx_desc; - - rxdp = &(rxq->rx_ring[desc]); - - ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) & - I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) & - (1 << I40E_RX_DESC_STATUS_DD_SHIFT)); - - return ret; -} - int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset) { diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index a08b80f020..d495a741b6 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ 
b/drivers/net/i40e/i40e_rxtx.h @@ -226,7 +226,6 @@ int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq); void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq); uint32_t i40e_dev_rx_queue_count(void *rx_queue); -int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset); int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index 224a095483..9ddfe68a78 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -1227,7 +1227,6 @@ eth_igc_dev_init(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); dev->dev_ops = ð_igc_ops; - dev->rx_descriptor_done = eth_igc_rx_descriptor_done; dev->rx_queue_count = eth_igc_rx_queue_count; dev->rx_descriptor_status = eth_igc_rx_descriptor_status; dev->tx_descriptor_status = eth_igc_tx_descriptor_status; diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c index 437992ecdf..2498cfd290 100644 --- a/drivers/net/igc/igc_txrx.c +++ b/drivers/net/igc/igc_txrx.c @@ -756,24 +756,6 @@ uint32_t eth_igc_rx_queue_count(void *rx_queue) return desc; } -int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset) -{ - volatile union igc_adv_rx_desc *rxdp; - struct igc_rx_queue *rxq = rx_queue; - uint32_t desc; - - if (unlikely(!rxq || offset >= rxq->nb_rx_desc)) - return 0; - - desc = rxq->rx_tail + offset; - if (desc >= rxq->nb_rx_desc) - desc -= rxq->nb_rx_desc; - - rxdp = &rxq->rx_ring[desc]; - return !!(rxdp->wb.upper.status_error & - rte_cpu_to_le_32(IGC_RXD_STAT_DD)); -} - int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset) { struct igc_rx_queue *rxq = rx_queue; diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h index b0c4b3ebd9..3b4c7450cd 100644 --- a/drivers/net/igc/igc_txrx.h +++ b/drivers/net/igc/igc_txrx.h @@ -24,8 +24,6 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, 
uint32_t eth_igc_rx_queue_count(void *rx_queue); -int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset); - int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset); int eth_igc_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index 47693c0c47..78f61d3dac 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -1057,7 +1057,6 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) eth_dev->dev_ops = &ixgbe_eth_dev_ops; eth_dev->rx_queue_count = ixgbe_dev_rx_queue_count; - eth_dev->rx_descriptor_done = ixgbe_dev_rx_descriptor_done; eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status; eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status; eth_dev->rx_pkt_burst = &ixgbe_recv_pkts; @@ -1546,7 +1545,6 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev) PMD_INIT_FUNC_TRACE(); eth_dev->dev_ops = &ixgbevf_eth_dev_ops; - eth_dev->rx_descriptor_done = ixgbe_dev_rx_descriptor_done; eth_dev->rx_descriptor_status = ixgbe_dev_rx_descriptor_status; eth_dev->tx_descriptor_status = ixgbe_dev_tx_descriptor_status; eth_dev->rx_pkt_burst = &ixgbe_recv_pkts; diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h index c5027be1dc..6b7a4079db 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.h +++ b/drivers/net/ixgbe/ixgbe_ethdev.h @@ -604,8 +604,6 @@ int ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id, uint32_t ixgbe_dev_rx_queue_count(void *rx_queue); -int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset); - int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset); diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index 1f802851e3..8e056db761 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -3281,24 +3281,6 @@ 
ixgbe_dev_rx_queue_count(void *rx_queue)

 	return desc;
 }

-int
-ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union ixgbe_adv_rx_desc *rxdp;
-	struct ixgbe_rx_queue *rxq = rx_queue;
-	uint32_t desc;
-
-	if (unlikely(offset >= rxq->nb_rx_desc))
-		return 0;
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &rxq->rx_ring[desc];
-	return !!(rxdp->wb.upper.status_error &
-			rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD));
-}
-
 int
 ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e..4b33056085 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -2449,7 +2449,6 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	int rc, max_entries;

 	eth_dev->dev_ops = &otx2_eth_dev_ops;
-	eth_dev->rx_descriptor_done = otx2_nix_rx_descriptor_done;
 	eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
 	eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
 	eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 6696db6f6f..d28fcaa281 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -433,7 +433,6 @@ int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index e6f8e5bfc1..3a763f691b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -365,18 +365,6 @@ nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
 	return 0;
 }

-int
-otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	struct otx2_eth_rxq *rxq = rx_queue;
-	uint32_t head, tail;
-
-	nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
-			     &head, &tail, rxq->rq);
-
-	return nix_offset_has_packet(head, tail, offset);
-}
-
 int
 otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 4b5713f3ec..c9b01480f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1296,21 +1296,6 @@ sfc_rx_queue_count(void *rx_queue)
 	return dp_rx->qdesc_npending(dp_rxq);
 }

-/*
- * The function is used by the secondary process as well. It must not
- * use any process-local pointers from the adapter data.
- */
-static int
-sfc_rx_descriptor_done(void *queue, uint16_t offset)
-{
-	struct sfc_dp_rxq *dp_rxq = queue;
-	const struct sfc_dp_rx *dp_rx;
-
-	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
-
-	return offset < dp_rx->qdesc_npending(dp_rxq);
-}
-
 /*
  * The function is used by the secondary process as well. It must not
  * use any process-local pointers from the adapter data.
@@ -2045,7 +2030,6 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = dp_tx->pkt_burst;

 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_ops;
@@ -2153,7 +2137,6 @@ sfc_eth_dev_secondary_init(struct rte_eth_dev *dev, uint32_t logtype_main)
 	dev->tx_pkt_prepare = dp_tx->pkt_prepare;
 	dev->tx_pkt_burst = dp_tx->pkt_burst;
 	dev->rx_queue_count = sfc_rx_queue_count;
-	dev->rx_descriptor_done = sfc_rx_descriptor_done;
 	dev->rx_descriptor_status = sfc_rx_descriptor_status;
 	dev->tx_descriptor_status = sfc_tx_descriptor_status;
 	dev->dev_ops = &sfc_eth_dev_secondary_ops;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index da1633d77e..c82089930f 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1894,7 +1894,6 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 	}

 	eth_dev->dev_ops = &virtio_eth_dev_ops;
-	eth_dev->rx_descriptor_done = virtio_dev_rx_queue_done;

 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		set_rxtx_funcs(eth_dev);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index b051eff70e..6b8e32a38a 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -593,7 +593,6 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 	eth_dev->tx_pkt_burst = NULL;
 	eth_dev->tx_pkt_prepare = NULL;
 	eth_dev->rx_queue_count = NULL;
-	eth_dev->rx_descriptor_done = NULL;
 	eth_dev->rx_descriptor_status = NULL;
 	eth_dev->tx_descriptor_status = NULL;
 	eth_dev->dev_ops = NULL;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 047f7c9c5a..992ca4ee0d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5128,31 +5128,6 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	return (int)(*p->rx_queue_count)(qd);
 }

-/**
- * Check if the DD bit of the specific RX descriptor in the queue has been set
- *
- * @param port_id
- *  The port identifier of the Ethernet device.
- * @param queue_id
- *  The queue id on the specific port.
- * @param offset
- *  The offset of the descriptor ID from tail.
- * @return
- *  - (1) if the specific DD bit is set.
- *  - (0) if the specific DD bit is not set.
- *  - (-ENODEV) if *port_id* invalid.
- *  - (-ENOTSUP) if the device does not support this function
- */
-__rte_deprecated
-static inline int
-rte_eth_rx_descriptor_done(uint16_t port_id, uint16_t queue_id, uint16_t offset)
-{
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_done, -ENOTSUP);
-	return (*dev->rx_descriptor_done)(dev->data->rx_queues[queue_id], offset);
-}
-
 /**@{@name Rx hardware descriptor states
  * @see rte_eth_rx_descriptor_status
  */
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index fe47a660c7..63078e1ef4 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -44,9 +44,6 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */

-typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-/**< @internal Check DD bit of specific RX descriptor */
-
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 /**< @internal Check the status of a Rx descriptor */

@@ -129,7 +126,6 @@ struct rte_eth_dev {
 	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
 	eth_rx_queue_count_t rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_done_t rx_descriptor_done; /**< Check rxd DD bit. */
 	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
 	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */

From patchwork Mon Oct 4 13:56:03 2021
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 100436
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Konstantin Ananyev <konstantin.ananyev@intel.com>
To: dev@dpdk.org
Date: Mon, 4 Oct 2021 14:56:03 +0100
Message-Id: <20211004135603.20593-8-konstantin.ananyev@intel.com>
In-Reply-To: <20211004135603.20593-1-konstantin.ananyev@intel.com>
References: <20211001140255.5726-1-konstantin.ananyev@intel.com> <20211004135603.20593-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into a private header (ethdev_driver.h).
A few minor changes were needed to keep DPDK building after that.
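With this patch, applications can no longer dereference `rte_eth_devices[]` or the `rte_eth_dev_data` fields directly; the information has to come through the public ethdev API instead. The sketch below is an illustrative migration (the function `print_port_info` is hypothetical, not part of this patch); `rte_eth_dev_info_get()`, `rte_eth_macaddr_get()` and `rte_eth_dev_socket_id()` are existing public APIs that remain available after the structures become private.

```c
#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical helper: fetch per-port data via public accessors
 * instead of reading rte_eth_devices[port_id].data directly. */
static int
print_port_info(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_ether_addr addr;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	ret = rte_eth_macaddr_get(port_id, &addr);
	if (ret != 0)
		return ret;

	printf("port %u: driver %s, max_rx_queues %u, socket %d\n",
	       port_id, dev_info.driver_name, dev_info.max_rx_queues,
	       rte_eth_dev_socket_id(port_id));
	return 0;
}
```

As the release note below states, well-behaved applications that already used only the public API need no changes at all.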
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
 lib/ethdev/version.map                        |   2 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 13 files changed, 165 insertions(+), 152 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 6055551443..2944149943 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -228,6 +228,12 @@ ABI Changes
   to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
   is used by public inline function ``rte_eth_rx_queue_count``.

+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can't be accessed directly
+  by users any more. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in user apps are required)
+  and PMD developers (no changes in PMDs are required).
+

 Known Issues
 ------------

diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@

 #include
 #include
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include

 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@

 #include
 #include
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include

 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include
 #include
 #include
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>

 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include
 #include
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include
 #include
 #include
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */

 #include
+#include <ethdev_driver.h>

 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..63b04dce32 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,155 @@

 #include

+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * Next two fields are per-device data but *data is shared between
+	 * primary and secondary processes and *process_private is per-process
+	 * private. The second one is managed by PMDs if necessary.
+	 */
+	struct rte_eth_dev_data *data; /**< Pointer to device data. */
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+	/**< PMD-specific private data.
+	 * @see rte_eth_dev_release_port()
+	 */
+
+	struct rte_eth_link dev_link; /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
+	uint16_t mtu;                 /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+	/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+	/**< Device Ethernet link address.
+	 * @see rte_eth_dev_release_port()
+	 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+	/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+	/**< Device Ethernet MAC addresses of hash filtering.
+	 * @see rte_eth_dev_release_port()
+	 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 * CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+	/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+	/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+	/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+	/**< Switch-specific identifier.
+	 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+	 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;

diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 63078e1ef4..2d07db0811 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -95,147 +95,4 @@ struct rte_eth_fp_ops {

 extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];

-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data; /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-	/**< PMD-specific private data.
-	 * @see rte_eth_dev_release_port()
-	 */
-
-	struct rte_eth_link dev_link; /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf; /**< Configuration applied to device. */
-	uint16_t mtu;                 /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-	/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-	/**< Device Ethernet link address.
-	 * @see rte_eth_dev_release_port()
-	 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-	/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-	/**< Device Ethernet MAC addresses of hash filtering.
-	 * @see rte_eth_dev_release_port()
-	 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1, /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1, /**< Device state: STARTED(1) / STOPPED(0). */
-		lro : 1, /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 * CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-	/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-	/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-	/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-	/**< Switch-specific identifier.
-	 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-	 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0881202381..3dc494a016 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -75,7 +75,6 @@ DPDK_22 {
 	rte_eth_dev_udp_tunnel_port_add;
 	rte_eth_dev_udp_tunnel_port_delete;
 	rte_eth_dev_vlan_filter;
-	rte_eth_devices;
 	rte_eth_find_next;
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
@@ -272,6 +271,7 @@ INTERNAL {
 	rte_eth_dev_release_port;
 	rte_eth_dev_internal_reset;
 	rte_eth_devargs_parse;
+	rte_eth_devices;
 	rte_eth_dma_zone_free;
 	rte_eth_dma_zone_reserve;
 	rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include
 #include
 #include
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include
 #include
 #include
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include
 #include
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>

 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include
 #include
 #include
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include
 #include
 #include
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
 * Copyright(c) 2020 Intel Corporation
 */

-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include
 #ifdef RTE_LIB_TELEMETRY
 #include
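The first patch in this series removes the deprecated `rte_eth_rx_descriptor_done()` from the public API. Applications still calling it can switch to `rte_eth_rx_descriptor_status()`, which is kept and returns `RTE_ETH_RX_DESC_AVAIL`, `RTE_ETH_RX_DESC_DONE` or `RTE_ETH_RX_DESC_UNAVAIL` (or a negative errno). The wrapper below is a hypothetical compatibility sketch, not part of the series; note that the two APIs may interpret `offset` slightly differently (the old API counted from the queue tail), so the mapping is approximate and should be verified against the driver in use.

```c
#include <rte_ethdev.h>

/* Hypothetical shim: approximate the semantics of the removed
 * rte_eth_rx_descriptor_done() (1 = descriptor filled by HW, 0 = not,
 * negative errno on failure) on top of rte_eth_rx_descriptor_status(). */
static int
rx_descriptor_done_compat(uint16_t port_id, uint16_t queue_id,
			  uint16_t offset)
{
	int ret = rte_eth_rx_descriptor_status(port_id, queue_id, offset);

	if (ret < 0)
		return ret; /* -ENOTSUP, -ENODEV or -EINVAL */

	/* DONE: filled by hardware and ready (or already processed);
	 * AVAIL/UNAVAIL: not yet completed by hardware. */
	return ret == RTE_ETH_RX_DESC_DONE;
}
```

Drivers that only implemented `rx_descriptor_done` (and not `rx_descriptor_status`) will report `-ENOTSUP` through this path, which callers should handle.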