From patchwork Mon Oct 12 16:46:00 2020
X-Patchwork-Id: 80390
From: Ferruh Yigit
To: Ferruh Yigit, Thomas Monjalon, Andrew Rybchenko
Cc: dev@dpdk.org, techboard@dpdk.org, Min Hu
Date: Mon, 12 Oct 2020 17:46:00 +0100
Message-Id: <20201012164602.1965694-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [RFC 1/2] ethdev: move queue stats to xstats

Queue stats are stored in 'struct rte_eth_stats' as arrays whose size is
set by the 'RTE_ETHDEV_QUEUE_STAT_CNTRS' compile-time flag.

Following the technical board discussion, the decision is to remove the
queue statistics from 'struct rte_eth_stats' in the long term. Instead,
PMDs should expose the queue statistics via xstats, which gives more
flexibility in the number of queues supported.

Currently the queue stats in the xstats are filled by the ethdev layer
from the basic stats; once queue stats are removed from the basic stats,
the responsibility to fill the relevant xstats will be pushed to the
PMDs.

For the transition period, a temporary 'RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS'
device flag is created. PMDs that provide the queue stats via xstats
should set this flag to bypass the relevant part of the ethdev layer.

When all PMDs have switched to xstats for the queue stats, the queue
stats related fields will be removed from 'struct rte_eth_stats', along
with the 'RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS' flag. Later, the
'RTE_ETHDEV_QUEUE_STAT_CNTRS' compile-time flag can also be removed.
Signed-off-by: Ferruh Yigit
---
 lib/librte_ethdev/rte_ethdev.c | 19 +++++++++++++++----
 lib/librte_ethdev/rte_ethdev.h |  2 ++
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 892c246a53..66665ad4ee 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -2446,8 +2446,10 @@ get_xstats_basic_count(struct rte_eth_dev *dev)
 	nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
 
 	count = RTE_NB_STATS;
-	count += nb_rxqs * RTE_NB_RXQ_STATS;
-	count += nb_txqs * RTE_NB_TXQ_STATS;
+	if ((dev->data->dev_flags & RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS) == 0) {
+		count += nb_rxqs * RTE_NB_RXQ_STATS;
+		count += nb_txqs * RTE_NB_TXQ_STATS;
+	}
 
 	return count;
 }
@@ -2538,6 +2540,10 @@ rte_eth_basic_stats_get_names(struct rte_eth_dev *dev,
 			sizeof(xstats_names[0].name));
 		cnt_used_entries++;
 	}
+
+	if (dev->data->dev_flags & RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS)
+		return cnt_used_entries;
+
 	num_q = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
 	for (id_queue = 0; id_queue < num_q; id_queue++) {
 		for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
@@ -2736,6 +2742,9 @@ rte_eth_basic_stats_get(uint16_t port_id, struct rte_eth_xstat *xstats)
 		xstats[count++].value = val;
 	}
 
+	if (dev->data->dev_flags & RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS)
+		return count;
+
 	/* per-rxq stats */
 	for (q = 0; q < nb_rxqs; q++) {
 		for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
@@ -2871,8 +2880,10 @@ rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats,
 	nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
 
 	/* Return generic statistics */
-	count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
-		(nb_txqs * RTE_NB_TXQ_STATS);
+	count = RTE_NB_STATS;
+	if ((dev->data->dev_flags & RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS) == 0)
+		count += (nb_rxqs * RTE_NB_RXQ_STATS) +
+			(nb_txqs * RTE_NB_TXQ_STATS);
 
 	/* implemented by the driver */
 	if (dev->dev_ops->xstats_get != NULL) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 5bcfbb8b04..42c83303e1 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1694,6 +1694,8 @@ struct rte_eth_dev_owner {
 #define RTE_ETH_DEV_REPRESENTOR 0x0010
 /** Device does not support MAC change after started */
 #define RTE_ETH_DEV_NOLIVE_MAC_ADDR 0x0020
+/** Device provides queue stats in xstats */
+#define RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS 0x0040
 
 /**
  * Iterates over valid ethdev ports owned by a specific owner.
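
For a driver that opts in, the change amounts to setting the new flag at
init time and emitting its own per-queue entries from its xstats
callbacks. The sketch below is not part of this patch: the 'mypmd_*'
helpers, 'struct mypmd_rx_queue' and its 'rx_pkts' counter are made-up
driver internals, and only Rx packet counts are shown; it is only meant
to illustrate the intended PMD-side use of
'RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS'.

#include <stdio.h>
#include <rte_ethdev_driver.h>  /* driver-facing ethdev structures */

/* Hypothetical per-queue private data kept by the driver. */
struct mypmd_rx_queue {
        uint64_t rx_pkts;
};

/* At probe/init time, tell ethdev that the driver fills queue xstats itself. */
static int
mypmd_dev_init(struct rte_eth_dev *eth_dev)
{
        eth_dev->data->dev_flags |= RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS;
        /* ... rest of the usual device init ... */
        return 0;
}

/* Per-queue xstat names, one entry per Rx queue (Tx handled the same way). */
static int
mypmd_xstats_get_names(struct rte_eth_dev *dev,
                       struct rte_eth_xstat_name *names, unsigned int size)
{
        unsigned int count = dev->data->nb_rx_queues;
        unsigned int i;

        if (names == NULL || size < count)
                return count;  /* tell the caller how much room is needed */

        for (i = 0; i < count; i++)
                snprintf(names[i].name, sizeof(names[i].name),
                         "rx_q%u_packets", i);
        return count;
}

/* Matching values, read from the driver's own per-queue counters. */
static int
mypmd_xstats_get(struct rte_eth_dev *dev,
                 struct rte_eth_xstat *xstats, unsigned int n)
{
        unsigned int count = dev->data->nb_rx_queues;
        unsigned int i;

        if (xstats == NULL || n < count)
                return count;

        for (i = 0; i < count; i++) {
                struct mypmd_rx_queue *rxq = dev->data->rx_queues[i];

                xstats[i].id = i;
                xstats[i].value = rxq->rx_pkts;
        }
        return count;
}

Both callbacks would be wired into the driver's 'eth_dev_ops' as usual;
Tx queues and additional counters (bytes, errors) follow the same
pattern.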
From patchwork Mon Oct 12 16:46:01 2020
X-Patchwork-Id: 80391
From: Ferruh Yigit
To: Ferruh Yigit, Bruce Richardson, Ray Kinsella, Neil Horman, Thomas Monjalon, Andrew Rybchenko
Cc: dev@dpdk.org, techboard@dpdk.org, Min Hu
Date: Mon, 12 Oct 2020 17:46:01 +0100
Message-Id: <20201012164602.1965694-2-ferruh.yigit@intel.com>
In-Reply-To: <20201012164602.1965694-1-ferruh.yigit@intel.com>
References: <20201012164602.1965694-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [RFC 2/2] doc: announce queue stats moving to xstats

Queue stats will be moved from the basic stats to xstats. It will be
the PMDs' responsibility to fill the queue xstats according to the
number of queues they have.

Until all PMDs implement this, a temporary
'RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS' device flag is created. PMDs that
have switched to xstats should set this flag to bypass the ethdev-layer
queue stats handling.
Signed-off-by: Ferruh Yigit
---
 config/rte_config.h                  | 2 +-
 doc/guides/rel_notes/deprecation.rst | 7 +++++++
 lib/librte_ethdev/rte_ethdev.h       | 3 +++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 03d90d78bc..9ef3b75940 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -55,7 +55,7 @@
 
 /* ether defines */
 #define RTE_MAX_QUEUES_PER_PORT 1024
-#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16
+#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16 /* max 256 */
 #define RTE_ETHDEV_RXTX_CALLBACKS 1
 
 /* cryptodev defines */
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 584e720879..9143cfc529 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -164,6 +164,13 @@ Deprecation Notices
   following the IPv6 header, as proposed in RFC
   https://mails.dpdk.org/archives/dev/2020-August/177257.html.
 
+* ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
+  Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
+  ``q_errors``.
+  Instead queue stats will be received via xstats API. Current method support
+  will be limited to maximum 256 queues.
+  Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
+
 * security: The API ``rte_security_session_create`` takes only single mempool
   for session and session private data. So the application need to create
   mempool for twice the number of sessions needed and will also lead to
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 42c83303e1..160481c747 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -253,6 +253,7 @@ struct rte_eth_stats {
 	uint64_t ierrors;   /**< Total number of erroneous received packets. */
 	uint64_t oerrors;   /**< Total number of failed transmitted packets. */
 	uint64_t rx_nombuf; /**< Total number of RX mbuf allocation failures. */
+	/* Queue stats are limited to max 256 queues */
 	uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 	/**< Total number of queue RX packets. */
 	uint64_t q_opackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
@@ -2701,6 +2702,7 @@ int rte_eth_xstats_reset(uint16_t port_id);
  *   The per-queue packet statistics functionality number that the transmit
  *   queue is to be assigned.
  *   The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1].
+ *   Max RTE_ETHDEV_QUEUE_STAT_CNTRS being 256.
  * @return
  *   Zero if successful. Non-zero otherwise.
  */
@@ -2721,6 +2723,7 @@ int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id,
  *   The per-queue packet statistics functionality number that the receive
  *   queue is to be assigned.
  *   The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1].
+ *   Max RTE_ETHDEV_QUEUE_STAT_CNTRS being 256.
  * @return
  *   Zero if successful. Non-zero otherwise.
  */
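
On the application side, once the 'q_*' arrays are gone, per-queue
counters are read through the existing xstats API instead of
'rte_eth_stats_get()'. A minimal sketch, assuming the per-queue entries
keep names of the form 'rx_qN_*' / 'tx_qN_*' as generated by the ethdev
layer today; the helper name and the simple prefix filter are
illustrative only.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <rte_ethdev.h>

/* Dump every per-queue Rx/Tx counter exposed through xstats. */
static int
print_queue_xstats(uint16_t port_id)
{
        struct rte_eth_xstat_name *names;
        struct rte_eth_xstat *values;
        int count, i;

        /* First call with NULL to learn how many entries exist. */
        count = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (count <= 0)
                return count;

        names = calloc(count, sizeof(*names));
        values = calloc(count, sizeof(*values));
        if (names == NULL || values == NULL)
                goto err;

        if (rte_eth_xstats_get_names(port_id, names, count) != count ||
            rte_eth_xstats_get(port_id, values, count) != count)
                goto err;

        for (i = 0; i < count; i++) {
                const char *name = names[values[i].id].name;

                if (strncmp(name, "rx_q", 4) == 0 ||
                    strncmp(name, "tx_q", 4) == 0)
                        printf("%s: %" PRIu64 "\n", name, values[i].value);
        }

        free(names);
        free(values);
        return 0;

err:
        free(names);
        free(values);
        return -1;
}

The same loop works whether the per-queue entries come from the ethdev
layer (as today) or from the PMD's own xstats callbacks (after the
switch), which is the point of moving queue stats to xstats.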