[RFC,2/2] doc: announce queue stats moving to xstats
Commit Message
Queue stats will be moved from basic stats to xstats.

It will be the PMDs' responsibility to fill queue stats based on the
number of queues they have.

Until all PMDs implement the xstats, a temporary
'RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS' device flag is created. PMDs that
have switched to the xstats should set this flag to bypass the ethdev
layer for queue stats.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
config/rte_config.h | 2 +-
doc/guides/rel_notes/deprecation.rst | 7 +++++++
lib/librte_ethdev/rte_ethdev.h | 3 +++
3 files changed, 11 insertions(+), 1 deletion(-)
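To illustrate the driver-side change described in the commit message, a minimal sketch of how a converted PMD would set the proposed flag (the init function name is hypothetical; the flag name is the one proposed in this RFC):

#include <rte_ethdev.h>

/* Hypothetical PMD init path: a driver that already reports its
 * per-queue counters via xstats sets the proposed flag, telling the
 * ethdev layer not to fill the q_* arrays in struct rte_eth_stats. */
static int
mypmd_eth_dev_init(struct rte_eth_dev *eth_dev)
{
	eth_dev->data->dev_flags |= RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS;
	return 0;
}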
Comments
On Mon, 12 Oct 2020 17:46:01 +0100
Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> Queue stats will be moved from basic stats to xstats.
>
> It will be the PMDs' responsibility to fill queue stats based on the
> number of queues they have.
>
> Until all PMDs implement the xstats, a temporary
> 'RTE_ETH_DEV_QUEUE_STATS_IN_XSTATS' device flag is created. PMDs that
> have switched to the xstats should set this flag to bypass the ethdev
> layer for queue stats.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Maybe we need to have better APIs to query per-queue information?
For example, there is no way to query whether a queue is started or not.
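To make the suggestion concrete, a hypothetical sketch of what such a per-queue query could look like (both prototypes below are invented for illustration, not existing DPDK API and not part of this patch):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical API sketch -- not part of DPDK or of this patch.
 * Something like this would let applications query per-queue state
 * instead of inferring it indirectly. */
int rte_eth_rx_queue_is_started(uint16_t port_id, uint16_t queue_id,
				bool *started);
int rte_eth_tx_queue_is_started(uint16_t port_id, uint16_t queue_id,
				bool *started);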
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -55,7 +55,7 @@
/* ether defines */
#define RTE_MAX_QUEUES_PER_PORT 1024
-#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16
+#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16 /* max 256 */
#define RTE_ETHDEV_RXTX_CALLBACKS 1
/* cryptodev defines */
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -164,6 +164,13 @@ Deprecation Notices
following the IPv6 header, as proposed in RFC
https://mails.dpdk.org/archives/dev/2020-August/177257.html.
+* ethdev: Queue specific stats fields will be removed from ``struct rte_eth_stats``.
+  Mentioned fields are: ``q_ipackets``, ``q_opackets``, ``q_ibytes``, ``q_obytes``,
+  ``q_errors``.
+  Instead, queue stats will be received via the xstats API. Support for the
+  current method will be limited to a maximum of 256 queues.
+  Also the compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
+
* security: The API ``rte_security_session_create`` takes only single mempool
for session and session private data. So the application need to create
mempool for twice the number of sessions needed and will also lead to
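Once queue stats are available only through xstats, an application would retrieve them roughly as below. This is a sketch: the per-queue xstats name format ("rx_q<N>..."/"tx_q<N>...", e.g. "rx_q0packets") is assumed here from the ethdev naming convention.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

/* Sketch: print per-queue counters through the xstats API. */
static void
print_queue_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *xstats = NULL;
	int i, n;

	/* A NULL array queries the number of available xstats. */
	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = malloc(n * sizeof(*names));
	xstats = malloc(n * sizeof(*xstats));
	if (names == NULL || xstats == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
	    rte_eth_xstats_get(port_id, xstats, n) != n)
		goto out;

	for (i = 0; i < n; i++) {
		/* Assumption: per-queue counters are named "rx_q<N>..."
		 * or "tx_q<N>...", e.g. "rx_q0packets". */
		if (strncmp(names[i].name, "rx_q", 4) == 0 ||
		    strncmp(names[i].name, "tx_q", 4) == 0)
			printf("%s: %" PRIu64 "\n",
			       names[i].name, xstats[i].value);
	}
out:
	free(names);
	free(xstats);
}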
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -253,6 +253,7 @@ struct rte_eth_stats {
uint64_t ierrors; /**< Total number of erroneous received packets. */
uint64_t oerrors; /**< Total number of failed transmitted packets. */
uint64_t rx_nombuf; /**< Total number of RX mbuf allocation failures. */
+ /* Queue stats are limited to max 256 queues */
uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
/**< Total number of queue RX packets. */
uint64_t q_opackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
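For contrast, this is how the fields being deprecated are consumed today, via the existing basic stats API (a minimal sketch):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: the current (to-be-deprecated) per-queue counters, capped
 * at RTE_ETHDEV_QUEUE_STAT_CNTRS entries regardless of queue count. */
static void
print_basic_queue_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;
	int i;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++)
		printf("q%d: rx %" PRIu64 " tx %" PRIu64 "\n",
		       i, stats.q_ipackets[i], stats.q_opackets[i]);
}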
@@ -2701,6 +2702,7 @@ int rte_eth_xstats_reset(uint16_t port_id);
* The per-queue packet statistics functionality number that the transmit
* queue is to be assigned.
* The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1].
+ * RTE_ETHDEV_QUEUE_STAT_CNTRS can be at most 256.
* @return
* Zero if successful. Non-zero otherwise.
*/
@@ -2721,6 +2723,7 @@ int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id,
* The per-queue packet statistics functionality number that the receive
* queue is to be assigned.
* The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1].
+ * RTE_ETHDEV_QUEUE_STAT_CNTRS can be at most 256.
* @return
* Zero if successful. Non-zero otherwise.
*/
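For completeness, the mapping functions documented in the two hunks above are used like this today (a sketch; only PMDs that implement the mapping honour it):

#include <rte_ethdev.h>

/* Sketch: map RX/TX queue 3 of a port onto per-queue stat counter 1,
 * so its packets show up in q_ipackets[1]/q_opackets[1].
 * stat_idx must be < RTE_ETHDEV_QUEUE_STAT_CNTRS (at most 256). */
static int
map_queue_stats(uint16_t port_id)
{
	int ret;

	ret = rte_eth_dev_set_rx_queue_stats_mapping(port_id, 3, 1);
	if (ret != 0)
		return ret;

	return rte_eth_dev_set_tx_queue_stats_mapping(port_id, 3, 1);
}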