[v5] net/gve: add Rx/Tx queue stats as extended stats

Message ID 20230221141814.13674-1-levendsayar@gmail.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Series: [v5] net/gve: add Rx/Tx queue stats as extended stats

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation warning apply patch failure
ci/Intel-compilation warning apply issues
ci/iol-testing warning apply patch failure

Commit Message

Levend Sayar Feb. 21, 2023, 2:18 p.m. UTC
  Google Virtual NIC rx/tx queue stats are added as extended stats.

Signed-off-by: Levend Sayar <levendsayar@gmail.com>
---
 drivers/net/gve/gve_ethdev.c | 137 +++++++++++++++++++++++++++++++----
 drivers/net/gve/gve_ethdev.h |  28 +++++--
 drivers/net/gve/gve_rx.c     |  12 +--
 drivers/net/gve/gve_tx.c     |  11 +--
 4 files changed, 157 insertions(+), 31 deletions(-)
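
For context, the new counters surface through the standard ethdev xstats API
rather than a driver-private interface. Below is a minimal sketch of reading
them from an application; the rte_eth_xstats_get_names()/rte_eth_xstats_get()
calls are standard DPDK, while port_id 0 and the helper name gve_dump_xstats
are illustrative assumptions, not part of this patch:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Sketch: enumerate and print all xstats of one port. With this patch,
 * gve exposes per-queue names such as "tx_q0_packets" and
 * "rx_q0_mbuf_alloc_errors_bulk". port_id 0 is an assumption. */
static void
gve_dump_xstats(uint16_t port_id)
{
	/* First call with NULL/0 returns the number of xstats. */
	int n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	struct rte_eth_xstat_name *names = malloc(n * sizeof(*names));
	struct rte_eth_xstat *values = malloc(n * sizeof(*values));

	if (names != NULL && values != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, values, n) == n) {
		for (int i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[values[i].id].name, values[i].value);
	}

	free(names);
	free(values);
}

The same counters can also be inspected interactively from testpmd with
"show port xstats <port_id>".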
  

Comments

Ferruh Yigit Feb. 21, 2023, 3:58 p.m. UTC | #1
On 2/21/2023 2:18 PM, Levend Sayar wrote:
> Google Virtual NIC rx/tx queue stats are added as extended stats.
> 
> Signed-off-by: Levend Sayar <levendsayar@gmail.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>

<...>

> @@ -20,6 +20,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>  	if (nb_alloc <= rxq->nb_avail) {
>  		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
>  		if (diag < 0) {
> +			rxq->stats.no_mbufs_bulk++;

It is not common to record bulk alloc failures, but as 'no_mbufs' is
already recorded conventionally, I guess it is OK to keep this extra
stat if it is helpful.
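
The two counters are complementary: 'no_mbufs_bulk' counts failed
bulk-allocation attempts, one per event, while 'no_mbufs' counts the
individual mbufs that the per-mbuf fallback still could not provide.
Condensed from the gve_rx_refill() hunk in this patch (the 'break' is
context elided from the diff below):

	diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
	if (diag < 0) {
		rxq->stats.no_mbufs_bulk++;	/* one failed bulk attempt */
		for (i = 0; i < nb_alloc; i++) {	/* per-mbuf fallback */
			nmb = rte_pktmbuf_alloc(rxq->mpool);
			if (!nmb)
				break;
			rxq->sw_ring[idx + i] = nmb;
		}
		if (i != nb_alloc) {	/* count mbufs still missing */
			rxq->stats.no_mbufs += nb_alloc - i;
			nb_alloc = i;
		}
	}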
  
Levend Sayar Feb. 21, 2023, 4:44 p.m. UTC | #2
Thanks Ferruh for the review.

  
Junfeng Guo Feb. 23, 2023, 2:49 a.m. UTC | #3
Thanks!

Acked-by: Junfeng Guo <junfeng.guo@intel.com>
  
Levend Sayar Feb. 23, 2023, 6:28 a.m. UTC | #4
Thanks Junfeng for acknowledging.

  
Ferruh Yigit Feb. 23, 2023, 11:09 a.m. UTC | #5

While merging, the 'gve_xstats_name_offset' structs were moved down to
group them with the xstats dev_ops.

Applied to dpdk-next-net/main, thanks.
  
Levend Sayar Feb. 23, 2023, 12:30 p.m. UTC | #6
Thanks Ferruh for applying.

  

Patch

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index fef2458a16..21f0c0fca2 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -6,9 +6,26 @@ 
 #include "base/gve_adminq.h"
 #include "base/gve_register.h"
 
+#define TX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_tx_stats, x)
+#define RX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_rx_stats, x)
+
 const char gve_version_str[] = GVE_VERSION;
 static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
 
+static const struct gve_xstats_name_offset tx_xstats_name_offset[] = {
+	{ "packets", TX_QUEUE_STATS_OFFSET(packets) },
+	{ "bytes",   TX_QUEUE_STATS_OFFSET(bytes) },
+	{ "errors",  TX_QUEUE_STATS_OFFSET(errors) },
+};
+
+static const struct gve_xstats_name_offset rx_xstats_name_offset[] = {
+	{ "packets",                RX_QUEUE_STATS_OFFSET(packets) },
+	{ "bytes",                  RX_QUEUE_STATS_OFFSET(bytes) },
+	{ "errors",                 RX_QUEUE_STATS_OFFSET(errors) },
+	{ "mbuf_alloc_errors",      RX_QUEUE_STATS_OFFSET(no_mbufs) },
+	{ "mbuf_alloc_errors_bulk", RX_QUEUE_STATS_OFFSET(no_mbufs_bulk) },
+};
+
 static void
 gve_write_version(uint8_t *driver_version_register)
 {
@@ -328,9 +345,9 @@  gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (txq == NULL)
 			continue;
 
-		stats->opackets += txq->packets;
-		stats->obytes += txq->bytes;
-		stats->oerrors += txq->errors;
+		stats->opackets += txq->stats.packets;
+		stats->obytes += txq->stats.bytes;
+		stats->oerrors += txq->stats.errors;
 	}
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -338,10 +355,10 @@  gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (rxq == NULL)
 			continue;
 
-		stats->ipackets += rxq->packets;
-		stats->ibytes += rxq->bytes;
-		stats->ierrors += rxq->errors;
-		stats->rx_nombuf += rxq->no_mbufs;
+		stats->ipackets += rxq->stats.packets;
+		stats->ibytes += rxq->stats.bytes;
+		stats->ierrors += rxq->stats.errors;
+		stats->rx_nombuf += rxq->stats.no_mbufs;
 	}
 
 	return 0;
@@ -357,9 +374,7 @@  gve_dev_stats_reset(struct rte_eth_dev *dev)
 		if (txq == NULL)
 			continue;
 
-		txq->packets  = 0;
-		txq->bytes = 0;
-		txq->errors = 0;
+		memset(&txq->stats, 0, sizeof(txq->stats));
 	}
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -367,10 +382,7 @@  gve_dev_stats_reset(struct rte_eth_dev *dev)
 		if (rxq == NULL)
 			continue;
 
-		rxq->packets  = 0;
-		rxq->bytes = 0;
-		rxq->errors = 0;
-		rxq->no_mbufs = 0;
+		memset(&rxq->stats, 0, sizeof(rxq->stats));
 	}
 
 	return 0;
@@ -403,6 +415,101 @@  gve_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	return 0;
 }
 
+static int
+gve_xstats_count(struct rte_eth_dev *dev)
+{
+	uint16_t i, count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (dev->data->tx_queues[i])
+			count += RTE_DIM(tx_xstats_name_offset);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (dev->data->rx_queues[i])
+			count += RTE_DIM(rx_xstats_name_offset);
+	}
+
+	return count;
+}
+
+static int
+gve_xstats_get(struct rte_eth_dev *dev,
+			struct rte_eth_xstat *xstats,
+			unsigned int size)
+{
+	uint16_t i, j, count = gve_xstats_count(dev);
+	const char *stats;
+
+	if (xstats == NULL || size < count)
+		return count;
+
+	count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		const struct gve_tx_queue *txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		stats = (const char *)&txq->stats;
+		for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++, count++) {
+			xstats[count].id = count;
+			xstats[count].value = *(const uint64_t *)
+				(stats + tx_xstats_name_offset[j].offset);
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		const struct gve_rx_queue *rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		stats = (const char *)&rxq->stats;
+		for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++, count++) {
+			xstats[count].id = count;
+			xstats[count].value = *(const uint64_t *)
+				(stats + rx_xstats_name_offset[j].offset);
+		}
+	}
+
+	return count;
+}
+
+static int
+gve_xstats_get_names(struct rte_eth_dev *dev,
+			struct rte_eth_xstat_name *xstats_names,
+			unsigned int size)
+{
+	uint16_t i, j, count = gve_xstats_count(dev);
+
+	if (xstats_names == NULL || size < count)
+		return count;
+
+	count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (dev->data->tx_queues[i] == NULL)
+			continue;
+
+		for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++)
+			snprintf(xstats_names[count++].name,
+				 RTE_ETH_XSTATS_NAME_SIZE,
+				 "tx_q%u_%s", i, tx_xstats_name_offset[j].name);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (dev->data->rx_queues[i] == NULL)
+			continue;
+
+		for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++)
+			snprintf(xstats_names[count++].name,
+				 RTE_ETH_XSTATS_NAME_SIZE,
+				 "rx_q%u_%s", i, rx_xstats_name_offset[j].name);
+	}
+
+	return count;
+}
+
 static const struct eth_dev_ops gve_eth_dev_ops = {
 	.dev_configure        = gve_dev_configure,
 	.dev_start            = gve_dev_start,
@@ -417,6 +524,8 @@  static const struct eth_dev_ops gve_eth_dev_ops = {
 	.stats_get            = gve_dev_stats_get,
 	.stats_reset          = gve_dev_stats_reset,
 	.mtu_set              = gve_dev_mtu_set,
+	.xstats_get           = gve_xstats_get,
+	.xstats_get_names     = gve_xstats_get_names,
 };
 
 static void
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 64e571bcae..42a02cf5d4 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -67,6 +67,25 @@  struct gve_tx_iovec {
 	uint32_t iov_len;
 };
 
+struct gve_tx_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+struct gve_rx_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+	uint64_t no_mbufs;
+	uint64_t no_mbufs_bulk;
+};
+
+struct gve_xstats_name_offset {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
 struct gve_tx_queue {
 	volatile union gve_tx_desc *tx_desc_ring;
 	const struct rte_memzone *mz;
@@ -93,9 +112,7 @@  struct gve_tx_queue {
 	struct gve_tx_iovec *iov_ring;
 
 	/* stats items */
-	uint64_t packets;
-	uint64_t bytes;
-	uint64_t errors;
+	struct gve_tx_stats stats;
 
 	uint16_t port_id;
 	uint16_t queue_id;
@@ -136,10 +153,7 @@  struct gve_rx_queue {
 	struct gve_queue_page_list *qpl;
 
 	/* stats items */
-	uint64_t packets;
-	uint64_t bytes;
-	uint64_t errors;
-	uint64_t no_mbufs;
+	struct gve_rx_stats stats;
 
 	struct gve_priv *hw;
 	const struct rte_memzone *qres_mz;
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index d346efa57c..8d8f94efff 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -20,6 +20,7 @@  gve_rx_refill(struct gve_rx_queue *rxq)
 	if (nb_alloc <= rxq->nb_avail) {
 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
 		if (diag < 0) {
+			rxq->stats.no_mbufs_bulk++;
 			for (i = 0; i < nb_alloc; i++) {
 				nmb = rte_pktmbuf_alloc(rxq->mpool);
 				if (!nmb)
@@ -27,7 +28,7 @@  gve_rx_refill(struct gve_rx_queue *rxq)
 				rxq->sw_ring[idx + i] = nmb;
 			}
 			if (i != nb_alloc) {
-				rxq->no_mbufs += nb_alloc - i;
+				rxq->stats.no_mbufs += nb_alloc - i;
 				nb_alloc = i;
 			}
 		}
@@ -55,6 +56,7 @@  gve_rx_refill(struct gve_rx_queue *rxq)
 			nb_alloc = rxq->nb_rx_desc - idx;
 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
 		if (diag < 0) {
+			rxq->stats.no_mbufs_bulk++;
 			for (i = 0; i < nb_alloc; i++) {
 				nmb = rte_pktmbuf_alloc(rxq->mpool);
 				if (!nmb)
@@ -62,7 +64,7 @@  gve_rx_refill(struct gve_rx_queue *rxq)
 				rxq->sw_ring[idx + i] = nmb;
 			}
 			if (i != nb_alloc) {
-				rxq->no_mbufs += nb_alloc - i;
+				rxq->stats.no_mbufs += nb_alloc - i;
 				nb_alloc = i;
 			}
 		}
@@ -106,7 +108,7 @@  gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 
 		if (rxd->flags_seq & GVE_RXF_ERR) {
-			rxq->errors++;
+			rxq->stats.errors++;
 			continue;
 		}
 
@@ -154,8 +156,8 @@  gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		gve_rx_refill(rxq);
 
 	if (nb_rx) {
-		rxq->packets += nb_rx;
-		rxq->bytes += bytes;
+		rxq->stats.packets += nb_rx;
+		rxq->stats.bytes += bytes;
 	}
 
 	return nb_rx;
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index 9b41c59358..fee3b939c7 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -366,9 +366,9 @@  gve_tx_burst_qpl(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		txq->tx_tail = tx_tail;
 		txq->sw_tail = sw_id;
 
-		txq->packets += nb_tx;
-		txq->bytes += bytes;
-		txq->errors += nb_pkts - nb_tx;
+		txq->stats.packets += nb_tx;
+		txq->stats.bytes += bytes;
+		txq->stats.errors += nb_pkts - nb_tx;
 	}
 
 	return nb_tx;
@@ -455,8 +455,9 @@  gve_tx_burst_ra(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		rte_write32(rte_cpu_to_be_32(tx_tail), txq->qtx_tail);
 		txq->tx_tail = tx_tail;
 
-		txq->packets += nb_tx;
-		txq->bytes += bytes;
+		txq->stats.packets += nb_tx;
+		txq->stats.bytes += bytes;
+		txq->stats.errors += nb_pkts - nb_tx;
 	}
 
 	return nb_tx;
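
As a usage note, with one Tx and one Rx queue configured, the name tables
and the "tx_q%u_%s"/"rx_q%u_%s" formats in gve_ethdev.c above yield the
following xstats names (derived from the patch, not separately verified
against a live device):

	tx_q0_packets
	tx_q0_bytes
	tx_q0_errors
	rx_q0_packets
	rx_q0_bytes
	rx_q0_errors
	rx_q0_mbuf_alloc_errors
	rx_q0_mbuf_alloc_errors_bulk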