net/gve: add support for basic stats

Message ID 20221124073335.3985214-1-junfeng.guo@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Headers
Series net/gve: add support for basic stats |

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/iol-broadcom-Functional fail Functional Testing issues
ci/Intel-compilation success Compilation OK
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/intel-Testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/github-robot: build success github build: passed
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS

Commit Message

Junfeng Guo Nov. 24, 2022, 7:33 a.m. UTC
  Add support for dev_ops of stats_get and stats_reset.

Queue stats updates will be moved into xstats [1], but the basic stats
items may still be required. So we just keep the remaining ones and
will implement the queue stats via xstats in a coming release.

[1]
https://elixir.bootlin.com/dpdk/v22.07/ \
	source/doc/guides/rel_notes/deprecation.rst#L118

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
 doc/guides/nics/features/gve.ini |  1 +
 drivers/net/gve/gve_ethdev.c     | 60 ++++++++++++++++++++++++++++++++
 drivers/net/gve/gve_ethdev.h     | 11 ++++++
 drivers/net/gve/gve_rx.c         | 15 ++++++--
 drivers/net/gve/gve_tx.c         | 13 +++++++
 5 files changed, 98 insertions(+), 2 deletions(-)
  

Comments

Stephen Hemminger Nov. 24, 2022, 4:59 p.m. UTC | #1
On Thu, 24 Nov 2022 15:33:35 +0800
Junfeng Guo <junfeng.guo@intel.com> wrote:

> +static int
> +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> +{
> +	uint16_t i;
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		struct gve_tx_queue *txq = dev->data->tx_queues[i];
> +		if (txq == NULL)
> +			continue;
> +
> +		stats->opackets += txq->packets;
> +		stats->obytes += txq->bytes;
> +		stats->oerrors += txq->errors;
> +	}
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		struct gve_rx_queue *rxq = dev->data->rx_queues[i];
> +		if (rxq == NULL)
> +			continue;
> +
> +		stats->ipackets += rxq->packets;
> +		stats->ibytes += rxq->bytes;
> +		stats->ierrors += rxq->errors;
> +		stats->rx_nombuf += rxq->no_mbufs;
> +	}
> +
> +	return 0;
> +}
> +

The driver should be filling in the per-queue stats as well.
q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors
  
Ferruh Yigit Nov. 24, 2022, 5:26 p.m. UTC | #2
On 11/24/2022 4:59 PM, Stephen Hemminger wrote:
> On Thu, 24 Nov 2022 15:33:35 +0800
> Junfeng Guo <junfeng.guo@intel.com> wrote:
> 
>> +static int
>> +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>> +{
>> +	uint16_t i;
>> +
>> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
>> +		struct gve_tx_queue *txq = dev->data->tx_queues[i];
>> +		if (txq == NULL)
>> +			continue;
>> +
>> +		stats->opackets += txq->packets;
>> +		stats->obytes += txq->bytes;
>> +		stats->oerrors += txq->errors;
>> +	}
>> +
>> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
>> +		struct gve_rx_queue *rxq = dev->data->rx_queues[i];
>> +		if (rxq == NULL)
>> +			continue;
>> +
>> +		stats->ipackets += rxq->packets;
>> +		stats->ibytes += rxq->bytes;
>> +		stats->ierrors += rxq->errors;
>> +		stats->rx_nombuf += rxq->no_mbufs;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
> 
> The driver should be filling in the per-queue stats as well.
> q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors


Hi Stephen,

Queue stats have moved to xstats, and there is a long-term goal to move
all queue stats from basic stats to xstats for all PMDs and to remove the
interim 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag.

That is why the request to new PMDs is to not introduce queue stats in
basic stats, but in xstats.
  
Rushil Gupta Nov. 26, 2022, 3:16 a.m. UTC | #3
Makes sense.




On Thu, Nov 24, 2022 at 9:26 AM Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 11/24/2022 4:59 PM, Stephen Hemminger wrote:
> > On Thu, 24 Nov 2022 15:33:35 +0800
> > Junfeng Guo <junfeng.guo@intel.com> wrote:
> >
> >> +static int
> >> +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> >> +{
> >> +    uint16_t i;
> >> +
> >> +    for (i = 0; i < dev->data->nb_tx_queues; i++) {
> >> +            struct gve_tx_queue *txq = dev->data->tx_queues[i];
> >> +            if (txq == NULL)
> >> +                    continue;
> >> +
> >> +            stats->opackets += txq->packets;
> >> +            stats->obytes += txq->bytes;
> >> +            stats->oerrors += txq->errors;
> >> +    }
> >> +
> >> +    for (i = 0; i < dev->data->nb_rx_queues; i++) {
> >> +            struct gve_rx_queue *rxq = dev->data->rx_queues[i];
> >> +            if (rxq == NULL)
> >> +                    continue;
> >> +
> >> +            stats->ipackets += rxq->packets;
> >> +            stats->ibytes += rxq->bytes;
> >> +            stats->ierrors += rxq->errors;
> >> +            stats->rx_nombuf += rxq->no_mbufs;
> >> +    }
> >> +
> >> +    return 0;
> >> +}
> >> +
> >
> > The driver should be filling in the per-queue stats as well.
> > q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors
>
>
> Hi Stephen,
>
> Queue stats moved to xstats, and there is a long term goal to move all
> queue stats from basic stats to xstats for all PMDs, and remove interim
> 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag.
>
> That is why request to new PMDs is to not introduce queue stats in basic
> stats, but in xstats.
  
Stephen Hemminger Nov. 26, 2022, 5:34 p.m. UTC | #4
On Fri, 25 Nov 2022 19:16:00 -0800
Rushil Gupta <rushilg@google.com> wrote:

> > >
> > > The driver should be filling in the per-queue stats as well.
> > > q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors  
> >
> >
> > Hi Stephen,
> >
> > Queue stats moved to xstats, and there is a long term goal to move all
> > queue stats from basic stats to xstats for all PMDs, and remove interim
> > 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag.
> >
> > That is why request to new PMDs is to not introduce queue stats in basic
> > stats, but in xstats.

Agree that xstats are better, but:

* the current checked-in version of GVE does not have a driver op for xstats

* if the driver fills in the q_XXX[] stats, then eth_dev_get_xstats_basic will
  be able to use them.

The per-queue stats are limited to 256 queues and bloat the info structure,
but until an API-breaking change is made to remove them, drivers should
support them.
  
Ferruh Yigit Nov. 28, 2022, 11:12 a.m. UTC | #5
On 11/26/2022 5:34 PM, Stephen Hemminger wrote:
> On Fri, 25 Nov 2022 19:16:00 -0800
> Rushil Gupta <rushilg@google.com> wrote:
> 
>>>>
>>>> The driver should be filling in the per-queue stats as well.
>>>> q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors  
>>>
>>>
>>> Hi Stephen,
>>>
>>> Queue stats moved to xstats, and there is a long term goal to move all
>>> queue stats from basic stats to xstats for all PMDs, and remove interim
>>> 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag.
>>>
>>> That is why request to new PMDs is to not introduce queue stats in basic
>>> stats, but in xstats.
> 
> Agree that xstats are better but:
> 
> * the current checked in version of GVE does not have driver op for xstats
> 
> * if driver fills in the q_XXX[] stats then eth_dev_get_xstats_basic will be
>   able to use it.
> 

That is an option, but it would require the driver to add the
'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag, which we want to get rid of in
the long term.
That is why we are requesting that new drivers add an xstats
implementation instead of adding queue support in basic stats. It is up
to the PMD to provide an xstats implementation if it wants queue stats.


> The per queue stats are limited to 256 queues and bloats the info structure,
> but until an API breaking change is made to remove them, drivers should support
> it.
  
Stephen Hemminger Nov. 28, 2022, 5:24 p.m. UTC | #6
On Mon, 28 Nov 2022 11:12:38 +0000
Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 11/26/2022 5:34 PM, Stephen Hemminger wrote:
> > On Fri, 25 Nov 2022 19:16:00 -0800
> > Rushil Gupta <rushilg@google.com> wrote:
> >   
> >>>>
> >>>> The driver should be filling in the per-queue stats as well.
> >>>> q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors    
> >>>
> >>>
> >>> Hi Stephen,
> >>>
> >>> Queue stats moved to xstats, and there is a long term goal to move all
> >>> queue stats from basic stats to xstats for all PMDs, and remove interim
> >>> 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag.
> >>>
> >>> That is why request to new PMDs is to not introduce queue stats in basic
> >>> stats, but in xstats.  
> > 
> > Agree that xstats are better but:
> > 
> > * the current checked in version of GVE does not have driver op for xstats
> > 
> > * if driver fills in the q_XXX[] stats then eth_dev_get_xstats_basic will be
> >   able to use it.
> >   
> 
> That is an option, but it will require driver to add
> 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag what we want to get rid of in
> long term.
> That is why we are requesting new driver to add the xstats
> implementation instead of adding queue support in basic stats. It is up
> to PMD to provide xstats implementation if they want queue stats.

Agreed, having xstats in the driver is the best way.
Doing what virtio does now would be a good example.
  
Junfeng Guo Nov. 29, 2022, 1:42 a.m. UTC | #7
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, November 29, 2022 01:24
> To: Ferruh Yigit <ferruh.yigit@amd.com>
> Cc: Rushil Gupta <rushilg@google.com>; Guo, Junfeng
> <junfeng.guo@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu,
> Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> dev@dpdk.org; jeroendb@google.com; jrkim@google.com; Li, Xiaoyun
> <xiaoyun.li@intel.com>
> Subject: Re: [PATCH] net/gve: add support for basic stats
> 
> On Mon, 28 Nov 2022 11:12:38 +0000
> Ferruh Yigit <ferruh.yigit@amd.com> wrote:
> 
> > On 11/26/2022 5:34 PM, Stephen Hemminger wrote:
> > > On Fri, 25 Nov 2022 19:16:00 -0800
> > > Rushil Gupta <rushilg@google.com> wrote:
> > >
> > >>>>
> > >>>> The driver should be filling in the per-queue stats as well.
> > >>>> q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors
> > >>>
> > >>>
> > >>> Hi Stephen,
> > >>>
> > >>> Queue stats moved to xstats, and there is a long term goal to move
> all
> > >>> queue stats from basic stats to xstats for all PMDs, and remove
> interim
> > >>> 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag.
> > >>>
> > >>> That is why request to new PMDs is to not introduce queue stats in
> basic
> > >>> stats, but in xstats.
> > >
> > > Agree that xstats are better but:
> > >
> > > * the current checked in version of GVE does not have driver op for
> xstats
> > >
> > > * if driver fills in the q_XXX[] stats then eth_dev_get_xstats_basic will
> be
> > >   able to use it.
> > >
> >
> > That is an option, but it will require driver to add
> > 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag what we want to get
> rid of in
> > long term.
> > That is why we are requesting new driver to add the xstats
> > implementation instead of adding queue support in basic stats. It is up
> > to PMD to provide xstats implementation if they want queue stats.
> 
> Agreed, having xstats in driver is the best way.
> Doing what virtio does now would be a good example.

Sure, the xstats implementation is planned to be added in a coming release.
Currently the gve PMD only has some basic features supported, and the rest
of the features will be enabled gradually (possibly co-developed with the
Google team). Thanks Stephen and Ferruh for discussing this feature and
giving the detailed explanations and concerns.

>
  
Ferruh Yigit Dec. 7, 2022, 3:09 p.m. UTC | #8
On 11/24/2022 7:33 AM, Junfeng Guo wrote:
> Add support for dev_ops of stats_get and stats_reset.
> 
> Queue stats update will be moved into xstat [1], but the basic stats
> items may still be required. So we just keep the remaining ones and
> will implement the queue stats via xstats in the coming release.
> 
> [1]
> https://elixir.bootlin.com/dpdk/v22.07/ \
> 	source/doc/guides/rel_notes/deprecation.rst#L118
> 
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>

<...>

> +static int
> +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> +{
> +	uint16_t i;
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		struct gve_tx_queue *txq = dev->data->tx_queues[i];
> +		if (txq == NULL)
> +			continue;
> +
> +		stats->opackets += txq->packets;
> +		stats->obytes += txq->bytes;
> +		stats->oerrors += txq->errors;

Hi Junfeng, Qi, Jingjing, Beilei,

The above logic looks wrong to me; did you test it?

If 'gve_dev_stats_get()' is called multiple times (without a stats reset
in between), the same values will keep being added to the stats.
Some HW-based implementations do this, because reading the stats
registers automatically resets them, but that shouldn't be the case for
this driver.

I expect it to be something like:

local_stats = 0
foreach queue
	local_stats += queue->stats
stats = local_stats
  
Stephen Hemminger Dec. 7, 2022, 4:39 p.m. UTC | #9
On Wed, 7 Dec 2022 15:09:08 +0000
Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 11/24/2022 7:33 AM, Junfeng Guo wrote:
> > Add support for dev_ops of stats_get and stats_reset.
> > 
> > Queue stats update will be moved into xstat [1], but the basic stats
> > items may still be required. So we just keep the remaining ones and
> > will implement the queue stats via xstats in the coming release.
> > 
> > [1]
> > https://elixir.bootlin.com/dpdk/v22.07/ \
> > 	source/doc/guides/rel_notes/deprecation.rst#L118
> > 
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>  
> 
> <...>
> 
> > +static int
> > +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> > +{
> > +	uint16_t i;
> > +
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		struct gve_tx_queue *txq = dev->data->tx_queues[i];
> > +		if (txq == NULL)
> > +			continue;
> > +
> > +		stats->opackets += txq->packets;
> > +		stats->obytes += txq->bytes;
> > +		stats->oerrors += txq->errors;  
> 
> Hi Junfeng, Qi, Jingjing, Beilei,
> 
> Above logic looks wrong to me, did you test it?
> 
> If the 'gve_dev_stats_get()' called multiple times (without stats reset
> in between), same values will be keep added to stats.
> Some hw based implementations does this, because reading from stats
> registers automatically reset those registers but this shouldn't be case
> for this driver.
> 
> I expect it to be something like:
> 
> local_stats = 0
> foreach queue
> 	local_stats += queue->stats
> stats = local_stats

The zeroing of local_stats is unnecessary.
The only caller of the PMD stats_get op is rte_eth_stats_get(),
and it zeros the stats structure before calling the PMD.


int
rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
{
	struct rte_eth_dev *dev;

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	dev = &rte_eth_devices[port_id];

	memset(stats, 0, sizeof(*stats));
...
	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
	return eth_err(port_id, (*dev->dev_ops->stats_get)(dev, stats));

If any PMD has extra memset in their stats get that could be removed.
  
Rushil Gupta Dec. 19, 2022, 7:17 p.m. UTC | #10
Hi all
Josh just found some inconsistencies in the Tx/Rx statistics sums
across all ports. Not sure if we can screenshot here, but it goes like
this:
Tx-dropped for port0: 447034
Tx-dropped for port1: 0
Accumulated forward statistics for all ports: 807630

Please note that this issue is only with Tx-dropped (not Tx-packets/Tx-total).


On Wed, Dec 7, 2022 at 8:39 AM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Wed, 7 Dec 2022 15:09:08 +0000
> Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> > On 11/24/2022 7:33 AM, Junfeng Guo wrote:
> > > Add support for dev_ops of stats_get and stats_reset.
> > >
> > > Queue stats update will be moved into xstat [1], but the basic stats
> > > items may still be required. So we just keep the remaining ones and
> > > will implement the queue stats via xstats in the coming release.
> > >
> > > [1]
> > > https://elixir.bootlin.com/dpdk/v22.07/ \
> > >     source/doc/guides/rel_notes/deprecation.rst#L118
> > >
> > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
> >
> > <...>
> >
> > > +static int
> > > +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> > > +{
> > > +   uint16_t i;
> > > +
> > > +   for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > > +           struct gve_tx_queue *txq = dev->data->tx_queues[i];
> > > +           if (txq == NULL)
> > > +                   continue;
> > > +
> > > +           stats->opackets += txq->packets;
> > > +           stats->obytes += txq->bytes;
> > > +           stats->oerrors += txq->errors;
> >
> > Hi Junfeng, Qi, Jingjing, Beilei,
> >
> > Above logic looks wrong to me, did you test it?
> >
> > If the 'gve_dev_stats_get()' called multiple times (without stats reset
> > in between), same values will be keep added to stats.
> > Some hw based implementations does this, because reading from stats
> > registers automatically reset those registers but this shouldn't be case
> > for this driver.
> >
> > I expect it to be something like:
> >
> > local_stats = 0
> > foreach queue
> >       local_stats += queue->stats
> > stats = local_stats
>
> The zero of local_stats is unnecessary.
> The only caller of the PMD stats_get is rte_ethdev_stats_get
> and it zeros the stats structure before calling the PMD.
>
>
> int
> rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
> {
>         struct rte_eth_dev *dev;
>
>         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>         dev = &rte_eth_devices[port_id];
>
>         memset(stats, 0, sizeof(*stats));
> ...
>         stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
>         return eth_err(port_id, (*dev->dev_ops->stats_get)(dev, stats));
>
> If any PMD has extra memset in their stats get that could be removed.
  
Joshua Washington Dec. 19, 2022, 7:38 p.m. UTC | #11
Hello,

As it turns out, this error actually propagates to the "total" stats as
well, which I assume is just calculated by adding TX-packets and
TX-dropped. Here are the full stats from the example that Rushil mentioned:

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 2453802        RX-dropped: 0             RX-total: 2453802
  TX-packets: 34266881       TX-dropped: 447034        TX-total: 34713915
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 34713915       RX-dropped: 0             RX-total: 34713915
  TX-packets: 2453802        TX-dropped: 0             TX-total: 2453802
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
  RX-packets: 37167717       RX-dropped: 0             RX-total: 37167717
  TX-packets: 36720683       TX-dropped: 807630        TX-total: 37528313
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

It can be seen that the stats for the individual ports are consistent, but
the accumulated TX-total and TX-dropped are not consistent with the
per-port stats; I believe the accumulated TX-total and RX-total should be
equal.


On Mon, Dec 19, 2022 at 11:17 AM Rushil Gupta <rushilg@google.com> wrote:

> Hi all
> Josh just found out some inconsistencies in the Tx/Rx statistics sum
> for all ports. Not sure if we can screenshot here but it goes like
> this:
> Tx-dropped for port0: 447034
> Tx-dropped for port1: 0
> Accumulated forward statistics for all ports: 807630
>
> Please note that this issue is only with Tx-dropped (not
> Tx-packets/Tx-total).
>
>
> On Wed, Dec 7, 2022 at 8:39 AM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > On Wed, 7 Dec 2022 15:09:08 +0000
> > Ferruh Yigit <ferruh.yigit@amd.com> wrote:
> >
> > > On 11/24/2022 7:33 AM, Junfeng Guo wrote:
> > > > Add support for dev_ops of stats_get and stats_reset.
> > > >
> > > > Queue stats update will be moved into xstat [1], but the basic stats
> > > > items may still be required. So we just keep the remaining ones and
> > > > will implement the queue stats via xstats in the coming release.
> > > >
> > > > [1]
> > > > https://elixir.bootlin.com/dpdk/v22.07/ \
> > > >     source/doc/guides/rel_notes/deprecation.rst#L118
> > > >
> > > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > > Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
> > >
> > > <...>
> > >
> > > > +static int
> > > > +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
> *stats)
> > > > +{
> > > > +   uint16_t i;
> > > > +
> > > > +   for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > > > +           struct gve_tx_queue *txq = dev->data->tx_queues[i];
> > > > +           if (txq == NULL)
> > > > +                   continue;
> > > > +
> > > > +           stats->opackets += txq->packets;
> > > > +           stats->obytes += txq->bytes;
> > > > +           stats->oerrors += txq->errors;
> > >
> > > Hi Junfeng, Qi, Jingjing, Beilei,
> > >
> > > Above logic looks wrong to me, did you test it?
> > >
> > > If the 'gve_dev_stats_get()' called multiple times (without stats reset
> > > in between), same values will be keep added to stats.
> > > Some hw based implementations does this, because reading from stats
> > > registers automatically reset those registers but this shouldn't be
> case
> > > for this driver.
> > >
> > > I expect it to be something like:
> > >
> > > local_stats = 0
> > > foreach queue
> > >       local_stats += queue->stats
> > > stats = local_stats
> >
> > The zero of local_stats is unnecessary.
> > The only caller of the PMD stats_get is rte_ethdev_stats_get
> > and it zeros the stats structure before calling the PMD.
> >
> >
> > int
> > rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
> > {
> >         struct rte_eth_dev *dev;
> >
> >         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> >         dev = &rte_eth_devices[port_id];
> >
> >         memset(stats, 0, sizeof(*stats));
> > ...
> >         stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> >         return eth_err(port_id, (*dev->dev_ops->stats_get)(dev, stats));
> >
> > If any PMD has extra memset in their stats get that could be removed.
>
  
Ferruh Yigit Jan. 18, 2023, 4:22 p.m. UTC | #12
On 12/19/2022 7:38 PM, Joshua Washington wrote:
> Hello,
> 
> As it turns out, this error actually propagates to the "total" stats as
> well, which I assume is just calculated by adding TX-packets and
> TX-dropped. Here are the full stats from the example that Rushil mentioned:
> 
>   ---------------------- Forward statistics for port 0
>  ----------------------
>   RX-packets: 2453802        RX-dropped: 0             RX-total: 2453802
>   TX-packets: 34266881       TX-dropped: 447034        TX-total: 34713915
>  
> ----------------------------------------------------------------------------
> 
>   ---------------------- Forward statistics for port 1
>  ----------------------
>   RX-packets: 34713915       RX-dropped: 0             RX-total: 34713915
>   TX-packets: 2453802        TX-dropped: 0             TX-total: 2453802
>  
> ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
>   RX-packets: 37167717       RX-dropped: 0             RX-total: 37167717
>   TX-packets: 36720683       TX-dropped: 807630        TX-total: 37528313
>  
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> It can be seen that the stats for the individual ports are consistent,
> but the TX-total and TX-dropped are not consistent with the stats for
> the individual ports, as I believe that the TX-total and RX-total
> accumulated stats should be equal.
> 

Hi Joshua, Rushil,

As far as I can check, the issue may be related to the testpmd stats
display.

While displaying the per-port TX-dropped value, it only takes
'ports_stats[pt_id].tx_dropped' into account, but for the accumulated
TX-dropped result it takes both 'ports_stats[pt_id].tx_dropped' and
'stats.oerrors' into account.

If you can reproduce it easily, can you please test with the following
change:

 diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
 index 134d79a55547..49322d07d7d6 100644
 --- a/app/test-pmd/testpmd.c
 +++ b/app/test-pmd/testpmd.c
 @@ -2056,6 +2056,8 @@ fwd_stats_display(void)
                         fwd_cycles += fs->core_cycles;
         }
         for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
 +               uint64_t tx_dropped = 0;
 +
                 pt_id = fwd_ports_ids[i];
                 port = &ports[pt_id];

 @@ -2077,8 +2079,9 @@ fwd_stats_display(void)
                 total_recv += stats.ipackets;
                 total_xmit += stats.opackets;
                 total_rx_dropped += stats.imissed;
 -               total_tx_dropped += ports_stats[pt_id].tx_dropped;
 -               total_tx_dropped += stats.oerrors;
 +               tx_dropped += ports_stats[pt_id].tx_dropped;
 +               tx_dropped += stats.oerrors;
 +               total_tx_dropped += tx_dropped;
                 total_rx_nombuf  += stats.rx_nombuf;

                 printf("\n  %s Forward statistics for port %-2d %s\n",
 @@ -2105,8 +2108,8 @@ fwd_stats_display(void)

                 printf("  TX-packets: %-14"PRIu64" TX-dropped: %-14"PRIu64
                        "TX-total: %-"PRIu64"\n",
 -                      stats.opackets, ports_stats[pt_id].tx_dropped,
 -                      stats.opackets + ports_stats[pt_id].tx_dropped);
 +                      stats.opackets, tx_dropped,
 +                      stats.opackets + tx_dropped);

                 if (record_burst_stats) {
                         if (ports_stats[pt_id].rx_stream)


> 
> On Mon, Dec 19, 2022 at 11:17 AM Rushil Gupta <rushilg@google.com
> <mailto:rushilg@google.com>> wrote:
> 
>     Hi all
>     Josh just found out some inconsistencies in the Tx/Rx statistics sum
>     for all ports. Not sure if we can screenshot here but it goes like
>     this:
>     Tx-dropped for port0: 447034
>     Tx-dropped for port1: 0
>     Accumulated forward statistics for all ports: 807630
> 
>     Please note that this issue is only with Tx-dropped (not
>     Tx-packets/Tx-total).
> 
> 
>     On Wed, Dec 7, 2022 at 8:39 AM Stephen Hemminger
>     <stephen@networkplumber.org <mailto:stephen@networkplumber.org>> wrote:
>     >
>     > On Wed, 7 Dec 2022 15:09:08 +0000
>     > Ferruh Yigit <ferruh.yigit@amd.com <mailto:ferruh.yigit@amd.com>>
>     wrote:
>     >
>     > > On 11/24/2022 7:33 AM, Junfeng Guo wrote:
>     > > > Add support for dev_ops of stats_get and stats_reset.
>     > > >
>     > > > Queue stats update will be moved into xstat [1], but the basic
>     stats
>     > > > items may still be required. So we just keep the remaining
>     ones and
>     > > > will implement the queue stats via xstats in the coming release.
>     > > >
>     > > > [1]
>     > > > https://elixir.bootlin.com/dpdk/v22.07/
>     <https://elixir.bootlin.com/dpdk/v22.07/> \
>     > > >     source/doc/guides/rel_notes/deprecation.rst#L118
>     > > >
>     > > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com
>     <mailto:xiaoyun.li@intel.com>>
>     > > > Signed-off-by: Junfeng Guo <junfeng.guo@intel.com
>     <mailto:junfeng.guo@intel.com>>
>     > >
>     > > <...>
>     > >
>     > > > +static int
>     > > > +gve_dev_stats_get(struct rte_eth_dev *dev, struct
>     rte_eth_stats *stats)
>     > > > +{
>     > > > +   uint16_t i;
>     > > > +
>     > > > +   for (i = 0; i < dev->data->nb_tx_queues; i++) {
>     > > > +           struct gve_tx_queue *txq = dev->data->tx_queues[i];
>     > > > +           if (txq == NULL)
>     > > > +                   continue;
>     > > > +
>     > > > +           stats->opackets += txq->packets;
>     > > > +           stats->obytes += txq->bytes;
>     > > > +           stats->oerrors += txq->errors;
>     > >
>     > > Hi Junfeng, Qi, Jingjing, Beilei,
>     > >
>     > > Above logic looks wrong to me, did you test it?
>     > >
>     > > If the 'gve_dev_stats_get()' called multiple times (without
>     stats reset
>     > > in between), same values will be keep added to stats.
>     > > Some hw based implementations does this, because reading from stats
>     > > registers automatically reset those registers but this shouldn't
>     be case
>     > > for this driver.
>     > >
>     > > I expect it to be something like:
>     > >
>     > > local_stats = 0
>     > > foreach queue
>     > >       local_stats += queue->stats
>     > > stats = local_stats
>     >
>     > The zero of local_stats is unnecessary.
>     > The only caller of the PMD stats_get is rte_ethdev_stats_get
>     > and it zeros the stats structure before calling the PMD.
>     >
>     >
>     > int
>     > rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
>     > {
>     >         struct rte_eth_dev *dev;
>     >
>     >         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>     >         dev = &rte_eth_devices[port_id];
>     >
>     >         memset(stats, 0, sizeof(*stats));
>     > ...
>     >         stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
>     >         return eth_err(port_id, (*dev->dev_ops->stats_get)(dev,
>     stats));
>     >
>     > If any PMD has extra memset in their stats get that could be removed.
> 
> 
> 
> -- 
> 
> Joshua Washington | Software Engineer | joshwash@google.com | (414) 366-4423
>
  
Joshua Washington Jan. 31, 2023, 1:51 a.m. UTC | #13
Hello,

I tested it out, and the updates to testpmd seem to work.

Before applying the second patch:
  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 43769238       TX-dropped: 62634         TX-total: 43831872
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 43761119       TX-dropped: 70753         TX-total: 43831872
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 87530357       TX-dropped: 157302        TX-total: 87687659
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

62634 + 70753 = 133387 != 157302

After applying the second patch:
  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 12590721       TX-dropped: 36638         TX-total: 12627359
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 12596255       TX-dropped: 31746         TX-total: 12628001
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 25186976       TX-dropped: 68384         TX-total: 25255360
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Thanks,
Josh

On Wed, Jan 18, 2023 at 8:22 AM Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 12/19/2022 7:38 PM, Joshua Washington wrote:
> > Hello,
> >
> > As it turns out, this error actually propagates to the "total" stats as
> > well, which I assume is just calculated by adding TX-packets and
> > TX-dropped. Here are the full stats from the example that Rushil
> mentioned:
> >
> >   ---------------------- Forward statistics for port 0
> >  ----------------------
> >   RX-packets: 2453802        RX-dropped: 0             RX-total: 2453802
> >   TX-packets: 34266881       TX-dropped: 447034        TX-total: 34713915
> >
> >
> ----------------------------------------------------------------------------
> >
> >   ---------------------- Forward statistics for port 1
> >  ----------------------
> >   RX-packets: 34713915       RX-dropped: 0             RX-total: 34713915
> >   TX-packets: 2453802        TX-dropped: 0             TX-total: 2453802
> >
> >
> ----------------------------------------------------------------------------
> >
> >   +++++++++++++++ Accumulated forward statistics for all
> > ports+++++++++++++++
> >   RX-packets: 37167717       RX-dropped: 0             RX-total: 37167717
> >   TX-packets: 36720683       TX-dropped: 807630        TX-total: 37528313
> >
> >
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >
> > It can be seen that the stats for the individual ports are consistent,
> > but the TX-total and TX-dropped are not consistent with the stats for
> > the individual ports, as I believe that the TX-total and RX-total
> > accumulated stats should be equal.
> >
>
> Hi Joshua, Rushil,
>
> As I checked, the issue may be related to the testpmd stats display.
>
> While displaying the per-port TX-dropped value, testpmd only takes
> 'ports_stats[pt_id].tx_dropped' into account, but for the accumulated
> TX-dropped result it takes both 'ports_stats[pt_id].tx_dropped' and
> 'stats.oerrors' into account.
>
> If you can reproduce it easily, can you please test with the following change:
>
>  diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
>  index 134d79a55547..49322d07d7d6 100644
>  --- a/app/test-pmd/testpmd.c
>  +++ b/app/test-pmd/testpmd.c
>  @@ -2056,6 +2056,8 @@ fwd_stats_display(void)
>                          fwd_cycles += fs->core_cycles;
>          }
>          for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
>  +               uint64_t tx_dropped = 0;
>  +
>                  pt_id = fwd_ports_ids[i];
>                  port = &ports[pt_id];
>
>  @@ -2077,8 +2079,9 @@ fwd_stats_display(void)
>                  total_recv += stats.ipackets;
>                  total_xmit += stats.opackets;
>                  total_rx_dropped += stats.imissed;
>  -               total_tx_dropped += ports_stats[pt_id].tx_dropped;
>  -               total_tx_dropped += stats.oerrors;
>  +               tx_dropped += ports_stats[pt_id].tx_dropped;
>  +               tx_dropped += stats.oerrors;
>  +               total_tx_dropped += tx_dropped;
>                  total_rx_nombuf  += stats.rx_nombuf;
>
>                  printf("\n  %s Forward statistics for port %-2d %s\n",
>  @@ -2105,8 +2108,8 @@ fwd_stats_display(void)
>
>                  printf("  TX-packets: %-14"PRIu64" TX-dropped: %-14"PRIu64
>                         "TX-total: %-"PRIu64"\n",
>  -                      stats.opackets, ports_stats[pt_id].tx_dropped,
>  -                      stats.opackets + ports_stats[pt_id].tx_dropped);
>  +                      stats.opackets, tx_dropped,
>  +                      stats.opackets + tx_dropped);
>
>                  if (record_burst_stats) {
>                          if (ports_stats[pt_id].rx_stream)
>
>
> >
> > On Mon, Dec 19, 2022 at 11:17 AM Rushil Gupta <rushilg@google.com> wrote:
> >
> >     Hi all
> >     Josh just found some inconsistencies in the Tx/Rx statistics sum
> >     for all ports. Not sure if we can screenshot here, but it goes like
> >     this:
> >     Tx-dropped for port0: 447034
> >     Tx-dropped for port1: 0
> >     Accumulated forward statistics for all ports: 807630
> >
> >     Please note that this issue is only with Tx-dropped (not
> >     Tx-packets/Tx-total).
> >
> >
> >     On Wed, Dec 7, 2022 at 8:39 AM Stephen Hemminger
> >     <stephen@networkplumber.org> wrote:
> >     >
> >     > On Wed, 7 Dec 2022 15:09:08 +0000
> >     > Ferruh Yigit <ferruh.yigit@amd.com> wrote:
> >     >
> >     > > On 11/24/2022 7:33 AM, Junfeng Guo wrote:
> >     > > > Add support for dev_ops of stats_get and stats_reset.
> >     > > >
> >     > > > Queue stats update will be moved into xstat [1], but the basic
> >     stats
> >     > > > items may still be required. So we just keep the remaining
> >     ones and
> >     > > > will implement the queue stats via xstats in the coming
> release.
> >     > > >
> >     > > > [1]
> >     > > > https://elixir.bootlin.com/dpdk/v22.07/ \
> >     > > >     source/doc/guides/rel_notes/deprecation.rst#L118
> >     > > >
> >     > > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> >     > > > Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
> >     > >
> >     > > <...>
> >     > >
> >     > > > +static int
> >     > > > +gve_dev_stats_get(struct rte_eth_dev *dev, struct
> >     rte_eth_stats *stats)
> >     > > > +{
> >     > > > +   uint16_t i;
> >     > > > +
> >     > > > +   for (i = 0; i < dev->data->nb_tx_queues; i++) {
> >     > > > +           struct gve_tx_queue *txq = dev->data->tx_queues[i];
> >     > > > +           if (txq == NULL)
> >     > > > +                   continue;
> >     > > > +
> >     > > > +           stats->opackets += txq->packets;
> >     > > > +           stats->obytes += txq->bytes;
> >     > > > +           stats->oerrors += txq->errors;
> >     > >
> >     > > Hi Junfeng, Qi, Jingjing, Beilei,
> >     > >
> >     > > Above logic looks wrong to me, did you test it?
> >     > >
> >     > > If 'gve_dev_stats_get()' is called multiple times (without a
> >     > > stats reset in between), the same values will keep being added
> >     > > to the stats. Some hw-based implementations do this, because
> >     > > reading from the stats registers automatically resets those
> >     > > registers, but this shouldn't be the case for this driver.
> >     > >
> >     > > I expect it to be something like:
> >     > >
> >     > > local_stats = 0
> >     > > foreach queue
> >     > >       local_stats += queue->stats
> >     > > stats = local_stats
> >     >
> >     > The zero of local_stats is unnecessary.
> >     > The only caller of the PMD stats_get is rte_ethdev_stats_get
> >     > and it zeros the stats structure before calling the PMD.
> >     >
> >     >
> >     > int
> >     > rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
> >     > {
> >     >         struct rte_eth_dev *dev;
> >     >
> >     >         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> >     >         dev = &rte_eth_devices[port_id];
> >     >
> >     >         memset(stats, 0, sizeof(*stats));
> >     > ...
> >     >         stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> >     >         return eth_err(port_id, (*dev->dev_ops->stats_get)(dev,
> >     stats));
> >     >
> >     > If any PMD has extra memset in their stats get that could be
> removed.
> >
> >
> >
> > --
> >
> > Joshua Washington | Software Engineer | joshwash@google.com | (414) 366-4423
> >
>
>
  
Ferruh Yigit Jan. 31, 2023, 10:05 a.m. UTC | #14
On 1/31/2023 1:51 AM, Joshua Washington wrote:
> Hello,
> 
> I tested it out, and the updates to testpmd seem to work.
> 

Hi Joshua,

Thanks for testing, I will send a patch soon.

But this was a testpmd issue; do you have any objection to the net/gve
patch? If not, can you please record this with ack/review/tested-by tags
so I can proceed with it.

> Before applying the second patch:
>   ---------------------- Forward statistics for port 0
>  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 43769238       TX-dropped: 62634         TX-total: 43831872
>  
> ----------------------------------------------------------------------------
> 
>   ---------------------- Forward statistics for port 1
>  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 43761119       TX-dropped: 70753         TX-total: 43831872
>  
> ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 87530357       TX-dropped: 157302        TX-total: 87687659
>  
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> 62634 + 70753 = 133387 != 157302
> 
> After applying the second patch:
>   ---------------------- Forward statistics for port 0
>  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 12590721       TX-dropped: 36638         TX-total: 12627359
>  
> ----------------------------------------------------------------------------
> 
>   ---------------------- Forward statistics for port 1
>  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 12596255       TX-dropped: 31746         TX-total: 12628001
>  
> ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 25186976       TX-dropped: 68384         TX-total: 25255360
>  
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> Thanks,
> Josh
  
Joshua Washington Feb. 2, 2023, 11 p.m. UTC | #15
Tested-by: Joshua Washington <joshwash@google.com>

On Tue, Jan 31, 2023 at 2:05 AM Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 1/31/2023 1:51 AM, Joshua Washington wrote:
> > Hello,
> >
> > I tested it out, and the updates to testpmd seem to work.
> >
>
> Hi Joshua,
>
> Thanks for testing, I will send a patch soon.
>
> But this was testpmd issue, do you have any objection with the net/gve
> patch, if not can you please record this with ack/review/tested-by tags,
> so I can proceed with it.
  
Ferruh Yigit Feb. 3, 2023, 5:39 p.m. UTC | #16
On 11/24/2022 7:33 AM, Junfeng Guo wrote:
> Add support for dev_ops of stats_get and stats_reset.
> 
> Queue stats update will be moved into xstat [1], but the basic stats
> items may still be required. So we just keep the remaining ones and
> will implement the queue stats via xstats in the coming release.
> 
> [1]
> https://elixir.bootlin.com/dpdk/v22.07/ \
> 	source/doc/guides/rel_notes/deprecation.rst#L118
> 
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
>
> Tested-by: Joshua Washington <joshwash@google.com>
>

Updated commit to add Joshua to .mailmap.

Applied to dpdk-next-net/main, thanks.
  

Patch

diff --git a/doc/guides/nics/features/gve.ini b/doc/guides/nics/features/gve.ini
index cdc46b08a3..838edd456a 100644
--- a/doc/guides/nics/features/gve.ini
+++ b/doc/guides/nics/features/gve.ini
@@ -10,6 +10,7 @@  MTU update           = Y
 TSO                  = Y
 RSS hash             = Y
 L4 checksum offload  = Y
+Basic stats          = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 97781f0ed3..06d1b796c8 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -319,6 +319,64 @@  gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct gve_tx_queue *txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		stats->opackets += txq->packets;
+		stats->obytes += txq->bytes;
+		stats->oerrors += txq->errors;
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct gve_rx_queue *rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		stats->ipackets += rxq->packets;
+		stats->ibytes += rxq->bytes;
+		stats->ierrors += rxq->errors;
+		stats->rx_nombuf += rxq->no_mbufs;
+	}
+
+	return 0;
+}
+
+static int
+gve_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct gve_tx_queue *txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		txq->packets  = 0;
+		txq->bytes = 0;
+		txq->errors = 0;
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct gve_rx_queue *rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		rxq->packets  = 0;
+		rxq->bytes = 0;
+		rxq->errors = 0;
+		rxq->no_mbufs = 0;
+	}
+
+	return 0;
+}
+
 static int
 gve_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
@@ -357,6 +415,8 @@  static const struct eth_dev_ops gve_eth_dev_ops = {
 	.rx_queue_release     = gve_rx_queue_release,
 	.tx_queue_release     = gve_tx_queue_release,
 	.link_update          = gve_link_update,
+	.stats_get            = gve_dev_stats_get,
+	.stats_reset          = gve_dev_stats_reset,
 	.mtu_set              = gve_dev_mtu_set,
 };
 
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 235e55899e..64e571bcae 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -92,6 +92,11 @@  struct gve_tx_queue {
 	struct gve_queue_page_list *qpl;
 	struct gve_tx_iovec *iov_ring;
 
+	/* stats items */
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+
 	uint16_t port_id;
 	uint16_t queue_id;
 
@@ -130,6 +135,12 @@  struct gve_rx_queue {
 	/* only valid for GQI_QPL queue format */
 	struct gve_queue_page_list *qpl;
 
+	/* stats items */
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+	uint64_t no_mbufs;
+
 	struct gve_priv *hw;
 	const struct rte_memzone *qres_mz;
 	struct gve_queue_resources *qres;
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 518c9d109c..66fbcf3930 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -26,8 +26,10 @@  gve_rx_refill(struct gve_rx_queue *rxq)
 					break;
 				rxq->sw_ring[idx + i] = nmb;
 			}
-			if (i != nb_alloc)
+			if (i != nb_alloc) {
+				rxq->no_mbufs += nb_alloc - i;
 				nb_alloc = i;
+			}
 		}
 		rxq->nb_avail -= nb_alloc;
 		next_avail += nb_alloc;
@@ -88,6 +90,7 @@  gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t rx_id = rxq->rx_tail;
 	struct rte_mbuf *rxe;
 	uint16_t nb_rx, len;
+	uint64_t bytes = 0;
 	uint64_t addr;
 	uint16_t i;
 
@@ -99,8 +102,10 @@  gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno)
 			break;
 
-		if (rxd->flags_seq & GVE_RXF_ERR)
+		if (rxd->flags_seq & GVE_RXF_ERR) {
+			rxq->errors++;
 			continue;
+		}
 
 		len = rte_be_to_cpu_16(rxd->len) - GVE_RX_PAD;
 		rxe = rxq->sw_ring[rx_id];
@@ -135,6 +140,7 @@  gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rx_id = 0;
 
 		rx_pkts[nb_rx] = rxe;
+		bytes += len;
 		nb_rx++;
 	}
 
@@ -144,6 +150,11 @@  gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (rxq->nb_avail > rxq->free_thresh)
 		gve_rx_refill(rxq);
 
+	if (nb_rx) {
+		rxq->packets += nb_rx;
+		rxq->bytes += bytes;
+	}
+
 	return nb_rx;
 }
 
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index bf4e8fea2c..9b41c59358 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -260,6 +260,7 @@  gve_tx_burst_qpl(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct rte_mbuf *tx_pkt, *first;
 	uint16_t sw_id = txq->sw_tail;
 	uint16_t nb_used, i;
+	uint64_t bytes = 0;
 	uint16_t nb_tx = 0;
 	uint32_t hlen;
 
@@ -355,6 +356,8 @@  gve_tx_burst_qpl(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		txq->nb_free -= nb_used;
 		txq->sw_nb_free -= first->nb_segs;
 		tx_tail += nb_used;
+
+		bytes += first->pkt_len;
 	}
 
 end_of_tx:
@@ -362,6 +365,10 @@  gve_tx_burst_qpl(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		rte_write32(rte_cpu_to_be_32(tx_tail), txq->qtx_tail);
 		txq->tx_tail = tx_tail;
 		txq->sw_tail = sw_id;
+
+		txq->packets += nb_tx;
+		txq->bytes += bytes;
+		txq->errors += nb_pkts - nb_tx;
 	}
 
 	return nb_tx;
@@ -380,6 +387,7 @@  gve_tx_burst_ra(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct rte_mbuf *tx_pkt, *first;
 	uint16_t nb_used, hlen, i;
 	uint64_t ol_flags, addr;
+	uint64_t bytes = 0;
 	uint16_t nb_tx = 0;
 
 	txr = txq->tx_desc_ring;
@@ -438,12 +446,17 @@  gve_tx_burst_ra(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		txq->nb_free -= nb_used;
 		tx_tail += nb_used;
+
+		bytes += first->pkt_len;
 	}
 
 end_of_tx:
 	if (nb_tx) {
 		rte_write32(rte_cpu_to_be_32(tx_tail), txq->qtx_tail);
 		txq->tx_tail = tx_tail;
+
+		txq->packets += nb_tx;
+		txq->bytes += bytes;
 	}
 
 	return nb_tx;
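Once the patch is applied, the new counters can be exercised interactively from testpmd using its standard stats commands (session transcript abbreviated; port 0 assumed):

```
testpmd> show port stats 0     # exercises .stats_get  -> gve_dev_stats_get
testpmd> clear port stats 0    # exercises .stats_reset -> gve_dev_stats_reset
```

After traffic is forwarded, RX-packets/TX-packets and the byte counters reflect the per-queue packets/bytes fields added in gve_rx.c and gve_tx.c, and rx_nombuf reflects the no_mbufs counter incremented in gve_rx_refill().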