[v3] net/pcap: improve rx statistics

Message ID 20210909024531.10009-1-chenqiming_huawei@163.com (mailing list archive)
State Superseded, archived
Headers
Series [v3] net/pcap: improve rx statistics

Checks

Context Check Description
ci/intel-Testing fail Testing issues
ci/checkpatch success coding style OK
ci/github-robot: build success github build: passed
ci/iol-broadcom-Performance success Performance Testing PASS
ci/Intel-compilation success Compilation OK
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS

Commit Message

Qiming Chen Sept. 9, 2021, 2:45 a.m. UTC
  In the receive direction, if mbuf allocation or the jumbo process fails,
there is no err_pkts count, which makes it difficult to locate the problem.
When mbuf allocation fails, the rx_nombuf field is counted.

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
v2:
  Fixed coding style issues.
v3:
  1) The Tx direction does not release the mbuf.
  2) Failed mbuf allocation is counted in the rx_nombuf field.
---
 drivers/net/pcap/pcap_ethdev.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
  

Comments

Stephen Hemminger Sept. 9, 2021, 3:29 a.m. UTC | #1
On Thu,  9 Sep 2021 10:45:31 +0800
Qiming Chen <chenqiming_huawei@163.com> wrote:

> In the receive direction, if mbuf allocation or the jumbo process fails,
> there is no err_pkts count, which makes it difficult to locate the problem.
> When mbuf allocation fails, the rx_nombuf field is counted.
> 
> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
> ---
> v2:
>   Fixed coding style issues.
> v3:
>   1) The Tx direction does not release the mbuf.
>   2) Failed mbuf allocation is counted in the rx_nombuf field.

Looks good, though the field "err_pkts" is a confusing name to me.

On Tx it means packets dropped because pcap_sendpacket() returned an error.
Looking inside libpcap, that means send() failed. On Linux this is
a send() on a PF_PACKET socket, and it appears to be a blocking socket.
So these errors are not transient conditions.

On Rx it means packets dropped because the driver ran out of mbufs.

Perhaps a comment or renaming the field would have helped.
  
Ferruh Yigit Sept. 9, 2021, 10:20 a.m. UTC | #2
On 9/9/2021 4:29 AM, Stephen Hemminger wrote:
> On Thu,  9 Sep 2021 10:45:31 +0800
> Qiming Chen <chenqiming_huawei@163.com> wrote:
> 
>> In the receive direction, if mbuf allocation or the jumbo process fails,
>> there is no err_pkts count, which makes it difficult to locate the problem.
>> When mbuf allocation fails, the rx_nombuf field is counted.
>>
>> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
>> ---
>> v2:
>>   Fixed coding style issues.
>> v3:
>>   1) The Tx direction does not release the mbuf.
>>   2) Failed mbuf allocation is counted in the rx_nombuf field.
> 
> Looks good, though the field "err_pkts" is a confusing name to me.
> 
> On Tx it means packets dropped because pcap_sendpacket() returned an error.
> Looking inside libpcap, that means send() failed. On Linux this is
> a send() on a PF_PACKET socket, and it appears to be a blocking socket.
> So these errors are not transient conditions.
> 
> On Rx it means packets dropped because the driver ran out of mbufs.
> 

In later versions of the patch, running out of mbufs updates the 'rx_nombuf'
value, and a pcap Rx API error updates 'err_pkts'.

> Perhaps a comment or renaming the field would have helped.
>
  

Patch

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a43..64b0dbf0e4 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -297,8 +297,10 @@  eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			break;
 
 		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
-		if (unlikely(mbuf == NULL))
-			break;
+		if (unlikely(mbuf == NULL)) {
+			pcap_q->rx_stat.err_pkts++;
+			continue;
+		}
 
 		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
 			/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +313,7 @@  eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 						       mbuf,
 						       packet,
 						       header.caplen) == -1)) {
+				pcap_q->rx_stat.err_pkts++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
@@ -742,7 +745,7 @@  eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	unsigned int i;
 	unsigned long rx_packets_total = 0, rx_bytes_total = 0;
-	unsigned long rx_missed_total = 0;
+	unsigned long rx_missed_total = 0, rx_nombuf = 0;
 	unsigned long tx_packets_total = 0, tx_bytes_total = 0;
 	unsigned long tx_packets_err_total = 0;
 	const struct pmd_internals *internal = dev->data->dev_private;
@@ -751,6 +754,7 @@  eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			i < dev->data->nb_rx_queues; i++) {
 		stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts;
 		stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes;
+		rx_nombuf += internal->rx_queue[i].rx_stat.err_pkts;
 		rx_packets_total += stats->q_ipackets[i];
 		rx_bytes_total += stats->q_ibytes[i];
 		rx_missed_total += queue_missed_stat_get(dev, i);
@@ -771,6 +775,7 @@  eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->opackets = tx_packets_total;
 	stats->obytes = tx_bytes_total;
 	stats->oerrors = tx_packets_err_total;
+	stats->rx_nombuf = rx_nombuf;
 
 	return 0;
 }
@@ -784,6 +789,7 @@  eth_stats_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		internal->rx_queue[i].rx_stat.pkts = 0;
 		internal->rx_queue[i].rx_stat.bytes = 0;
+		internal->rx_queue[i].rx_stat.err_pkts = 0;
 		queue_missed_stat_reset(dev, i);
 	}