From patchwork Fri May 10 05:01:21 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140013
X-Patchwork-Delegate: ferruh.yigit@amd.com
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
 by inbox.dpdk.org (Postfix) with ESMTP id E26C243FEC;
 Fri, 10 May 2024 07:05:18 +0200 (CEST)
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [RFC 1/3] ethdev: add internal helper for SW driver statistics
Date: Thu, 9 May 2024 22:01:21 -0700
Message-ID: <20240510050507.14381-2-stephen@networkplumber.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240510050507.14381-1-stephen@networkplumber.org>
References: <20240425174617.2126159-1-ferruh.yigit@amd.com>
 <20240510050507.14381-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

This clones the statistics update code from virtio for use by other
drivers. It uses plain uint64_t counters on 64-bit platforms, and
atomic operations on 32-bit platforms where a 64-bit access can tear.
Signed-off-by: Stephen Hemminger
---
 lib/ethdev/ethdev_swstats.c | 295 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_swstats.h |  60 ++++++++
 lib/ethdev/meson.build      |   2 +
 lib/ethdev/version.map      |   7 +
 4 files changed, 364 insertions(+)
 create mode 100644 lib/ethdev/ethdev_swstats.c
 create mode 100644 lib/ethdev/ethdev_swstats.h

diff --git a/lib/ethdev/ethdev_swstats.c b/lib/ethdev/ethdev_swstats.c
new file mode 100644
index 0000000000..81b9ac13b5
--- /dev/null
+++ b/lib/ethdev/ethdev_swstats.c
@@ -0,0 +1,295 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Stephen Hemminger
+ */
+
+#include
+
+#include
+#include
+
+#include "rte_ethdev.h"
+#include "ethdev_swstats.h"
+
+/*
+ * Handling of 64 bit counters, due to problems with load/store tearing
+ * on 32 bit.  A store of an aligned 64 bit value never gets separated
+ * on a 64 bit platform, but on 32 bit atomics are needed.
+ */
+#ifdef RTE_ARCH_64
+typedef uint64_t eth_counter_t;
+
+static inline void
+eth_counter_add(eth_counter_t *counter, uint32_t val)
+{
+	*counter += val;
+}
+
+static inline uint64_t
+eth_counter_read(const eth_counter_t *counter)
+{
+	return *counter;
+}
+
+static inline void
+eth_counter_reset(eth_counter_t *counter)
+{
+	*counter = 0;
+}
+#else
+static inline void
+eth_counter_add(eth_counter_t *counter, uint32_t val)
+{
+	rte_atomic_fetch_add_explicit(counter, val, rte_memory_order_relaxed);
+}
+
+static inline uint64_t
+eth_counter_read(const eth_counter_t *counter)
+{
+	return rte_atomic_load_explicit(counter, rte_memory_order_relaxed);
+}
+
+static inline void
+eth_counter_reset(eth_counter_t *counter)
+{
+	rte_atomic_store_explicit(counter, 0, rte_memory_order_relaxed);
+}
+
+#endif
+
+static void
+eth_qsw_reset(struct rte_eth_qsw_stats *qstats)
+{
+	unsigned int i;
+
+	eth_counter_reset(&qstats->packets);
+	eth_counter_reset(&qstats->bytes);
+	eth_counter_reset(&qstats->errors);
+	eth_counter_reset(&qstats->multicast);
+	eth_counter_reset(&qstats->broadcast);
+
+	for (i = 0; i < RTE_DIM(qstats->size_bins); i++)
+		eth_counter_reset(&qstats->size_bins[i]);
+}
+
+void
+rte_eth_qsw_update(struct rte_eth_qsw_stats *qstats, const struct rte_mbuf *mbuf)
+{
+	uint32_t s = mbuf->pkt_len;
+	uint32_t bin;
+	const struct rte_ether_addr *ea;
+
+	if (s == 64) {
+		bin = 1;
+	} else if (s > 64 && s < 1024) {
+		/* count zeros, and offset into correct bin */
+		bin = (sizeof(s) * 8) - rte_clz32(s) - 5;
+	} else if (s < 64) {
+		bin = 0;
+	} else if (s < 1519) {
+		bin = 6;
+	} else {
+		bin = 7;
+	}
+
+	eth_counter_add(&qstats->packets, 1);
+	eth_counter_add(&qstats->bytes, s);
+	eth_counter_add(&qstats->size_bins[bin], 1);
+
+	ea = rte_pktmbuf_mtod(mbuf, const struct rte_ether_addr *);
+	if (rte_is_multicast_ether_addr(ea)) {
+		if (rte_is_broadcast_ether_addr(ea))
+			eth_counter_add(&qstats->broadcast, 1);
+		else
+			eth_counter_add(&qstats->multicast, 1);
+	}
+}
+
+void
+rte_eth_qsw_error_inc(struct rte_eth_qsw_stats *qstats)
+{
+	eth_counter_add(&qstats->errors, 1);
+}
+
+int
+rte_eth_qsw_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	unsigned int i;
+	uint64_t packets, bytes, errors;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		/* assumes that rte_eth_qsw_stats is at start of the queue structure */
+		const struct rte_eth_qsw_stats *qstats = dev->data->tx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		packets = eth_counter_read(&qstats->packets);
+		bytes = eth_counter_read(&qstats->bytes);
+		errors = eth_counter_read(&qstats->errors);
+
+		stats->opackets += packets;
+		stats->obytes += bytes;
+		stats->oerrors += errors;
+
+		if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+			stats->q_opackets[i] = packets;
+			stats->q_obytes[i] = bytes;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		/* assumes that rte_eth_qsw_stats is at start of the queue structure */
+		const struct rte_eth_qsw_stats *qstats = dev->data->rx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		packets = eth_counter_read(&qstats->packets);
+		bytes = eth_counter_read(&qstats->bytes);
+		errors = eth_counter_read(&qstats->errors);
+
+		stats->ipackets += packets;
+		stats->ibytes += bytes;
+		stats->ierrors += errors;
+
+		if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+			stats->q_ipackets[i] = packets;
+			stats->q_ibytes[i] = bytes;
+		}
+	}
+
+	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	return 0;
+}
+
+int
+rte_eth_qsw_stats_reset(struct rte_eth_dev *dev)
+{
+	unsigned int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct rte_eth_qsw_stats *qstats = dev->data->tx_queues[i];
+
+		if (qstats != NULL)
+			eth_qsw_reset(qstats);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct rte_eth_qsw_stats *qstats = dev->data->rx_queues[i];
+
+		if (qstats != NULL)
+			eth_qsw_reset(qstats);
+	}
+
+	return 0;
+}
+
+struct xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	size_t offset;
+};
+
+/* [rt]x_qX_ is prepended to the name string here */
+static const struct xstats_name_off eth_swstats_strings[] = {
+	{"good_packets", offsetof(struct rte_eth_qsw_stats, packets)},
+	{"good_bytes", offsetof(struct rte_eth_qsw_stats, bytes)},
+	{"errors", offsetof(struct rte_eth_qsw_stats, errors)},
+	{"multicast_packets", offsetof(struct rte_eth_qsw_stats, multicast)},
+	{"broadcast_packets", offsetof(struct rte_eth_qsw_stats, broadcast)},
+	{"undersize_packets", offsetof(struct rte_eth_qsw_stats, size_bins[0])},
+	{"size_64_packets", offsetof(struct rte_eth_qsw_stats, size_bins[1])},
+	{"size_65_127_packets", offsetof(struct rte_eth_qsw_stats, size_bins[2])},
+	{"size_128_255_packets", offsetof(struct rte_eth_qsw_stats, size_bins[3])},
+	{"size_256_511_packets", offsetof(struct rte_eth_qsw_stats, size_bins[4])},
+	{"size_512_1023_packets", offsetof(struct rte_eth_qsw_stats, size_bins[5])},
+	{"size_1024_1518_packets", offsetof(struct rte_eth_qsw_stats, size_bins[6])},
+	{"size_1519_max_packets", offsetof(struct rte_eth_qsw_stats, size_bins[7])},
+};
+#define NUM_SWSTATS_XSTATS RTE_DIM(eth_swstats_strings)
+
+int
+rte_eth_qsw_xstats_get_names(struct rte_eth_dev *dev,
+			     struct rte_eth_xstat_name *xstats_names,
+			     __rte_unused unsigned limit)
+{
+	unsigned int i, t, count = 0;
+
+	if (xstats_names == NULL)
+		return (dev->data->nb_tx_queues + dev->data->nb_rx_queues) * NUM_SWSTATS_XSTATS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		const void *rxq = dev->data->rx_queues[i];
+
+		if (rxq == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
+				 "rx_q%u_%s", i, eth_swstats_strings[t].name);
+			count++;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		const void *txq = dev->data->tx_queues[i];
+
+		if (txq == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
+				 "tx_q%u_%s", i, eth_swstats_strings[t].name);
+			count++;
+		}
+	}
+	return count;
+}
+
+int
+rte_eth_qsw_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n)
+{
+	unsigned int i, t, count = 0;
+	const unsigned int nstats
+		= (dev->data->nb_tx_queues + dev->data->nb_rx_queues) * NUM_SWSTATS_XSTATS;
+
+	if (n < nstats)
+		return nstats;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		/* assumes that rte_eth_qsw_stats is at start of the queue structure */
+		const struct rte_eth_qsw_stats *qstats = dev->data->rx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			const uint64_t *valuep
+				= (const uint64_t *)((const char *)qstats +
+						     eth_swstats_strings[t].offset);
+
+			xstats[count].value = *valuep;
+			xstats[count].id = count;
+			++count;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		const struct rte_eth_qsw_stats *qstats = dev->data->tx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			const uint64_t *valuep
+				= (const uint64_t *)((const char *)qstats +
+						     eth_swstats_strings[t].offset);
+
+			xstats[count].value = *valuep;
+			xstats[count].id = count;
+			++count;
+		}
+	}
+
+	return count;
+}
diff --git a/lib/ethdev/ethdev_swstats.h b/lib/ethdev/ethdev_swstats.h
new file mode 100644
index 0000000000..6309107128
--- /dev/null
+++ b/lib/ethdev/ethdev_swstats.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Stephen Hemminger
+ */
+
+#ifndef _RTE_ETHDEV_SWSTATS_H_
+#define _RTE_ETHDEV_SWSTATS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include
+
+#ifdef RTE_ARCH_64
+typedef uint64_t eth_counter_t;
+#else
+typedef RTE_ATOMIC(uint64_t) eth_counter_t;
+#endif
+
+struct rte_eth_qsw_stats {
+	eth_counter_t packets;
+	eth_counter_t bytes;
+	eth_counter_t errors;
+	eth_counter_t multicast;
+	eth_counter_t broadcast;
+	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+	eth_counter_t size_bins[8];
+};
+
+__rte_internal
+void
+rte_eth_qsw_update(struct rte_eth_qsw_stats *stats, const struct rte_mbuf *mbuf);
+
+__rte_internal
+void
+rte_eth_qsw_error_inc(struct rte_eth_qsw_stats *stats);
+
+__rte_internal
+int
+rte_eth_qsw_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
+
+__rte_internal
+int
+rte_eth_qsw_stats_reset(struct rte_eth_dev *dev);
+
+__rte_internal
+int
+rte_eth_qsw_xstats_get_names(struct rte_eth_dev *dev,
+			     struct rte_eth_xstat_name *xstats_names,
+			     unsigned int limit);
+__rte_internal
+int
+rte_eth_qsw_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		       unsigned int n);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_ETHDEV_SWSTATS_H_ */
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index f1d2586591..7ce29a46d4 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -3,6 +3,7 @@
 sources = files(
         'ethdev_driver.c',
+        'ethdev_swstats.c',
         'ethdev_private.c',
         'ethdev_profile.c',
         'ethdev_trace_points.c',
@@ -42,6 +43,7 @@ driver_sdk_headers += files(
         'ethdev_driver.h',
         'ethdev_pci.h',
         'ethdev_vdev.h',
+        'ethdev_swstats.h',
 )
 
 if is_linux
diff
--git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 79f6f5293b..32ebe5ea09 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -358,4 +358,11 @@ INTERNAL {
 	rte_eth_switch_domain_alloc;
 	rte_eth_switch_domain_free;
 	rte_flow_fp_default_ops;
+
+	rte_eth_qsw_error_inc;
+	rte_eth_qsw_stats_get;
+	rte_eth_qsw_stats_reset;
+	rte_eth_qsw_update;
+	rte_eth_qsw_xstats_get;
+	rte_eth_qsw_xstats_get_names;
 };

From patchwork Fri May 10 05:01:22 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140014
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [RFC 2/3] net/af_packet: use SW stats helper
Date: Thu, 9 May 2024 22:01:22 -0700
Message-ID: <20240510050507.14381-3-stephen@networkplumber.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240510050507.14381-1-stephen@networkplumber.org>
References: <20240425174617.2126159-1-ferruh.yigit@amd.com>
 <20240510050507.14381-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

Use the new generic SW stats helper.

Signed-off-by: Stephen Hemminger
---
 drivers/net/af_packet/rte_eth_af_packet.c | 97 ++++------------------
 1 file changed, 16 insertions(+), 81 deletions(-)

diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index 397a32db58..8fac37a1b1 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -29,6 +30,7 @@
 #include
 #include
+
 #define ETH_AF_PACKET_IFACE_ARG		"iface"
 #define ETH_AF_PACKET_NUM_Q_ARG		"qpairs"
 #define ETH_AF_PACKET_BLOCKSIZE_ARG	"blocksz"
@@ -40,6 +42,8 @@
 #define DFLT_FRAME_COUNT	(1 << 9)
 
 struct pkt_rx_queue {
+	struct rte_eth_qsw_stats stats; /* must be first */
+
 	int sockfd;
 
 	struct iovec *rd;
@@ -50,12 +54,11 @@ struct pkt_rx_queue {
 	struct rte_mempool *mb_pool;
 	uint16_t in_port;
 	uint8_t vlan_strip;
-
-	volatile unsigned long rx_pkts;
-	volatile unsigned long rx_bytes;
 };
 
 struct pkt_tx_queue {
+	struct rte_eth_qsw_stats stats; /* must be first */
+
 	int sockfd;
 
 	unsigned int frame_data_size;
@@ -63,10 +66,6 @@ struct pkt_tx_queue {
 	uint8_t *map;
 	unsigned int framecount;
 	unsigned int framenum;
-
-	volatile unsigned long tx_pkts;
-	volatile unsigned long err_pkts;
-	volatile unsigned long tx_bytes;
 };
 
 struct pmd_internals {
@@ -118,8 +117,6 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct rte_mbuf *mbuf;
 	uint8_t *pbuf;
 	struct pkt_rx_queue *pkt_q = queue;
-	uint16_t num_rx = 0;
-	unsigned long num_rx_bytes = 0;
 	unsigned int framecount, framenum;
 
 	if (unlikely(nb_pkts == 0))
@@ -164,13 +161,11 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		/* account for the receive frame */
 		bufs[i] = mbuf;
-		num_rx++;
-		num_rx_bytes += mbuf->pkt_len;
+		rte_eth_qsw_update(&pkt_q->stats, mbuf);
 	}
 	pkt_q->framenum = framenum;
-	pkt_q->rx_pkts += num_rx;
-	pkt_q->rx_bytes += num_rx_bytes;
-	return num_rx;
+
+	return i;
 }
 
 /*
@@ -205,8 +200,6 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	unsigned int framecount, framenum;
 	struct pollfd pfd;
 	struct pkt_tx_queue *pkt_q = queue;
-	uint16_t num_tx = 0;
-	unsigned long num_tx_bytes = 0;
 	int i;
 
 	if (unlikely(nb_pkts == 0))
@@ -285,8 +278,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			framenum = 0;
 		ppd = (struct tpacket2_hdr *) pkt_q->rd[framenum].iov_base;
 
-		num_tx++;
-		num_tx_bytes += mbuf->pkt_len;
+		rte_eth_qsw_update(&pkt_q->stats, mbuf);
 		rte_pktmbuf_free(mbuf);
 	}
 
@@ -298,15 +290,9 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		 * packets will be considered successful even though only some
 		 * are sent.
 		 */
-
-		num_tx = 0;
-		num_tx_bytes = 0;
 	}
 
 	pkt_q->framenum = framenum;
-	pkt_q->tx_pkts += num_tx;
-	pkt_q->err_pkts += i - num_tx;
-	pkt_q->tx_bytes += num_tx_bytes;
 	return i;
 }
 
@@ -385,61 +371,6 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
-static int
-eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats)
-{
-	unsigned i, imax;
-	unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0;
-	unsigned long rx_bytes_total = 0, tx_bytes_total = 0;
-	const struct pmd_internals *internal = dev->data->dev_private;
-
-	imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ?
-	        internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS);
-	for (i = 0; i < imax; i++) {
-		igb_stats->q_ipackets[i] = internal->rx_queue[i].rx_pkts;
-		igb_stats->q_ibytes[i] = internal->rx_queue[i].rx_bytes;
-		rx_total += igb_stats->q_ipackets[i];
-		rx_bytes_total += igb_stats->q_ibytes[i];
-	}
-
-	imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ?
-	        internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS);
-	for (i = 0; i < imax; i++) {
-		igb_stats->q_opackets[i] = internal->tx_queue[i].tx_pkts;
-		igb_stats->q_obytes[i] = internal->tx_queue[i].tx_bytes;
-		tx_total += igb_stats->q_opackets[i];
-		tx_err_total += internal->tx_queue[i].err_pkts;
-		tx_bytes_total += igb_stats->q_obytes[i];
-	}
-
-	igb_stats->ipackets = rx_total;
-	igb_stats->ibytes = rx_bytes_total;
-	igb_stats->opackets = tx_total;
-	igb_stats->oerrors = tx_err_total;
-	igb_stats->obytes = tx_bytes_total;
-	return 0;
-}
-
-static int
-eth_stats_reset(struct rte_eth_dev *dev)
-{
-	unsigned i;
-	struct pmd_internals *internal = dev->data->dev_private;
-
-	for (i = 0; i < internal->nb_queues; i++) {
-		internal->rx_queue[i].rx_pkts = 0;
-		internal->rx_queue[i].rx_bytes = 0;
-	}
-
-	for (i = 0; i < internal->nb_queues; i++) {
-		internal->tx_queue[i].tx_pkts = 0;
-		internal->tx_queue[i].err_pkts = 0;
-		internal->tx_queue[i].tx_bytes = 0;
-	}
-
-	return 0;
-}
-
 static int
 eth_dev_close(struct rte_eth_dev *dev)
 {
@@ -634,8 +565,12 @@ static const struct eth_dev_ops ops = {
 	.rx_queue_setup = eth_rx_queue_setup,
 	.tx_queue_setup = eth_tx_queue_setup,
 	.link_update = eth_link_update,
-	.stats_get = eth_stats_get,
-	.stats_reset = eth_stats_reset,
+
+	.stats_get = rte_eth_qsw_stats_get,
+	.stats_reset = rte_eth_qsw_stats_reset,
+	.xstats_get = rte_eth_qsw_xstats_get,
+	.xstats_get_names = rte_eth_qsw_xstats_get_names,
+	.xstats_reset = rte_eth_qsw_stats_reset,
 };
 
 /*

From patchwork Fri May 10 05:01:23 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140015
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [RFC 3/3] net/tap: use generic SW stats
Date: Thu, 9 May 2024 22:01:23 -0700
Message-ID: <20240510050507.14381-4-stephen@networkplumber.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240510050507.14381-1-stephen@networkplumber.org>
References: <20240425174617.2126159-1-ferruh.yigit@amd.com>
 <20240510050507.14381-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

Use the new common SW statistics helpers.

Signed-off-by: Stephen Hemminger
---
 drivers/net/tap/rte_eth_tap.c | 100 ++++++----------------
 drivers/net/tap/rte_eth_tap.h |  17 ++----
 2 files changed, 21 insertions(+), 96 deletions(-)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 69d9da695b..faf978b59e 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -432,7 +432,6 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct rx_queue *rxq = queue;
 	struct pmd_process_private *process_private;
 	uint16_t num_rx;
-	unsigned long num_rx_bytes = 0;
 	uint32_t trigger = tap_trigger;
 
 	if (trigger == rxq->trigger_seen)
@@ -455,7 +454,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		/* Packet couldn't fit in the provided mbuf */
 		if (unlikely(rxq->pi.flags & TUN_PKT_STRIP)) {
-			rxq->stats.ierrors++;
+			rte_eth_qsw_error_inc(&rxq->stats);
 			continue;
 		}
 
@@ -467,7 +466,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq->mp);
 
 			if (unlikely(!buf)) {
-				rxq->stats.rx_nombuf++;
+				struct rte_eth_dev *dev = &rte_eth_devices[rxq->in_port];
+				++dev->data->rx_mbuf_alloc_failed;
+
 				/* No new buf has been allocated: do nothing */
 				if (!new_tail || !seg)
 					goto end;
@@ -509,11 +510,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		/* account for the receive frame */
 		bufs[num_rx++] = mbuf;
-		num_rx_bytes += mbuf->pkt_len;
+		rte_eth_qsw_update(&rxq->stats, mbuf);
 	}
 end:
-	rxq->stats.ipackets += num_rx;
-	rxq->stats.ibytes += num_rx_bytes;
 	if (trigger && num_rx < nb_pkts)
 		rxq->trigger_seen = trigger;
 
@@ -523,8 +522,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 static inline int
 tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs,
-		struct rte_mbuf **pmbufs,
-		uint16_t *num_packets, unsigned long *num_tx_bytes)
+		struct rte_mbuf **pmbufs)
 {
 	struct pmd_process_private *process_private;
 	int i;
@@ -647,8 +645,7 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs,
 		if (n <= 0)
 			return -1;
 
-		(*num_packets)++;
-		(*num_tx_bytes) += rte_pktmbuf_pkt_len(mbuf);
+		rte_eth_qsw_update(&txq->stats, mbuf);
 	}
 	return 0;
 }
@@ -660,8 +657,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
 	struct tx_queue *txq = queue;
 	uint16_t num_tx = 0;
-	uint16_t num_packets = 0;
-	unsigned long num_tx_bytes = 0;
 	uint32_t max_size;
 	int i;
 
@@ -693,7 +688,7 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			tso_segsz = mbuf_in->tso_segsz + hdrs_len;
 			if (unlikely(tso_segsz == hdrs_len) ||
 				tso_segsz > *txq->mtu) {
-				txq->stats.errs++;
+				rte_eth_qsw_error_inc(&txq->stats);
 				break;
 			}
 			gso_ctx->gso_size = tso_segsz;
@@ -728,10 +723,10 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			num_mbufs = 1;
 		}
 
-		ret = tap_write_mbufs(txq, num_mbufs, mbuf,
-				      &num_packets, &num_tx_bytes);
+		ret = tap_write_mbufs(txq, num_mbufs, mbuf);
 		if (ret == -1) {
-			txq->stats.errs++;
+			rte_eth_qsw_error_inc(&txq->stats);
+
 			/* free tso mbufs */
 			if (num_tso_mbufs > 0)
 				rte_pktmbuf_free_bulk(mbuf, num_tso_mbufs);
@@ -749,10 +744,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 	}
 
-	txq->stats.opackets += num_packets;
-	txq->stats.errs += nb_pkts - num_tx;
-	txq->stats.obytes += num_tx_bytes;
-
 	return num_tx;
 }
 
@@ -1052,68 +1043,6 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
-static int
-tap_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *tap_stats)
-{
-	unsigned int i, imax;
-	unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0;
-	unsigned long rx_bytes_total = 0, tx_bytes_total = 0;
-	unsigned long rx_nombuf = 0, ierrors = 0;
-	const struct pmd_internals *pmd = dev->data->dev_private;
-
-	/* rx queue statistics */
-	imax = (dev->data->nb_rx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
-		dev->data->nb_rx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
-	for (i = 0; i < imax; i++) {
-		tap_stats->q_ipackets[i] = pmd->rxq[i].stats.ipackets;
-		tap_stats->q_ibytes[i] = pmd->rxq[i].stats.ibytes;
-		rx_total += tap_stats->q_ipackets[i];
-		rx_bytes_total += tap_stats->q_ibytes[i];
-		rx_nombuf += pmd->rxq[i].stats.rx_nombuf;
-		ierrors += pmd->rxq[i].stats.ierrors;
-	}
-
-	/* tx queue statistics */
-	imax = (dev->data->nb_tx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
-		dev->data->nb_tx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
-
-	for (i = 0; i < imax; i++) {
-		tap_stats->q_opackets[i] = pmd->txq[i].stats.opackets;
-		tap_stats->q_obytes[i] = pmd->txq[i].stats.obytes;
-		tx_total += tap_stats->q_opackets[i];
-		tx_err_total += pmd->txq[i].stats.errs;
-		tx_bytes_total += tap_stats->q_obytes[i];
-	}
-
-	tap_stats->ipackets = rx_total;
-	tap_stats->ibytes = rx_bytes_total;
-	tap_stats->ierrors = ierrors;
-	tap_stats->rx_nombuf = rx_nombuf;
-	tap_stats->opackets = tx_total;
-	tap_stats->oerrors = tx_err_total;
-	tap_stats->obytes = tx_bytes_total;
-	return 0;
-}
-
-static int
-tap_stats_reset(struct rte_eth_dev *dev)
-{
-	int i;
-	struct pmd_internals *pmd = dev->data->dev_private;
-
-	for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
-		pmd->rxq[i].stats.ipackets = 0;
-		pmd->rxq[i].stats.ibytes = 0;
-		pmd->rxq[i].stats.ierrors = 0;
-		pmd->rxq[i].stats.rx_nombuf = 0;
-
-		pmd->txq[i].stats.opackets = 0;
-		pmd->txq[i].stats.errs = 0;
-		pmd->txq[i].stats.obytes = 0;
-	}
-
-	return 0;
-}
 
 static int
 tap_dev_close(struct rte_eth_dev *dev)
@@ -1917,8 +1846,11 @@ static const struct eth_dev_ops ops = {
 	.mac_addr_set = tap_mac_set,
 	.mtu_set = tap_mtu_set,
 	.set_mc_addr_list = tap_set_mc_addr_list,
-	.stats_get = tap_stats_get,
-	.stats_reset = tap_stats_reset,
+	.stats_get = rte_eth_qsw_stats_get,
+	.stats_reset = rte_eth_qsw_stats_reset,
+	.xstats_get_names = rte_eth_qsw_xstats_get_names,
+	.xstats_get = rte_eth_qsw_xstats_get,
+	.xstats_reset = rte_eth_qsw_stats_reset,
 	.dev_supported_ptypes_get = tap_dev_supported_ptypes_get,
 	.rss_hash_update = tap_rss_hash_update,
 	.flow_ops_get = tap_dev_flow_ops_get,
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index 5ac93f93e9..c05a89a6ab 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -14,6 +14,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include "tap_log.h"
@@ -32,22 +33,13 @@ enum rte_tuntap_type {
 	ETH_TUNTAP_TYPE_MAX,
 };
 
-struct pkt_stats {
-	uint64_t opackets; /* Number of output packets */
-	uint64_t ipackets; /* Number of input packets */
-	uint64_t obytes; /* Number of bytes on output */
-	uint64_t ibytes; /* Number of bytes on input */
-	uint64_t errs; /* Number of TX error packets */
-	uint64_t ierrors; /* Number of RX error packets */
-	uint64_t rx_nombuf; /* Nb of RX mbuf alloc failures */
-};
-
 struct rx_queue {
+	struct rte_eth_qsw_stats stats; /* MUST BE FIRST */
+
 	struct rte_mempool *mp; /* Mempool for RX packets */
 	uint32_t trigger_seen; /* Last seen Rx trigger value */
 	uint16_t in_port; /* Port ID */
 	uint16_t queue_id; /* queue ID*/
-	struct pkt_stats stats; /* Stats for this RX queue */
 	uint16_t nb_rx_desc; /* max number of mbufs available */
 	struct rte_eth_rxmode *rxmode; /* RX features */
 	struct rte_mbuf *pool; /* mbufs pool for this queue */
@@ -56,10 +48,11 @@ struct rx_queue {
 };
 
 struct tx_queue {
+	struct rte_eth_qsw_stats stats; /* MUST BE FIRST */
+
 	int type; /* Type field - TUN|TAP */
 	uint16_t *mtu; /* Pointer to MTU from dev_data */
 	uint16_t csum:1; /* Enable checksum offloading */
-	struct pkt_stats stats; /* Stats for this TX queue */
 	struct rte_gso_ctx gso_ctx; /* GSO context */
 	uint16_t out_port; /* Port ID */
 	uint16_t queue_id; /* queue ID*/