From patchwork Mon May 13 18:52:11 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140040
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [RFC v2 1/7] eal: generic 64 bit counter
Date: Mon, 13 May 2024 11:52:11 -0700
Message-ID: <20240513185448.120356-2-stephen@networkplumber.org>
In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org>
References: <20240510050507.14381-1-stephen@networkplumber.org>
 <20240513185448.120356-1-stephen@networkplumber.org>

This header implements 64 bit counters that are NOT atomic but are
safe against load/store splits on 32 bit platforms.

Signed-off-by: Stephen Hemminger
Acked-by: Morten Brørup
---
 lib/eal/include/meson.build   |  1 +
 lib/eal/include/rte_counter.h | 91 +++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)
 create mode 100644 lib/eal/include/rte_counter.h

diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..c070dd0079 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -12,6 +12,7 @@ headers += files(
         'rte_class.h',
         'rte_common.h',
         'rte_compat.h',
+        'rte_counter.h',
         'rte_debug.h',
         'rte_dev.h',
         'rte_devargs.h',
diff --git a/lib/eal/include/rte_counter.h b/lib/eal/include/rte_counter.h
new file mode 100644
index 0000000000..1c1c34c2fb
--- /dev/null
+++ b/lib/eal/include/rte_counter.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Stephen Hemminger
+ */
+
+#ifndef _RTE_COUNTER_H_
+#define _RTE_COUNTER_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Counter
+ *
+ * A counter is a 64 bit value that is safe from split read/write
+ * on 32 bit platforms. It assumes that only one CPU at a time
+ * will update the counter, while another CPU may want to read it.
+ *
+ * This is a much weaker guarantee than @rte_atomic but is faster
+ * since no locked operations are required for update.
+ */
+
+#include <stdint.h>
+
+#ifdef RTE_ARCH_64
+/*
+ * On a platform with a native 64 bit type, no special handling is
+ * needed. These are just wrappers around a 64 bit value.
+ */
+typedef uint64_t rte_counter64_t;
+
+/**
+ * Add value to counter.
+ */
+__rte_experimental
+static inline void
+rte_counter64_add(rte_counter64_t *counter, uint32_t val)
+{
+        *counter += val;
+}
+
+__rte_experimental
+static inline uint64_t
+rte_counter64_fetch(const rte_counter64_t *counter)
+{
+        return *counter;
+}
+
+__rte_experimental
+static inline void
+rte_counter64_reset(rte_counter64_t *counter)
+{
+        *counter = 0;
+}
+
+#else
+/*
+ * On a 32 bit platform, atomics are needed to force the compiler not to
+ * split the 64 bit read/write.
+ */
+typedef RTE_ATOMIC(uint64_t) rte_counter64_t;
+
+__rte_experimental
+static inline void
+rte_counter64_add(rte_counter64_t *counter, uint32_t val)
+{
+        rte_atomic_fetch_add_explicit(counter, val, rte_memory_order_relaxed);
+}
+
+__rte_experimental
+static inline uint64_t
+rte_counter64_fetch(rte_counter64_t *counter)
+{
+        return rte_atomic_load_explicit(counter, rte_memory_order_relaxed);
+}
+
+__rte_experimental
+static inline void
+rte_counter64_reset(rte_counter64_t *counter)
+{
+        rte_atomic_store_explicit(counter, 0, rte_memory_order_relaxed);
+}
+#endif
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_COUNTER_H_ */
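For readers following the series, here is a minimal sketch of the intended usage:
one datapath lcore updates a counter, and a control-path thread may read or reset
it without risking torn 64 bit accesses. The demo_rxq structure and the demo_*
functions are hypothetical illustrations; only rte_counter64_t and the
rte_counter64_add()/rte_counter64_fetch()/rte_counter64_reset() calls come from
the rte_counter.h header added above.

#include <stdint.h>
#include <rte_counter.h>        /* header added by this patch */

/* Hypothetical per-queue stats block: written by one lcore only. */
struct demo_rxq {
        rte_counter64_t packets;
        rte_counter64_t bytes;
};

/* Datapath side: called only from the lcore that owns the queue.
 * Compiles to a plain add on 64 bit targets, a relaxed atomic add on 32 bit. */
static inline void
demo_rx_account(struct demo_rxq *q, uint32_t nb_pkts, uint32_t nb_bytes)
{
        rte_counter64_add(&q->packets, nb_pkts);
        rte_counter64_add(&q->bytes, nb_bytes);
}

/* Control path: another thread may sample the counters at any time.
 * The value read is never torn, but it may lag the writer slightly. */
static inline uint64_t
demo_rx_packets(struct demo_rxq *q)
{
        return rte_counter64_fetch(&q->packets);
}

/* Reset from the control path; with the single-writer assumption a
 * concurrent update can still be lost, which is acceptable for statistics. */
static inline void
demo_rx_clear(struct demo_rxq *q)
{
        rte_counter64_reset(&q->packets);
        rte_counter64_reset(&q->bytes);
}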
From patchwork Mon May 13 18:52:12 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140041
Received: from hermes.local (204-195-96-226.wavecable.com.
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-634103f7237sm8154680a12.71.2024.05.13.11.54.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 13 May 2024 11:54:59 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko Subject: [RFC v2 2/7] ethdev: add internal helper of SW driver statistics Date: Mon, 13 May 2024 11:52:12 -0700 Message-ID: <20240513185448.120356-3-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240513185448.120356-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This clones the staistic update code from virtio for use by other drivers. It also uses native uint64_t on 64 bit platform but atomic operations on 32 bit platforms. Signed-off-by: Stephen Hemminger ethdev: use atomic on 32 --- lib/ethdev/ethdev_swstats.c | 270 ++++++++++++++++++++++++++++++++++++ lib/ethdev/ethdev_swstats.h | 54 ++++++++ lib/ethdev/meson.build | 2 + lib/ethdev/version.map | 8 ++ 4 files changed, 334 insertions(+) create mode 100644 lib/ethdev/ethdev_swstats.c create mode 100644 lib/ethdev/ethdev_swstats.h diff --git a/lib/ethdev/ethdev_swstats.c b/lib/ethdev/ethdev_swstats.c new file mode 100644 index 0000000000..4c0fa36ac3 --- /dev/null +++ b/lib/ethdev/ethdev_swstats.c @@ -0,0 +1,270 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + +#include + +#include +#include + +#include "rte_ethdev.h" +#include "ethdev_swstats.h" + +static void +eth_counters_reset(struct rte_eth_counters *counters) +{ + unsigned int i; + + rte_counter64_reset(&counters->packets); + rte_counter64_reset(&counters->bytes); + rte_counter64_reset(&counters->multicast); + rte_counter64_reset(&counters->broadcast); + + for (i = 0; i < RTE_DIM(counters->size_bins); i++) + rte_counter64_reset(&counters->size_bins[i]); +} + +void +rte_eth_count_packet(struct rte_eth_counters *counters, uint32_t sz) +{ + uint32_t bin; + + if (sz == 64) { + bin = 1; + } else if (sz > 64 && sz < 1024) { + /* count zeros, and offset into correct bin */ + bin = (sizeof(sz) * 8) - rte_clz32(sz) - 5; + } else if (sz < 64) { + bin = 0; + } else if (sz < 1519) { + bin = 6; + } else { + bin = 7; + } + + rte_counter64_add(&counters->packets, 1); + rte_counter64_add(&counters->bytes, sz); + rte_counter64_add(&counters->size_bins[bin], 1); +} + +void +rte_eth_count_mbuf(struct rte_eth_counters *counters, const struct rte_mbuf *mbuf) +{ + const struct rte_ether_addr *ea; + + rte_eth_count_packet(counters, rte_pktmbuf_pkt_len(mbuf)); + + ea = rte_pktmbuf_mtod(mbuf, const struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + rte_counter64_add(&counters->broadcast, 1); + else + rte_counter64_add(&counters->multicast, 1); + } +} + +void +rte_eth_count_error(struct rte_eth_counters *counters) +{ + rte_counter64_add(&counters->errors, 1); +} + +int +rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats) +{ + unsigned int i; + uint64_t packets, bytes, errors; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + const void *txq = 
dev->data->tx_queues[i]; + const struct rte_eth_counters *counters; + + if (txq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)txq + tx_offset); + packets = rte_counter64_fetch(&counters->packets); + bytes = rte_counter64_fetch(&counters->bytes); + errors = rte_counter64_fetch(&counters->errors); + + stats->opackets += packets; + stats->obytes += bytes; + stats->oerrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + stats->q_opackets[i] = packets; + stats->q_obytes[i] = bytes; + } + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + const void *rxq = dev->data->rx_queues[i]; + const struct rte_eth_counters *counters; + + if (rxq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)rxq + rx_offset); + packets = rte_counter64_fetch(&counters->packets); + bytes = rte_counter64_fetch(&counters->bytes); + errors = rte_counter64_fetch(&counters->errors); + + stats->ipackets += packets; + stats->ibytes += bytes; + stats->ierrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + stats->q_ipackets[i] = packets; + stats->q_ibytes[i] = bytes; + } + } + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + return 0; +} + +int +rte_eth_counters_reset(struct rte_eth_dev *dev, size_t tx_offset, size_t rx_offset) +{ + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + void *txq = dev->data->tx_queues[i]; + struct rte_eth_counters *counters; + + if (txq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)txq + tx_offset); + eth_counters_reset(counters); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + void *rxq = dev->data->rx_queues[i]; + struct rte_eth_counters *counters; + + if (rxq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)rxq + rx_offset); + eth_counters_reset(counters); + } + + return 0; +} + +struct xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + size_t offset; +}; + +/* [rt]x_qX_ is prepended to the name string here */ +static const struct xstats_name_off eth_swstats_strings[] = { + {"good_packets", offsetof(struct rte_eth_counters, packets)}, + {"good_bytes", offsetof(struct rte_eth_counters, bytes)}, + {"errors", offsetof(struct rte_eth_counters, errors)}, + {"multicast_packets", offsetof(struct rte_eth_counters, multicast)}, + {"broadcast_packets", offsetof(struct rte_eth_counters, broadcast)}, + {"undersize_packets", offsetof(struct rte_eth_counters, size_bins[0])}, + {"size_64_packets", offsetof(struct rte_eth_counters, size_bins[1])}, + {"size_65_127_packets", offsetof(struct rte_eth_counters, size_bins[2])}, + {"size_128_255_packets", offsetof(struct rte_eth_counters, size_bins[3])}, + {"size_256_511_packets", offsetof(struct rte_eth_counters, size_bins[4])}, + {"size_512_1023_packets", offsetof(struct rte_eth_counters, size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct rte_eth_counters, size_bins[6])}, + {"size_1519_max_packets", offsetof(struct rte_eth_counters, size_bins[7])}, +}; +#define NUM_SWSTATS_XSTATS RTE_DIM(eth_swstats_strings) + + +int +rte_eth_counters_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names) +{ + unsigned int i, t, count = 0; + + if (xstats_names == NULL) + return (dev->data->nb_tx_queues + dev->data->nb_rx_queues) * NUM_SWSTATS_XSTATS; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + const void *rxq = dev->data->rx_queues[i]; + + if (rxq == NULL) + continue; + + for (t = 0; t < NUM_SWSTATS_XSTATS; t++) { + snprintf(xstats_names[count].name, 
sizeof(xstats_names[count].name), + "rx_q%u_%s", i, eth_swstats_strings[t].name); + count++; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + const void *txq = dev->data->tx_queues[i]; + + if (txq == NULL) + continue; + + for (t = 0; t < NUM_SWSTATS_XSTATS; t++) { + snprintf(xstats_names[count].name, sizeof(xstats_names[count].name), + "tx_q%u_%s", i, eth_swstats_strings[t].name); + count++; + } + } + return count; +} + +int +rte_eth_counters_xstats_get(struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_xstat *xstats, unsigned int n) +{ + unsigned int i, t, count = 0; + const unsigned int nstats + = (dev->data->nb_tx_queues + dev->data->nb_rx_queues) * NUM_SWSTATS_XSTATS; + + if (n < nstats) + return nstats; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + const void *rxq = dev->data->rx_queues[i]; + const struct rte_eth_counters *counters; + + if (rxq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)rxq + rx_offset); + for (t = 0; t < NUM_SWSTATS_XSTATS; t++) { + const uint64_t *valuep + = (const uint64_t *)((const char *)counters + + eth_swstats_strings[t].offset); + + xstats[count].value = *valuep; + xstats[count].id = count; + ++count; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + const void *txq = dev->data->tx_queues[i]; + const struct rte_eth_counters *counters; + + if (txq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)txq + tx_offset); + for (t = 0; t < NUM_SWSTATS_XSTATS; t++) { + const uint64_t *valuep + = (const uint64_t *)((const char *)counters + + eth_swstats_strings[t].offset); + + xstats[count].value = *valuep; + xstats[count].id = count; + ++count; + } + } + + return count; +} diff --git a/lib/ethdev/ethdev_swstats.h b/lib/ethdev/ethdev_swstats.h new file mode 100644 index 0000000000..45b419b887 --- /dev/null +++ b/lib/ethdev/ethdev_swstats.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + +#ifndef _RTE_ETHDEV_SWSTATS_H_ +#define _RTE_ETHDEV_SWSTATS_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include + +struct rte_eth_counters { + rte_counter64_t packets; + rte_counter64_t bytes; + rte_counter64_t errors; + rte_counter64_t multicast; + rte_counter64_t broadcast; + /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */ + rte_counter64_t size_bins[8]; +}; + +__rte_internal +void rte_eth_count_packet(struct rte_eth_counters *counters, uint32_t size); + +__rte_internal +void rte_eth_count_mbuf(struct rte_eth_counters *counters, const struct rte_mbuf *mbuf); + +__rte_internal +void rte_eth_count_error(struct rte_eth_counters *stats); + +__rte_internal +int rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats); + +__rte_internal +int rte_eth_counters_reset(struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset); + +__rte_internal +int rte_eth_counters_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names); +__rte_internal +int rte_eth_counters_xstats_get(struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_xstat *xstats, unsigned int n); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_ETHDEV_SWSTATS_H_ */ diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index f1d2586591..7ce29a46d4 100644 --- a/lib/ethdev/meson.build +++ b/lib/ethdev/meson.build @@ -3,6 +3,7 @@ sources = files( 'ethdev_driver.c', + 'ethdev_swstats.c', 
        'ethdev_private.c',
        'ethdev_profile.c',
        'ethdev_trace_points.c',
@@ -42,6 +43,7 @@ driver_sdk_headers += files(
         'ethdev_driver.h',
         'ethdev_pci.h',
         'ethdev_vdev.h',
+        'ethdev_swstats.h',
 )

 if is_linux
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 79f6f5293b..1ca53e2c5d 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -358,4 +358,12 @@ INTERNAL {
        rte_eth_switch_domain_alloc;
        rte_eth_switch_domain_free;
        rte_flow_fp_default_ops;
+
+       rte_eth_count_error;
+       rte_eth_count_mbuf;
+       rte_eth_count_packet;
+       rte_eth_counters_reset;
+       rte_eth_counters_stats_get;
+       rte_eth_counters_xstats_get;
+       rte_eth_counters_xstats_get_names;
 };
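To make the intended usage concrete, the sketch below shows how a software PMD
could embed the new counters in its queue structures and route its stats/xstats
callbacks through these helpers; it mirrors what the later patches in this series
do for af_packet, tap, pcap and af_xdp. The demo_* names are hypothetical; only
struct rte_eth_counters and the rte_eth_count_*/rte_eth_counters_* functions are
provided by this patch.

#include <stddef.h>
#include <rte_mbuf.h>
#include <ethdev_driver.h>
#include <ethdev_swstats.h>

/* Hypothetical queue structures; the embedded stats block is all the
 * driver needs to add. */
struct demo_rxq {
        struct rte_eth_counters stats;
        /* ... driver specific fields ... */
};

struct demo_txq {
        struct rte_eth_counters stats;
        /* ... driver specific fields ... */
};

/* Datapath: account each mbuf (packets, bytes, multicast/broadcast, size bins). */
static inline void
demo_rx_done(struct demo_rxq *rxq, struct rte_mbuf *m)
{
        rte_eth_count_mbuf(&rxq->stats, m);
}

/* The eth_dev_ops callbacks reduce to wrappers around the helpers, which
 * locate the per-queue counters via offsetof() of the stats member. */
static int
demo_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
        return rte_eth_counters_stats_get(dev, offsetof(struct demo_txq, stats),
                                          offsetof(struct demo_rxq, stats), stats);
}

static int
demo_stats_reset(struct rte_eth_dev *dev)
{
        return rte_eth_counters_reset(dev, offsetof(struct demo_txq, stats),
                                      offsetof(struct demo_rxq, stats));
}

static int
demo_xstats_get_names(struct rte_eth_dev *dev,
                      struct rte_eth_xstat_name *names,
                      __rte_unused unsigned int limit)
{
        return rte_eth_counters_xstats_get_names(dev, names);
}

static int
demo_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n)
{
        return rte_eth_counters_xstats_get(dev, offsetof(struct demo_txq, stats),
                                           offsetof(struct demo_rxq, stats), xstats, n);
}

With this wiring, each queue reports xstats named from the eth_swstats_strings
table above, e.g. rx_q0_good_packets or tx_q1_size_64_packets.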
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-634103f7237sm8154680a12.71.2024.05.13.11.55.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 13 May 2024 11:55:00 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , "John W. Linville" Subject: [RFC v2 3/7] net/af_packet: use SW stats helper Date: Mon, 13 May 2024 11:52:13 -0700 Message-ID: <20240513185448.120356-4-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240513185448.120356-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use the new generic SW stats. Signed-off-by: Stephen Hemminger --- drivers/net/af_packet/rte_eth_af_packet.c | 95 +++++++---------------- 1 file changed, 29 insertions(+), 66 deletions(-) diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c index 397a32db58..2d42f3e723 100644 --- a/drivers/net/af_packet/rte_eth_af_packet.c +++ b/drivers/net/af_packet/rte_eth_af_packet.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -29,6 +30,7 @@ #include #include + #define ETH_AF_PACKET_IFACE_ARG "iface" #define ETH_AF_PACKET_NUM_Q_ARG "qpairs" #define ETH_AF_PACKET_BLOCKSIZE_ARG "blocksz" @@ -51,8 +53,7 @@ struct pkt_rx_queue { uint16_t in_port; uint8_t vlan_strip; - volatile unsigned long rx_pkts; - volatile unsigned long rx_bytes; + struct rte_eth_counters stats; }; struct pkt_tx_queue { @@ -64,11 +65,10 @@ struct pkt_tx_queue { unsigned int framecount; unsigned int framenum; - volatile unsigned long tx_pkts; - volatile unsigned long err_pkts; - volatile unsigned long tx_bytes; + struct rte_eth_counters stats; }; + struct pmd_internals { unsigned nb_queues; @@ -118,8 +118,6 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; uint8_t *pbuf; struct pkt_rx_queue *pkt_q = queue; - uint16_t num_rx = 0; - unsigned long num_rx_bytes = 0; unsigned int framecount, framenum; if (unlikely(nb_pkts == 0)) @@ -164,13 +162,11 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* account for the receive frame */ bufs[i] = mbuf; - num_rx++; - num_rx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&pkt_q->stats, mbuf); } pkt_q->framenum = framenum; - pkt_q->rx_pkts += num_rx; - pkt_q->rx_bytes += num_rx_bytes; - return num_rx; + + return i; } /* @@ -205,8 +201,6 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) unsigned int framecount, framenum; struct pollfd pfd; struct pkt_tx_queue *pkt_q = queue; - uint16_t num_tx = 0; - unsigned long num_tx_bytes = 0; int i; if (unlikely(nb_pkts == 0)) @@ -285,8 +279,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) framenum = 0; ppd = (struct tpacket2_hdr *) pkt_q->rd[framenum].iov_base; - num_tx++; - num_tx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&pkt_q->stats, mbuf); rte_pktmbuf_free(mbuf); } @@ -298,15 +291,9 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) * packets will be considered successful even though only some * are sent. 
*/ - - num_tx = 0; - num_tx_bytes = 0; } pkt_q->framenum = framenum; - pkt_q->tx_pkts += num_tx; - pkt_q->err_pkts += i - num_tx; - pkt_q->tx_bytes += num_tx_bytes; return i; } @@ -386,58 +373,31 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) } static int -eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats) +eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned i, imax; - unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0; - unsigned long rx_bytes_total = 0, tx_bytes_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? - internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_ipackets[i] = internal->rx_queue[i].rx_pkts; - igb_stats->q_ibytes[i] = internal->rx_queue[i].rx_bytes; - rx_total += igb_stats->q_ipackets[i]; - rx_bytes_total += igb_stats->q_ibytes[i]; - } - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? - internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_opackets[i] = internal->tx_queue[i].tx_pkts; - igb_stats->q_obytes[i] = internal->tx_queue[i].tx_bytes; - tx_total += igb_stats->q_opackets[i]; - tx_err_total += internal->tx_queue[i].err_pkts; - tx_bytes_total += igb_stats->q_obytes[i]; - } - - igb_stats->ipackets = rx_total; - igb_stats->ibytes = rx_bytes_total; - igb_stats->opackets = tx_total; - igb_stats->oerrors = tx_err_total; - igb_stats->obytes = tx_bytes_total; - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned i; - struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < internal->nb_queues; i++) { - internal->rx_queue[i].rx_pkts = 0; - internal->rx_queue[i].rx_bytes = 0; - } + return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats)); +} - for (i = 0; i < internal->nb_queues; i++) { - internal->tx_queue[i].tx_pkts = 0; - internal->tx_queue[i].err_pkts = 0; - internal->tx_queue[i].tx_bytes = 0; - } +static int eth_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *names, + __rte_unused unsigned int limit) +{ + return rte_eth_counters_xstats_get_names(dev, names); +} - return 0; +static int +eth_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n) +{ + return rte_eth_counters_xstats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), xstats, n); } static int @@ -636,6 +596,9 @@ static const struct eth_dev_ops ops = { .link_update = eth_link_update, .stats_get = eth_stats_get, .stats_reset = eth_stats_reset, + .xstats_get = eth_xstats_get, + .xstats_get_names = eth_xstats_get_names, + .xstats_reset = eth_stats_reset, }; /* From patchwork Mon May 13 18:52:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140043 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 755E54401D; Mon, 13 May 2024 20:55:27 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP 
id 733874069D; Mon, 13 May 2024 20:55:05 +0200 (CEST) Received: from mail-pg1-f175.google.com (mail-pg1-f175.google.com [209.85.215.175]) by mails.dpdk.org (Postfix) with ESMTP id 8621D40685 for ; Mon, 13 May 2024 20:55:02 +0200 (CEST) Received: by mail-pg1-f175.google.com with SMTP id 41be03b00d2f7-53fbf2c42bfso3525291a12.3 for ; Mon, 13 May 2024 11:55:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715626502; x=1716231302; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=sLW788+dv0ewNWwe8e9Bp3XNLvfLTrGeJ2Aby7rQctM=; b=mAAiBQVztZ8Dp75XI40LK5gkqHAFsBBZkhCi2jKnR53DjjgzOn0EO5RNhl5avBuYc3 Ysw5/WpFR3zTSNn6U0tMkR5J8Y86XGJyGPWj9V16UknjoAq6wHqJ3ZoUQsfXVhfApGVK MXqv8ZcCQj76ulzjwmoPm035+16H/bOym14pph+zPqMvsTVjRQP8TUCauL45Q9b3wlwH 6ZvKDutjrb9DSn8N7FN++DSWlDW1m1kg6fYkqAhO7NyOXHIhQBp4symYE+JQa9hrBy7K 1By1KfNrVrdkL+t81ufa1VEg5wzrIiCYLVPr0QYuFAs9P4dVykQ/R1i0UdRz0nVcl+j5 gFSw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715626502; x=1716231302; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=sLW788+dv0ewNWwe8e9Bp3XNLvfLTrGeJ2Aby7rQctM=; b=KhScT+6BclMBzvxlhxvviM6srFgVS4FqzFWjFfvaVMYD/f/eZW7IRlEAwhnZ4hOpV3 MKfhBHJ7XcL5ne8KKRkZXPiD9QTLHXoYhkO9Jaexd+Vf1R/X8ynn0p5WSPquvwRNYfWF J0pc66a/U33SG+1ZteXpLYvQkwShbW/XdocwGQiyD/CB+kY8905C8+x/KYWYVd99/fH6 WHvPONurG08jK88kyCsA3VscyLwu2OuEFJtwOx9LZYmOXSREw8TqN8izndTxQqPfNf+h NP/P2Vvik6qvZghYKFCIgqctu3V0FqzO7CdvDyx0mc5mXcMlrc14TlP1CUyrMyWB7uDC 6i6w== X-Gm-Message-State: AOJu0YwQFqVJ9Ex70cbvku+T2j0dAZGb42h2h/N7nv0Sudn9TXneob8f A/dcAvK4aIiMSGUFtNHPxx2nWX8OUCh1t4J84lFjF6XDM7aFMjcs3xGvWDfKnXjy2yMb3YbRn7N vZxmfhg== X-Google-Smtp-Source: AGHT+IGEa40IumNEvVFXcJEPK8ryBgCcvijzmw+5Vdk2Hc0Hhcc0SBTv80TN3DJ69WovTLPlADLr/A== X-Received: by 2002:a05:6a20:dc95:b0:1a7:60d8:a6dd with SMTP id adf61e73a8af0-1afde1df3b4mr12769015637.53.1715626501721; Mon, 13 May 2024 11:55:01 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-634103f7237sm8154680a12.71.2024.05.13.11.55.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 13 May 2024 11:55:01 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [RFC v2 4/7] net/tap: use generic SW stats Date: Mon, 13 May 2024 11:52:14 -0700 Message-ID: <20240513185448.120356-5-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240513185448.120356-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use new common sw statistics. 
Signed-off-by: Stephen Hemminger --- drivers/net/tap/rte_eth_tap.c | 102 +++++++++++----------------------- drivers/net/tap/rte_eth_tap.h | 15 +---- 2 files changed, 34 insertions(+), 83 deletions(-) diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index 69d9da695b..ae1000a088 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -432,7 +432,6 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rx_queue *rxq = queue; struct pmd_process_private *process_private; uint16_t num_rx; - unsigned long num_rx_bytes = 0; uint32_t trigger = tap_trigger; if (trigger == rxq->trigger_seen) @@ -455,7 +454,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* Packet couldn't fit in the provided mbuf */ if (unlikely(rxq->pi.flags & TUN_PKT_STRIP)) { - rxq->stats.ierrors++; + rte_eth_count_error(&rxq->stats); continue; } @@ -467,7 +466,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq->mp); if (unlikely(!buf)) { - rxq->stats.rx_nombuf++; + struct rte_eth_dev *dev = &rte_eth_devices[rxq->in_port]; + ++dev->data->rx_mbuf_alloc_failed; + /* No new buf has been allocated: do nothing */ if (!new_tail || !seg) goto end; @@ -509,11 +510,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* account for the receive frame */ bufs[num_rx++] = mbuf; - num_rx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&rxq->stats, mbuf); } end: - rxq->stats.ipackets += num_rx; - rxq->stats.ibytes += num_rx_bytes; if (trigger && num_rx < nb_pkts) rxq->trigger_seen = trigger; @@ -523,8 +522,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) static inline int tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs, - struct rte_mbuf **pmbufs, - uint16_t *num_packets, unsigned long *num_tx_bytes) + struct rte_mbuf **pmbufs) { struct pmd_process_private *process_private; int i; @@ -647,8 +645,7 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs, if (n <= 0) return -1; - (*num_packets)++; - (*num_tx_bytes) += rte_pktmbuf_pkt_len(mbuf); + rte_eth_count_mbuf(&txq->stats, mbuf); } return 0; } @@ -660,8 +657,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { struct tx_queue *txq = queue; uint16_t num_tx = 0; - uint16_t num_packets = 0; - unsigned long num_tx_bytes = 0; uint32_t max_size; int i; @@ -693,7 +688,7 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) tso_segsz = mbuf_in->tso_segsz + hdrs_len; if (unlikely(tso_segsz == hdrs_len) || tso_segsz > *txq->mtu) { - txq->stats.errs++; + rte_eth_count_error(&txq->stats); break; } gso_ctx->gso_size = tso_segsz; @@ -728,10 +723,10 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) num_mbufs = 1; } - ret = tap_write_mbufs(txq, num_mbufs, mbuf, - &num_packets, &num_tx_bytes); + ret = tap_write_mbufs(txq, num_mbufs, mbuf); if (ret == -1) { - txq->stats.errs++; + rte_eth_count_error(&txq->stats); + /* free tso mbufs */ if (num_tso_mbufs > 0) rte_pktmbuf_free_bulk(mbuf, num_tso_mbufs); @@ -749,10 +744,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } } - txq->stats.opackets += num_packets; - txq->stats.errs += nb_pkts - num_tx; - txq->stats.obytes += num_tx_bytes; - return num_tx; } @@ -1055,64 +1046,30 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int tap_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *tap_stats) { - unsigned int i, imax; - 
unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0; - unsigned long rx_bytes_total = 0, tx_bytes_total = 0; - unsigned long rx_nombuf = 0, ierrors = 0; - const struct pmd_internals *pmd = dev->data->dev_private; - - /* rx queue statistics */ - imax = (dev->data->nb_rx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? - dev->data->nb_rx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS; - for (i = 0; i < imax; i++) { - tap_stats->q_ipackets[i] = pmd->rxq[i].stats.ipackets; - tap_stats->q_ibytes[i] = pmd->rxq[i].stats.ibytes; - rx_total += tap_stats->q_ipackets[i]; - rx_bytes_total += tap_stats->q_ibytes[i]; - rx_nombuf += pmd->rxq[i].stats.rx_nombuf; - ierrors += pmd->rxq[i].stats.ierrors; - } - - /* tx queue statistics */ - imax = (dev->data->nb_tx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? - dev->data->nb_tx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS; - - for (i = 0; i < imax; i++) { - tap_stats->q_opackets[i] = pmd->txq[i].stats.opackets; - tap_stats->q_obytes[i] = pmd->txq[i].stats.obytes; - tx_total += tap_stats->q_opackets[i]; - tx_err_total += pmd->txq[i].stats.errs; - tx_bytes_total += tap_stats->q_obytes[i]; - } - - tap_stats->ipackets = rx_total; - tap_stats->ibytes = rx_bytes_total; - tap_stats->ierrors = ierrors; - tap_stats->rx_nombuf = rx_nombuf; - tap_stats->opackets = tx_total; - tap_stats->oerrors = tx_err_total; - tap_stats->obytes = tx_bytes_total; - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct tx_queue, stats), + offsetof(struct rx_queue, stats), tap_stats); } static int tap_stats_reset(struct rte_eth_dev *dev) { - int i; - struct pmd_internals *pmd = dev->data->dev_private; - - for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) { - pmd->rxq[i].stats.ipackets = 0; - pmd->rxq[i].stats.ibytes = 0; - pmd->rxq[i].stats.ierrors = 0; - pmd->rxq[i].stats.rx_nombuf = 0; + return rte_eth_counters_reset(dev, offsetof(struct tx_queue, stats), + offsetof(struct rx_queue, stats)); +} - pmd->txq[i].stats.opackets = 0; - pmd->txq[i].stats.errs = 0; - pmd->txq[i].stats.obytes = 0; - } +static int +tap_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *names, + __rte_unused unsigned int limit) +{ + return rte_eth_counters_xstats_get_names(dev, names); +} - return 0; +static int +tap_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n) +{ + return rte_eth_counters_xstats_get(dev, offsetof(struct tx_queue, stats), + offsetof(struct rx_queue, stats), xstats, n); } static int @@ -1919,6 +1876,9 @@ static const struct eth_dev_ops ops = { .set_mc_addr_list = tap_set_mc_addr_list, .stats_get = tap_stats_get, .stats_reset = tap_stats_reset, + .xstats_get_names = tap_xstats_get_names, + .xstats_get = tap_xstats_get, + .xstats_reset = tap_stats_reset, .dev_supported_ptypes_get = tap_dev_supported_ptypes_get, .rss_hash_update = tap_rss_hash_update, .flow_ops_get = tap_dev_flow_ops_get, diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h index 5ac93f93e9..8cba9ea410 100644 --- a/drivers/net/tap/rte_eth_tap.h +++ b/drivers/net/tap/rte_eth_tap.h @@ -14,6 +14,7 @@ #include #include +#include #include #include #include "tap_log.h" @@ -32,23 +33,13 @@ enum rte_tuntap_type { ETH_TUNTAP_TYPE_MAX, }; -struct pkt_stats { - uint64_t opackets; /* Number of output packets */ - uint64_t ipackets; /* Number of input packets */ - uint64_t obytes; /* Number of bytes on output */ - uint64_t ibytes; /* Number of bytes on input */ - uint64_t errs; /* Number of TX error packets */ - uint64_t ierrors; /* Number of RX error packets */ - uint64_t rx_nombuf; 
/* Nb of RX mbuf alloc failures */ -}; - struct rx_queue { struct rte_mempool *mp; /* Mempool for RX packets */ uint32_t trigger_seen; /* Last seen Rx trigger value */ uint16_t in_port; /* Port ID */ uint16_t queue_id; /* queue ID*/ - struct pkt_stats stats; /* Stats for this RX queue */ uint16_t nb_rx_desc; /* max number of mbufs available */ + struct rte_eth_counters stats; /* Stats for this RX queue */ struct rte_eth_rxmode *rxmode; /* RX features */ struct rte_mbuf *pool; /* mbufs pool for this queue */ struct iovec (*iovecs)[]; /* descriptors for this queue */ @@ -59,7 +50,7 @@ struct tx_queue { int type; /* Type field - TUN|TAP */ uint16_t *mtu; /* Pointer to MTU from dev_data */ uint16_t csum:1; /* Enable checksum offloading */ - struct pkt_stats stats; /* Stats for this TX queue */ + struct rte_eth_counters stats; /* Stats for this TX queue */ struct rte_gso_ctx gso_ctx; /* GSO context */ uint16_t out_port; /* Port ID */ uint16_t queue_id; /* queue ID*/ From patchwork Mon May 13 18:52:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140044 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 06A364401D; Mon, 13 May 2024 20:55:36 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4828640A6B; Mon, 13 May 2024 20:55:07 +0200 (CEST) Received: from mail-oo1-f50.google.com (mail-oo1-f50.google.com [209.85.161.50]) by mails.dpdk.org (Postfix) with ESMTP id 7F5A540695 for ; Mon, 13 May 2024 20:55:03 +0200 (CEST) Received: by mail-oo1-f50.google.com with SMTP id 006d021491bc7-5b2baa24c2bso867442eaf.0 for ; Mon, 13 May 2024 11:55:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715626503; x=1716231303; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=zQ+ooLtUkL2XITzIc3SCHv9JSV9L31lszmoiqikUo6c=; b=0fm+uWLEgmUGZboZa2FqIiiS6s5iQLrCib6Hu7xSWzjCS24MkrOVY2+LfCplDWwZ7W pjXHqAfn480WvCJn7Re8+8F10U4UtCNWseoNNIpkPCVFX1ty6a1MZBOl3oD2Ca1OmR7q Va8y/1bAMo6D29eoGeqqrLZbLzrHVfC40t4sZ2Z2jIqCA3Uc1fub1utp0pT6FTHXfb2L DwlS5RmECSxkHFuC3PsItcfca3hR02wbmK1UdkNL89d3yT1osGn/EBrn3BhGk8FaYLBc pxkCd7ag2LoishXruspQdbFc0wsB7p+N5iDB5mZsXlrdoHtT5uX+xz4X8k81xINXkAvz RMuA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715626503; x=1716231303; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=zQ+ooLtUkL2XITzIc3SCHv9JSV9L31lszmoiqikUo6c=; b=nb0TM5k+8D5AT0pd8hnmKTTnT5RhDXVTIp1UwVVXVX1duoxT7NQ7XyA/BZBZ1NJCYs uoplAVyvrHy7zeJQ3HjglXWKKGMEss3K47ckA8SC+UhNcSdnZZ6Ko0uYrd0M6eDvKRbL WFY9k+YdiAI0XoTfc7BDW4juZdKTd7ns8hneedOQZ5/qBGUgWZhWeNss9gUpv8+tixZw BQ7DJcwBikRQJFNfMy3r3OXkMPgCT11qLI30RuUUIrNydST/vjyK/3dkww2hmx6C3Jqj fPKuCq88dGp9HYZsoBGwI0pi+RElJllab/DJXWidLYkgcAJY5re+QRqLp296O4SGzMaV g/hw== X-Gm-Message-State: AOJu0YwxBF1RKK13WV0ACLZqnoXUeWoTnGyhIUNtoV1tyhOV5CkfOtq7 Wfou07sxbndcOKO4qGWMZtmYGBsrD/dSUOKncMPzeJ7LKBnHmhX1DgmxrmfXavE9WBTMmAMUxJA 2YMs7lA== X-Google-Smtp-Source: 
AGHT+IEiXpt2s7FWSddOmelAWywZnHlbKXjaiS1c2LR4JrXIAvUNixObC6BK8cEqFIyNTPbUQx9mkg== X-Received: by 2002:a05:6358:418b:b0:186:1066:7aa0 with SMTP id e5c5f4694b2df-193bd00844emr1117508355d.29.1715626502516; Mon, 13 May 2024 11:55:02 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-634103f7237sm8154680a12.71.2024.05.13.11.55.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 13 May 2024 11:55:02 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [RFC v2 5/7] net/pcap: use generic SW stats Date: Mon, 13 May 2024 11:52:15 -0700 Message-ID: <20240513185448.120356-6-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240513185448.120356-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use common statistics for SW drivers. Signed-off-by: Stephen Hemminger --- drivers/net/pcap/pcap_ethdev.c | 146 +++++++++++---------------------- 1 file changed, 47 insertions(+), 99 deletions(-) diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c index bfec085045..872a3ed9a4 100644 --- a/drivers/net/pcap/pcap_ethdev.c +++ b/drivers/net/pcap/pcap_ethdev.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include @@ -48,13 +49,6 @@ static uint8_t iface_idx; static uint64_t timestamp_rx_dynflag; static int timestamp_dynfield_offset = -1; -struct queue_stat { - volatile unsigned long pkts; - volatile unsigned long bytes; - volatile unsigned long err_pkts; - volatile unsigned long rx_nombuf; -}; - struct queue_missed_stat { /* last value retrieved from pcap */ unsigned int pcap; @@ -68,7 +62,7 @@ struct pcap_rx_queue { uint16_t port_id; uint16_t queue_id; struct rte_mempool *mb_pool; - struct queue_stat rx_stat; + struct rte_eth_counters rx_stat; struct queue_missed_stat missed_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; @@ -80,7 +74,7 @@ struct pcap_rx_queue { struct pcap_tx_queue { uint16_t port_id; uint16_t queue_id; - struct queue_stat tx_stat; + struct rte_eth_counters tx_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; }; @@ -238,7 +232,6 @@ eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { int i; struct pcap_rx_queue *pcap_q = queue; - uint32_t rx_bytes = 0; if (unlikely(nb_pkts == 0)) return 0; @@ -252,39 +245,35 @@ eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (err) return i; + rte_eth_count_mbuf(&pcap_q->rx_stat, pcap_buf); + rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), rte_pktmbuf_mtod(pcap_buf, void *), pcap_buf->data_len); bufs[i]->data_len = pcap_buf->data_len; bufs[i]->pkt_len = pcap_buf->pkt_len; bufs[i]->port = pcap_q->port_id; - rx_bytes += pcap_buf->data_len; + /* Enqueue packet back on ring to allow infinite rx. 
*/ rte_ring_enqueue(pcap_q->pkts, pcap_buf); } - pcap_q->rx_stat.pkts += i; - pcap_q->rx_stat.bytes += rx_bytes; - return i; } static uint16_t eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { + struct pcap_rx_queue *pcap_q = queue; + struct rte_eth_dev *dev = &rte_eth_devices[pcap_q->port_id]; + struct pmd_process_private *pp = dev->process_private; + pcap_t *pcap = pp->rx_pcap[pcap_q->queue_id]; unsigned int i; struct pcap_pkthdr header; - struct pmd_process_private *pp; const u_char *packet; struct rte_mbuf *mbuf; - struct pcap_rx_queue *pcap_q = queue; uint16_t num_rx = 0; - uint32_t rx_bytes = 0; - pcap_t *pcap; - - pp = rte_eth_devices[pcap_q->port_id].process_private; - pcap = pp->rx_pcap[pcap_q->queue_id]; if (unlikely(pcap == NULL || nb_pkts == 0)) return 0; @@ -300,7 +289,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool); if (unlikely(mbuf == NULL)) { - pcap_q->rx_stat.rx_nombuf++; + ++dev->data->rx_mbuf_alloc_failed; break; } @@ -315,7 +304,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf, packet, header.caplen) == -1)) { - pcap_q->rx_stat.err_pkts++; + rte_eth_count_error(&pcap_q->rx_stat); rte_pktmbuf_free(mbuf); break; } @@ -329,11 +318,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf->ol_flags |= timestamp_rx_dynflag; mbuf->port = pcap_q->port_id; bufs[num_rx] = mbuf; + + rte_eth_count_mbuf(&pcap_q->rx_stat, mbuf); num_rx++; - rx_bytes += header.caplen; } - pcap_q->rx_stat.pkts += num_rx; - pcap_q->rx_stat.bytes += rx_bytes; return num_rx; } @@ -379,8 +367,6 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; struct pmd_process_private *pp; struct pcap_tx_queue *dumper_q = queue; - uint16_t num_tx = 0; - uint32_t tx_bytes = 0; struct pcap_pkthdr header; pcap_dumper_t *dumper; unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN]; @@ -412,8 +398,7 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) pcap_dump((u_char *)dumper, &header, rte_pktmbuf_read(mbuf, 0, caplen, temp_data)); - num_tx++; - tx_bytes += caplen; + rte_eth_count_mbuf(&dumper_q->tx_stat, mbuf); rte_pktmbuf_free(mbuf); } @@ -423,9 +408,6 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) * we flush the pcap dumper within each burst. 
*/ pcap_dump_flush(dumper); - dumper_q->tx_stat.pkts += num_tx; - dumper_q->tx_stat.bytes += tx_bytes; - dumper_q->tx_stat.err_pkts += nb_pkts - num_tx; return nb_pkts; } @@ -437,20 +419,16 @@ static uint16_t eth_tx_drop(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { unsigned int i; - uint32_t tx_bytes = 0; struct pcap_tx_queue *tx_queue = queue; if (unlikely(nb_pkts == 0)) return 0; for (i = 0; i < nb_pkts; i++) { - tx_bytes += bufs[i]->pkt_len; + rte_eth_count_mbuf(&tx_queue->tx_stat, bufs[i]); rte_pktmbuf_free(bufs[i]); } - tx_queue->tx_stat.pkts += nb_pkts; - tx_queue->tx_stat.bytes += tx_bytes; - return i; } @@ -465,8 +443,6 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; struct pmd_process_private *pp; struct pcap_tx_queue *tx_queue = queue; - uint16_t num_tx = 0; - uint32_t tx_bytes = 0; pcap_t *pcap; unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN]; size_t len; @@ -497,15 +473,11 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_read(mbuf, 0, len, temp_data), len); if (unlikely(ret != 0)) break; - num_tx++; - tx_bytes += len; + + rte_eth_count_mbuf(&tx_queue->tx_stat, mbuf); rte_pktmbuf_free(mbuf); } - tx_queue->tx_stat.pkts += num_tx; - tx_queue->tx_stat.bytes += tx_bytes; - tx_queue->tx_stat.err_pkts += i - num_tx; - return i; } @@ -746,41 +718,12 @@ static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { unsigned int i; - unsigned long rx_packets_total = 0, rx_bytes_total = 0; - unsigned long rx_missed_total = 0; - unsigned long rx_nombuf_total = 0, rx_err_total = 0; - unsigned long tx_packets_total = 0, tx_bytes_total = 0; - unsigned long tx_packets_err_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts; - stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes; - rx_nombuf_total += internal->rx_queue[i].rx_stat.rx_nombuf; - rx_err_total += internal->rx_queue[i].rx_stat.err_pkts; - rx_packets_total += stats->q_ipackets[i]; - rx_bytes_total += stats->q_ibytes[i]; - rx_missed_total += queue_missed_stat_get(dev, i); - } - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = internal->tx_queue[i].tx_stat.pkts; - stats->q_obytes[i] = internal->tx_queue[i].tx_stat.bytes; - tx_packets_total += stats->q_opackets[i]; - tx_bytes_total += stats->q_obytes[i]; - tx_packets_err_total += internal->tx_queue[i].tx_stat.err_pkts; - } + rte_eth_counters_stats_get(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat), stats); - stats->ipackets = rx_packets_total; - stats->ibytes = rx_bytes_total; - stats->imissed = rx_missed_total; - stats->ierrors = rx_err_total; - stats->rx_nombuf = rx_nombuf_total; - stats->opackets = tx_packets_total; - stats->obytes = tx_bytes_total; - stats->oerrors = tx_packets_err_total; + for (i = 0; i < dev->data->nb_rx_queues; i++) + stats->imissed += queue_missed_stat_get(dev, i); return 0; } @@ -789,25 +732,34 @@ static int eth_stats_reset(struct rte_eth_dev *dev) { unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - internal->rx_queue[i].rx_stat.pkts = 0; - internal->rx_queue[i].rx_stat.bytes = 0; - internal->rx_queue[i].rx_stat.err_pkts = 0; - internal->rx_queue[i].rx_stat.rx_nombuf = 0; - queue_missed_stat_reset(dev, i); - 
} + rte_eth_counters_reset(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat)); - for (i = 0; i < dev->data->nb_tx_queues; i++) { - internal->tx_queue[i].tx_stat.pkts = 0; - internal->tx_queue[i].tx_stat.bytes = 0; - internal->tx_queue[i].tx_stat.err_pkts = 0; - } + for (i = 0; i < dev->data->nb_rx_queues; i++) + queue_missed_stat_reset(dev, i); return 0; } +static int +eth_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *names, + __rte_unused unsigned int limit) +{ + return rte_eth_counters_xstats_get_names(dev, names); +} + +static int +eth_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, + unsigned int n) +{ + return rte_eth_counters_xstats_get(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat), + xstats, n); +} + + static inline void infinite_rx_ring_free(struct rte_ring *pkts) { @@ -929,13 +881,6 @@ eth_rx_queue_setup(struct rte_eth_dev *dev, pcap_pkt_count); return -EINVAL; } - - /* - * Reset the stats for this queue since eth_pcap_rx calls above - * didn't result in the application receiving packets. - */ - pcap_q->rx_stat.pkts = 0; - pcap_q->rx_stat.bytes = 0; } return 0; @@ -1005,6 +950,9 @@ static const struct eth_dev_ops ops = { .link_update = eth_link_update, .stats_get = eth_stats_get, .stats_reset = eth_stats_reset, + .xstats_get_names = eth_xstats_get_names, + .xstats_get = eth_xstats_get, + .xstats_reset = eth_stats_reset, }; static int From patchwork Mon May 13 18:52:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140045 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5723A4401D; Mon, 13 May 2024 20:55:42 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7990540A70; Mon, 13 May 2024 20:55:08 +0200 (CEST) Received: from mail-oo1-f48.google.com (mail-oo1-f48.google.com [209.85.161.48]) by mails.dpdk.org (Postfix) with ESMTP id 25C36402F1 for ; Mon, 13 May 2024 20:55:04 +0200 (CEST) Received: by mail-oo1-f48.google.com with SMTP id 006d021491bc7-5b27c5603ddso2226094eaf.1 for ; Mon, 13 May 2024 11:55:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715626503; x=1716231303; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LU+R+s5lrkS/n7Y7WnI42Hrf1kiafOz4s5fGKJUBiq0=; b=smDOVn2gs6MBy/+dC6S+pogDAhQt+/2qiF1w8V4ssvQH4sqFvwE0qNakuMp1Xn8R3Q 70Zm0TS8Hb89smx8q+8zfKJS0o5gb5kQ1da9thXsOOpbESwMwG7zPyK7rLuR6RTomiU1 spNlKsyq9f3lbMC5MtyaItoNX7HrQGon70nC9GA/AcArIzsee6svZ10KRFFXvupoi9Xw zcHFL6CwjxSzgjVcotCx/TPtr2a9OagxamMiWq3/NT9270SGTD8ItI2T0vlrFKEaf6FD 5TIYQEbld5wtG0lcwTpQ1lyz8mFUx+BNUvKsgu/Gytjt2+HBf72N4j1JgGwNm3da8V/r NE6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715626503; x=1716231303; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=LU+R+s5lrkS/n7Y7WnI42Hrf1kiafOz4s5fGKJUBiq0=; b=e/v2dxxem1g5RdBF5TEOHzm4OxqJZ8IZkxS5Y/hSk6U05vkEeNNXO6HkFUJHG8fdAO 
3gzbSqbpHtIhBHUlXIdaQrUDQ+sfumIi4RubpzHWAwp81QmMGSsFk8auQ2jOQU8ZpRAX /jhrgLZlxwLVRqR80ITstI+/ELNyaGgVTvQ57PSAd3mfEp9x2vSMkdGNydYkDnqqq34y A00X9WZEesahRBrYxaElCVEWQBIYWLNiQDBhW2AbiUGkpr79mLBgx7dxuMG3iG+8I/1Q ovAdWKG85gos20kBqu/BirBzN8vPd+bwR0XtHnaojwVc/rJgICkSZxi1KYvC3ky2t/5M wdTA== X-Gm-Message-State: AOJu0YxmcGwuefqN4lN5GNZ7qOItR69ELEXmgmKPT05DsRmX7ONAMuty T+5qcY/UNHDzwQbGxnrxSbsJbbq6yWUFFizmDBZj74kpwH8o83+mGm7dlMokdVmPz8oCtFXy5Rt WHKdA7A== X-Google-Smtp-Source: AGHT+IF4xUgV2Tk1FK+wG/N68uCXFWS4Pey03sD1t6IMGFPbSezMJO1lCPhSs7teAcbQKvbaNY6k+Q== X-Received: by 2002:a05:6358:9386:b0:186:119d:8c16 with SMTP id e5c5f4694b2df-193bcfe3671mr1165636355d.23.1715626503459; Mon, 13 May 2024 11:55:03 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-634103f7237sm8154680a12.71.2024.05.13.11.55.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 13 May 2024 11:55:03 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Ciara Loftus Subject: [RFC v2 6/7] net/af_xdp: use generic SW stats Date: Mon, 13 May 2024 11:52:16 -0700 Message-ID: <20240513185448.120356-7-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240513185448.120356-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use common code for all SW stats. Signed-off-by: Stephen Hemminger --- drivers/net/af_xdp/rte_eth_af_xdp.c | 115 +++++++++++----------------- 1 file changed, 44 insertions(+), 71 deletions(-) diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c index 268a130c49..9420420aa4 100644 --- a/drivers/net/af_xdp/rte_eth_af_xdp.c +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -120,19 +121,13 @@ struct xsk_umem_info { uint32_t max_xsks; }; -struct rx_stats { - uint64_t rx_pkts; - uint64_t rx_bytes; - uint64_t rx_dropped; -}; - struct pkt_rx_queue { struct xsk_ring_cons rx; struct xsk_umem_info *umem; struct xsk_socket *xsk; struct rte_mempool *mb_pool; - struct rx_stats stats; + struct rte_eth_counters stats; struct xsk_ring_prod fq; struct xsk_ring_cons cq; @@ -143,17 +138,11 @@ struct pkt_rx_queue { int busy_budget; }; -struct tx_stats { - uint64_t tx_pkts; - uint64_t tx_bytes; - uint64_t tx_dropped; -}; - struct pkt_tx_queue { struct xsk_ring_prod tx; struct xsk_umem_info *umem; - struct tx_stats stats; + struct rte_eth_counters stats; struct pkt_rx_queue *pair; int xsk_queue_idx; @@ -308,7 +297,6 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_ring_prod *fq = &rxq->fq; struct xsk_umem_info *umem = rxq->umem; uint32_t idx_rx = 0; - unsigned long rx_bytes = 0; int i; struct rte_mbuf *fq_bufs[ETH_AF_XDP_RX_BATCH_SIZE]; @@ -363,16 +351,13 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_pkt_len(bufs[i]) = len; rte_pktmbuf_data_len(bufs[i]) = len; - rx_bytes += len; + + rte_eth_count_mbuf(&rxq->stats, bufs[i]); } xsk_ring_cons__release(rx, nb_pkts); (void)reserve_fill_queue(umem, nb_pkts, fq_bufs, fq); - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; - 
return nb_pkts; } #else @@ -384,7 +369,6 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_umem_info *umem = rxq->umem; struct xsk_ring_prod *fq = &rxq->fq; uint32_t idx_rx = 0; - unsigned long rx_bytes = 0; int i; uint32_t free_thresh = fq->size >> 1; struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE]; @@ -424,16 +408,13 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_ring_enqueue(umem->buf_ring, (void *)addr); rte_pktmbuf_pkt_len(mbufs[i]) = len; rte_pktmbuf_data_len(mbufs[i]) = len; - rx_bytes += len; + rte_eth_count_mbuf(&rxq->stats, mbufs[i]); + bufs[i] = mbufs[i]; } xsk_ring_cons__release(rx, nb_pkts); - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; - return nb_pkts; } #endif @@ -527,9 +508,8 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct pkt_tx_queue *txq = queue; struct xsk_umem_info *umem = txq->umem; struct rte_mbuf *mbuf; - unsigned long tx_bytes = 0; int i; - uint32_t idx_tx; + uint32_t idx_tx, pkt_len; uint16_t count = 0; struct xdp_desc *desc; uint64_t addr, offset; @@ -541,6 +521,7 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) for (i = 0; i < nb_pkts; i++) { mbuf = bufs[i]; + pkt_len = rte_pktmbuf_pkt_len(mbuf); if (mbuf->pool == umem->mb_pool) { if (!xsk_ring_prod__reserve(&txq->tx, 1, &idx_tx)) { @@ -589,17 +570,13 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) count++; } - tx_bytes += mbuf->pkt_len; + rte_eth_count_packet(&txq->stats, pkt_len); } out: xsk_ring_prod__submit(&txq->tx, count); kick_tx(txq, cq); - txq->stats.tx_pkts += count; - txq->stats.tx_bytes += tx_bytes; - txq->stats.tx_dropped += nb_pkts - count; - return count; } #else @@ -610,7 +587,6 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_umem_info *umem = txq->umem; struct rte_mbuf *mbuf; void *addrs[ETH_AF_XDP_TX_BATCH_SIZE]; - unsigned long tx_bytes = 0; int i; uint32_t idx_tx; struct xsk_ring_cons *cq = &txq->pair->cq; @@ -640,7 +616,8 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) pkt = xsk_umem__get_data(umem->mz->addr, desc->addr); rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *), desc->len); - tx_bytes += mbuf->pkt_len; + rte_eth_qsw_update(&txq->stats, mbuf); + rte_pktmbuf_free(mbuf); } @@ -648,9 +625,6 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) kick_tx(txq, cq); - txq->stats.tx_pkts += nb_pkts; - txq->stats.tx_bytes += tx_bytes; - return nb_pkts; } @@ -847,39 +821,26 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct pmd_internals *internals = dev->data->dev_private; struct pmd_process_private *process_private = dev->process_private; - struct xdp_statistics xdp_stats; - struct pkt_rx_queue *rxq; - struct pkt_tx_queue *txq; - socklen_t optlen; - int i, ret, fd; + unsigned int i; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - optlen = sizeof(struct xdp_statistics); - rxq = &internals->rx_queues[i]; - txq = rxq->pair; - stats->q_ipackets[i] = rxq->stats.rx_pkts; - stats->q_ibytes[i] = rxq->stats.rx_bytes; + rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); - stats->q_opackets[i] = txq->stats.tx_pkts; - stats->q_obytes[i] = txq->stats.tx_bytes; + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct xdp_statistics xdp_stats; + socklen_t optlen = 
sizeof(xdp_stats); + int fd; - stats->ipackets += stats->q_ipackets[i]; - stats->ibytes += stats->q_ibytes[i]; - stats->imissed += rxq->stats.rx_dropped; - stats->oerrors += txq->stats.tx_dropped; fd = process_private->rxq_xsk_fds[i]; - ret = fd >= 0 ? getsockopt(fd, SOL_XDP, XDP_STATISTICS, - &xdp_stats, &optlen) : -1; - if (ret != 0) { + if (fd < 0) + continue; + if (getsockopt(fd, SOL_XDP, XDP_STATISTICS, + &xdp_stats, &optlen) < 0) { AF_XDP_LOG(ERR, "getsockopt() failed for XDP_STATISTICS.\n"); return -1; } stats->imissed += xdp_stats.rx_dropped; - - stats->opackets += stats->q_opackets[i]; - stats->obytes += stats->q_obytes[i]; } return 0; @@ -888,19 +849,28 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) static int eth_stats_reset(struct rte_eth_dev *dev) { - struct pmd_internals *internals = dev->data->dev_private; - int i; + return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats)); +} - for (i = 0; i < internals->queue_cnt; i++) { - memset(&internals->rx_queues[i].stats, 0, - sizeof(struct rx_stats)); - memset(&internals->tx_queues[i].stats, 0, - sizeof(struct tx_stats)); - } +static int +eth_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *names, + __rte_unused unsigned int limit) +{ + return rte_eth_counters_xstats_get_names(dev, names); +} - return 0; +static int +eth_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, + unsigned int n) +{ + return rte_eth_counters_xstats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), + xstats, n); } + #ifdef RTE_NET_AF_XDP_LIBBPF_XDP_ATTACH static int link_xdp_prog_with_dev(int ifindex, int fd, __u32 flags) @@ -1899,6 +1869,9 @@ static const struct eth_dev_ops ops_cni = { .link_update = eth_link_update, .stats_get = eth_stats_get, .stats_reset = eth_stats_reset, + .xstats_get_names = eth_xstats_get_names, + .xstats_get = eth_xstats_get, + .xstats_reset = eth_stats_reset, .get_monitor_addr = eth_get_monitor_addr, };
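Across the drivers converted in this series the same pattern repeats: each RX/TX queue embeds a struct rte_eth_counters, the datapath calls rte_eth_count_mbuf() or rte_eth_count_packet() per packet, and the ethdev callbacks pass the member offsets to the common helpers. A condensed sketch of that pattern follows; the demo_ names are illustrative only, and the helper signatures are assumed from their uses in these patches rather than quoted from the ethdev patch itself:

#include <stddef.h>
#include <rte_ethdev.h>
/* struct rte_eth_counters and the rte_eth_count_*()/rte_eth_counters_*()
 * helpers are provided by the earlier ethdev patches in this series. */

struct demo_rx_queue {
	struct rte_eth_counters stats;	/* per-queue SW counters */
	/* ... driver specific RX state ... */
};

struct demo_tx_queue {
	struct rte_eth_counters stats;
	/* ... driver specific TX state ... */
};

/* Datapath side: one call per mbuf adds both the packet and its byte count. */
static void
demo_count_rx_burst(struct demo_rx_queue *rxq, struct rte_mbuf **bufs, uint16_t nb_rx)
{
	uint16_t i;

	for (i = 0; i < nb_rx; i++)
		rte_eth_count_mbuf(&rxq->stats, bufs[i]);
}

/* Control path: the common helpers walk all queues, locating each queue's
 * counters via the offset of the embedded rte_eth_counters member. */
static int
demo_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
	return rte_eth_counters_stats_get(dev, offsetof(struct demo_tx_queue, stats),
					  offsetof(struct demo_rx_queue, stats), stats);
}

static int
demo_stats_reset(struct rte_eth_dev *dev)
{
	return rte_eth_counters_reset(dev, offsetof(struct demo_tx_queue, stats),
				      offsetof(struct demo_rx_queue, stats));
}
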
From patchwork Mon May 13 18:52:17 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140046
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Bruce Richardson
Subject: [RFC v2 7/7] net/ring: use generic SW stats
Date: Mon, 13 May 2024 11:52:17 -0700
Message-ID: <20240513185448.120356-8-stephen@networkplumber.org>
In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org>
References: <20240510050507.14381-1-stephen@networkplumber.org> <20240513185448.120356-1-stephen@networkplumber.org>

Use generic per-queue infrastructure.
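One detail worth noting in the TX path of the diff below: once rte_ring_enqueue_burst() succeeds, the mbufs belong to the ring's consumer, so their lengths have to be captured before the enqueue. The patch does this with an alloca()'d sizes[] array; a trimmed sketch of just that ordering is shown here (illustrative only, demo_ names are not from the patch):

#include <rte_mbuf.h>
#include <rte_ring.h>

static uint16_t
demo_ring_tx(struct rte_ring *rng, struct rte_eth_counters *stats,
	     struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	uint32_t sizes[nb_bufs];	/* VLA for brevity; the patch uses alloca() */
	uint16_t i, nb_tx;

	/* Record lengths first: after a successful enqueue the consumer owns
	 * the mbufs and may free or reuse them at any time. */
	for (i = 0; i < nb_bufs; i++)
		sizes[i] = rte_pktmbuf_pkt_len(bufs[i]);

	nb_tx = (uint16_t)rte_ring_enqueue_burst(rng, (void **)bufs, nb_bufs, NULL);

	/* Count only the packets that were actually accepted by the ring. */
	for (i = 0; i < nb_tx; i++)
		rte_eth_count_packet(stats, sizes[i]);

	return nb_tx;
}
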
Signed-off-by: Stephen Hemminger --- drivers/net/ring/rte_eth_ring.c | 85 +++++++++++++++++---------------- 1 file changed, 44 insertions(+), 41 deletions(-) diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c index 48953dd7a0..550b927392 100644 --- a/drivers/net/ring/rte_eth_ring.c +++ b/drivers/net/ring/rte_eth_ring.c @@ -7,6 +7,7 @@ #include "rte_eth_ring.h" #include #include +#include #include #include #include @@ -44,8 +45,8 @@ enum dev_action { struct ring_queue { struct rte_ring *rng; - uint64_t rx_pkts; - uint64_t tx_pkts; + + struct rte_eth_counters stats; }; struct pmd_internals { @@ -77,12 +78,13 @@ eth_ring_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; struct ring_queue *r = q; - const uint16_t nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SC_DEQ) - r->rx_pkts += nb_rx; - else - __atomic_fetch_add(&r->rx_pkts, nb_rx, __ATOMIC_RELAXED); + uint16_t i, nb_rx; + + nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, ptrs, nb_bufs, NULL); + + for (i = 0; i < nb_rx; i++) + rte_eth_count_mbuf(&r->stats, bufs[i]); + return nb_rx; } @@ -90,13 +92,20 @@ static uint16_t eth_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; + uint32_t *sizes; struct ring_queue *r = q; - const uint16_t nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SP_ENQ) - r->tx_pkts += nb_tx; - else - __atomic_fetch_add(&r->tx_pkts, nb_tx, __ATOMIC_RELAXED); + uint16_t i, nb_tx; + + sizes = alloca(sizeof(uint32_t) * nb_bufs); + + for (i = 0; i < nb_bufs; i++) + sizes[i] = rte_pktmbuf_pkt_len(bufs[i]); + + nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, ptrs, nb_bufs, NULL); + + for (i = 0; i < nb_tx; i++) + rte_eth_count_packet(&r->stats, sizes[i]); + return nb_tx; } @@ -193,42 +202,33 @@ eth_dev_info(struct rte_eth_dev *dev, static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned int i; - unsigned long rx_total = 0, tx_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_ring_queues[i].rx_pkts; - rx_total += stats->q_ipackets[i]; - } - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = internal->tx_ring_queues[i].tx_pkts; - tx_total += stats->q_opackets[i]; - } - - stats->ipackets = rx_total; - stats->opackets = tx_total; - - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats), + stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; + return rte_eth_counters_reset(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats)); +} - for (i = 0; i < dev->data->nb_rx_queues; i++) - internal->rx_ring_queues[i].rx_pkts = 0; - for (i = 0; i < dev->data->nb_tx_queues; i++) - internal->tx_ring_queues[i].tx_pkts = 0; +static int +eth_xstats_get_names(struct rte_eth_dev *dev, struct rte_eth_xstat_name *names, + __rte_unused unsigned int limit) +{ + return rte_eth_counters_xstats_get_names(dev, names); +} - return 0; +static int +eth_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n) +{ + return rte_eth_counters_xstats_get(dev, offsetof(struct ring_queue, stats), + 
offsetof(struct ring_queue, stats), xstats, n); } + static void eth_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t index __rte_unused) @@ -339,6 +339,9 @@ static const struct eth_dev_ops ops = { .link_update = eth_link_update, .stats_get = eth_stats_get, .stats_reset = eth_stats_reset, + .xstats_get_names = eth_xstats_get_names, + .xstats_get = eth_xstats_get, + .xstats_reset = eth_stats_reset, .mac_addr_remove = eth_mac_addr_remove, .mac_addr_add = eth_mac_addr_add, .promiscuous_enable = eth_promiscuous_enable,