From patchwork Fri May 17 17:35:08 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140173
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Morten Brørup, Tyler Retzlaff
Subject: [PATCH v7 1/9] eal: generic 64 bit counter
Date: Fri, 17 May 2024 10:35:08 -0700
Message-ID: <20240517174044.90952-2-stephen@networkplumber.org>
In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org>
References: <20240510050507.14381-1-stephen@networkplumber.org>
 <20240517174044.90952-1-stephen@networkplumber.org>

This header implements 64 bit counters using atomic operations
but with a weak memory ordering so that they are safe against
load/store splits on 32 bit platforms.

Signed-off-by: Stephen Hemminger
Acked-by: Morten Brørup
---
 lib/eal/include/meson.build   |   1 +
 lib/eal/include/rte_counter.h | 116 ++++++++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+)
 create mode 100644 lib/eal/include/rte_counter.h

diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..c070dd0079 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -12,6 +12,7 @@ headers += files(
         'rte_class.h',
         'rte_common.h',
         'rte_compat.h',
+        'rte_counter.h',
         'rte_debug.h',
         'rte_dev.h',
         'rte_devargs.h',
diff --git a/lib/eal/include/rte_counter.h b/lib/eal/include/rte_counter.h
new file mode 100644
index 0000000000..cdaa426e12
--- /dev/null
+++ b/lib/eal/include/rte_counter.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Stephen Hemminger
+ */
+
+#ifndef _RTE_COUNTER_H_
+#define _RTE_COUNTER_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include
+#include
+#include
+
+/**
+ * @file
+ * RTE Counter
+ *
+ * A counter is 64 bit value that is safe from split read/write.
+ * It assumes that only one cpu at a time will update the counter,
+ * and another CPU may want to read it.
+ *
+ * This is a weaker subset of full atomic variables.
+ *
+ * The counters are subject to the restrictions of atomic variables
+ * in packed structures or unaligned.
+ */
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * The RTE counter type.
+ */
+typedef RTE_ATOMIC(uint64_t) rte_counter64_t;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Add value to counter.
+ * Assumes this operation is only done by one thread on the object.
+ *
+ * @param counter
+ *   A pointer to the counter.
+ * @param val
+ *   The value to add to the counter.
+ */
+__rte_experimental
+static inline void
+rte_counter64_add(rte_counter64_t *counter, uint32_t val)
+{
+	rte_atomic_fetch_add_explicit(counter, val, rte_memory_order_relaxed);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Read a counter.
+ * This operation can be done by any thread.
+ *
+ * @param counter
+ *   A pointer to the counter.
+ * @return
+ *   The current value of the counter.
+ */
+__rte_experimental
+static inline uint64_t
+rte_counter64_fetch(const rte_counter64_t *counter)
+{
+	return rte_atomic_load_explicit(counter, rte_memory_order_consume);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Set a counter.
+ * This operation can be done by any thread.
+ *
+ * @param counter
+ *   A pointer to the counter.
+ * @param val
+ *   Value to set counter to.
+ */
+__rte_experimental
+static inline void
+rte_counter64_set(rte_counter64_t *counter, uint64_t val)
+{
+	rte_atomic_store_explicit(counter, val, rte_memory_order_release);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Reset a counter to zero.
+ * This operation can be done by any thread.
+ *
+ * @param counter
+ *   A pointer to the counter.
+ */
+__rte_experimental
+static inline void
+rte_counter64_reset(rte_counter64_t *counter)
+{
+	rte_counter64_set(counter, 0);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_COUNTER_H_ */

From patchwork Fri May 17 17:35:09 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140174
X-Patchwork-Delegate: thomas@monjalon.net
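For reference, a minimal usage sketch of the rte_counter64 API introduced in patch 1/9 above.
The queue structure and function names here are hypothetical, and the single-writer assumption
stated in the header applies: only the owning datapath thread calls rte_counter64_add(), while
any thread may fetch or reset.

```c
#include <stdint.h>
#include <rte_counter.h>

/* Hypothetical per-queue state; one datapath thread owns each queue. */
struct my_queue {
	rte_counter64_t rx_packets;
	rte_counter64_t rx_bytes;
};

/* Datapath (single writer per queue): relaxed atomic adds, no locking. */
static inline void
my_queue_count(struct my_queue *q, uint32_t pkt_len)
{
	rte_counter64_add(&q->rx_packets, 1);
	rte_counter64_add(&q->rx_bytes, pkt_len);
}

/* Control path (any thread): reads are safe from load splits on 32-bit CPUs. */
static inline uint64_t
my_queue_packets(const struct my_queue *q)
{
	return rte_counter64_fetch(&q->rx_packets);
}
```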
AGHT+IFGbd99RGrSBDoik4Z6DcALqHpAb7EXeYGa0f+AVZvXdGkjnB1q71EDOsS6CY5CvlAHfve0DA== X-Received: by 2002:a17:902:b402:b0:1e4:2d13:cf68 with SMTP id d9443c01a7336-1ef43d2e900mr224453085ad.17.1715967648554; Fri, 17 May 2024 10:40:48 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1ef0bf31032sm158830485ad.131.2024.05.17.10.40.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 17 May 2024 10:40:48 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko Subject: [PATCH v7 2/9] ethdev: add common counters for statistics Date: Fri, 17 May 2024 10:35:09 -0700 Message-ID: <20240517174044.90952-3-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517174044.90952-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Introduce common helper routines for keeping track of per-queue statistics in SW PMD's. The code in several drivers had copy/pasted the same code for this, but had common issues with 64 bit counters on 32 bit platforms. Signed-off-by: Stephen Hemminger --- lib/ethdev/ethdev_swstats.c | 101 +++++++++++++++++++++++++++++ lib/ethdev/ethdev_swstats.h | 124 ++++++++++++++++++++++++++++++++++++ lib/ethdev/meson.build | 2 + lib/ethdev/version.map | 3 + 4 files changed, 230 insertions(+) create mode 100644 lib/ethdev/ethdev_swstats.c create mode 100644 lib/ethdev/ethdev_swstats.h diff --git a/lib/ethdev/ethdev_swstats.c b/lib/ethdev/ethdev_swstats.c new file mode 100644 index 0000000000..555f5f592b --- /dev/null +++ b/lib/ethdev/ethdev_swstats.c @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + + +#include +#include +#include + +#include "rte_ethdev.h" +#include "ethdev_driver.h" +#include "ethdev_swstats.h" + +int +rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats) +{ + uint64_t packets, bytes, errors; + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + const void *txq = dev->data->tx_queues[i]; + const struct rte_eth_counters *counters; + + if (txq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)txq + tx_offset); + packets = rte_counter64_fetch(&counters->packets); + bytes = rte_counter64_fetch(&counters->bytes); + errors = rte_counter64_fetch(&counters->errors); + + stats->opackets += packets; + stats->obytes += bytes; + stats->oerrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + stats->q_opackets[i] = packets; + stats->q_obytes[i] = bytes; + } + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + const void *rxq = dev->data->rx_queues[i]; + const struct rte_eth_counters *counters; + + if (rxq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)rxq + rx_offset); + packets = rte_counter64_fetch(&counters->packets); + bytes = rte_counter64_fetch(&counters->bytes); + errors = rte_counter64_fetch(&counters->errors); + + stats->ipackets += packets; + stats->ibytes += bytes; + stats->ierrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + 
stats->q_ipackets[i] = packets; + stats->q_ibytes[i] = bytes; + } + } + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + return 0; +} + +int +rte_eth_counters_reset(struct rte_eth_dev *dev, size_t tx_offset, size_t rx_offset) +{ + struct rte_eth_counters *counters; + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + void *txq = dev->data->tx_queues[i]; + + if (txq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)txq + tx_offset); + rte_counter64_reset(&counters->packets); + rte_counter64_reset(&counters->bytes); + rte_counter64_reset(&counters->errors); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + void *rxq = dev->data->rx_queues[i]; + + if (rxq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)rxq + rx_offset); + rte_counter64_reset(&counters->packets); + rte_counter64_reset(&counters->bytes); + rte_counter64_reset(&counters->errors); + } + + return 0; +} diff --git a/lib/ethdev/ethdev_swstats.h b/lib/ethdev/ethdev_swstats.h new file mode 100644 index 0000000000..808c540640 --- /dev/null +++ b/lib/ethdev/ethdev_swstats.h @@ -0,0 +1,124 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + +#ifndef _RTE_ETHDEV_SWSTATS_H_ +#define _RTE_ETHDEV_SWSTATS_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @file + * + * Internal statistics counters for software based devices. + * Hardware PMD's should use the hardware counters instead. + * + * This provides a library for PMD's to keep track of packets and bytes. + * It is assumed that this will be used per queue and queues are not + * shared by lcores. + */ + +#include + +/** + * A structure to be embedded in the device driver per-queue data. + */ +struct rte_eth_counters { + rte_counter64_t packets; /**< Total number of packets. */ + rte_counter64_t bytes; /**< Total number of bytes. */ + rte_counter64_t errors; /**< Total number of packets with errors. */ +}; + +/** + * @internal + * Increment counters for a single packet. + * + * @param counters + * Pointer to queue structure containing counters. + * @param sz + * Size of the packet in bytes. + */ +__rte_internal +static inline void +rte_eth_count_packet(struct rte_eth_counters *counters, uint32_t sz) +{ + rte_counter64_add(&counters->packets, 1); + rte_counter64_add(&counters->bytes, sz); +} + +/** + * @internal + * Increment counters based on mbuf. + * + * @param counters + * Pointer to queue structure containing counters. + * @param mbuf + * Received or transmitted mbuf. + */ +__rte_internal +static inline void +rte_eth_count_mbuf(struct rte_eth_counters *counters, const struct rte_mbuf *mbuf) +{ + rte_eth_count_packet(counters, rte_pktmbuf_pkt_len(mbuf)); +} + +/** + * @internal + * Increment error counter. + * + * @param counters + * Pointer to queue structure containing counters. + */ +__rte_internal +static inline void +rte_eth_count_error(struct rte_eth_counters *counters) +{ + rte_counter64_add(&counters->errors, 1); +} + +/** + * @internal + * Retrieve the general statistics for all queues. + * @see rte_eth_stats_get. + * + * @param dev + * Pointer to the Ethernet device structure. + * @param tx_offset + * Offset from the tx_queue structure where stats are located. + * @param rx_offset + * Offset from the rx_queue structure where stats are located. + * @param stats + * A pointer to a structure of type *rte_eth_stats* to be filled + * @return + * Zero if successful. Non-zero otherwise. 
+ */ +__rte_internal +int rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats); + +/** + * @internal + * Reset the statistics for all queues. + * @see rte_eth_stats_reset. + * + * @param dev + * Pointer to the Ethernet device structure. + * @param tx_offset + * Offset from the tx_queue structure where stats are located. + * @param rx_offset + * Offset from the rx_queue structure where stats are located. + * @return + * Zero if successful. Non-zero otherwise. + */ +__rte_internal +int rte_eth_counters_reset(struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_ETHDEV_SWSTATS_H_ */ diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index f1d2586591..7ce29a46d4 100644 --- a/lib/ethdev/meson.build +++ b/lib/ethdev/meson.build @@ -3,6 +3,7 @@ sources = files( 'ethdev_driver.c', + 'ethdev_swstats.c', 'ethdev_private.c', 'ethdev_profile.c', 'ethdev_trace_points.c', @@ -42,6 +43,7 @@ driver_sdk_headers += files( 'ethdev_driver.h', 'ethdev_pci.h', 'ethdev_vdev.h', + 'ethdev_swstats.h', ) if is_linux diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 79f6f5293b..fc595be278 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -358,4 +358,7 @@ INTERNAL { rte_eth_switch_domain_alloc; rte_eth_switch_domain_free; rte_flow_fp_default_ops; + + rte_eth_counters_reset; + rte_eth_counters_stats_get; }; From patchwork Fri May 17 17:35:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140175 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B31DB44052; Fri, 17 May 2024 19:41:08 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B043140691; Fri, 17 May 2024 19:40:53 +0200 (CEST) Received: from mail-pl1-f170.google.com (mail-pl1-f170.google.com [209.85.214.170]) by mails.dpdk.org (Postfix) with ESMTP id 3DFD540649 for ; Fri, 17 May 2024 19:40:50 +0200 (CEST) Received: by mail-pl1-f170.google.com with SMTP id d9443c01a7336-1ee12baa01cso17609165ad.0 for ; Fri, 17 May 2024 10:40:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715967649; x=1716572449; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SUE9fO9/yU42C4wqedNCLbGs+mpPzKdD4hF5j4IP+8c=; b=HdyyJMdg3adzvwMelmEAZuyboS9FoKNd3rWdbvnN71GjxltQlKZMnzw1BGXbjrbSSf VzBU6OO3giRhk/JCaskFq63fRD3tQR06tvX17WMmAip8g97XefUUMOqgrnNbW+RjhUbi JFe2xl+nbERKHlsoOhP2aP/waZbByo42CeLGi4qblkPkPorNrM1WUaUINcFejKw8fbrf QBztNaWEAApOT2CxgprY3AJ1q1sA1QYKd0QXBdz0m/r/C5UXmFVrl5+fb62a+XTcaHMY x9rR5/y3wmJnfZPFand09WH6gLxEx0XXROjlT4SGYOGv6tVd9oPGTiWykYNV/OJAYrCw iYvQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715967649; x=1716572449; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SUE9fO9/yU42C4wqedNCLbGs+mpPzKdD4hF5j4IP+8c=; b=uHyOmgJXDAlWRpDhsbpK1SedVvgOaLbgnTb0XOWxxwxXUOC/qSlwHWuwqIMGoX4FGZ 
YW8Ho5obtszMEeMUC4iw2vv2YXiDLIdEcPEhCGMD5VdeJrZlcPeBFEFh2AscdV0/mCOc iOwE41U88CIkdbnUbg0fAR2XZuFAHDcJx+yWFmL+oCZocCA1Ib63PXK4b3/4C1pqByMz f9j4czQNG0GVCQQ2disKlfAjMmz5JUIv4GZJigHA85tf2fAuWZguXxe04BEkwNuFME94 oPFOJxackDcu20gayzHcQoqpuGr2cxjtDMu74idy37XS1GAOcVn0NzzUnfpIp45qZxor E55g== X-Gm-Message-State: AOJu0Yy2OzAmpHWD6K1F5Fcm8J53gpvUAE273z626+1/IxswP7aq+p9f q9ZzMm543aTPHj79rbNI0pD+YYDR8rqECC7y4oCXMiIuSlh5oCCJaDjmeFcxuxz9AqfP8RdC0KT QfcA= X-Google-Smtp-Source: AGHT+IEXDkfzAQ7VJCJ871jp7ftWuPqfNnHQq/MQb/1f05w080jE7xI8v9yeFFuMb3KG4gEktm6KMQ== X-Received: by 2002:a17:902:d4c9:b0:1f2:ec63:6018 with SMTP id d9443c01a7336-1f2ec6361a2mr1113105ad.51.1715967649377; Fri, 17 May 2024 10:40:49 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1ef0bf31032sm158830485ad.131.2024.05.17.10.40.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 17 May 2024 10:40:49 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , "John W. Linville" Subject: [PATCH v7 3/9] net/af_packet: use generic SW stats Date: Fri, 17 May 2024 10:35:10 -0700 Message-ID: <20240517174044.90952-4-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517174044.90952-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use the new generic SW stats. Signed-off-by: Stephen Hemminger --- drivers/net/af_packet/rte_eth_af_packet.c | 82 ++++------------------- 1 file changed, 14 insertions(+), 68 deletions(-) diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c index 397a32db58..89b737e7dc 100644 --- a/drivers/net/af_packet/rte_eth_af_packet.c +++ b/drivers/net/af_packet/rte_eth_af_packet.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -29,6 +30,7 @@ #include #include + #define ETH_AF_PACKET_IFACE_ARG "iface" #define ETH_AF_PACKET_NUM_Q_ARG "qpairs" #define ETH_AF_PACKET_BLOCKSIZE_ARG "blocksz" @@ -51,8 +53,7 @@ struct pkt_rx_queue { uint16_t in_port; uint8_t vlan_strip; - volatile unsigned long rx_pkts; - volatile unsigned long rx_bytes; + struct rte_eth_counters stats; }; struct pkt_tx_queue { @@ -64,11 +65,10 @@ struct pkt_tx_queue { unsigned int framecount; unsigned int framenum; - volatile unsigned long tx_pkts; - volatile unsigned long err_pkts; - volatile unsigned long tx_bytes; + struct rte_eth_counters stats; }; + struct pmd_internals { unsigned nb_queues; @@ -118,8 +118,6 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; uint8_t *pbuf; struct pkt_rx_queue *pkt_q = queue; - uint16_t num_rx = 0; - unsigned long num_rx_bytes = 0; unsigned int framecount, framenum; if (unlikely(nb_pkts == 0)) @@ -164,13 +162,11 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* account for the receive frame */ bufs[i] = mbuf; - num_rx++; - num_rx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&pkt_q->stats, mbuf); } pkt_q->framenum = framenum; - pkt_q->rx_pkts += num_rx; - pkt_q->rx_bytes += num_rx_bytes; - return num_rx; + + return i; } /* @@ -205,8 +201,6 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) 
unsigned int framecount, framenum; struct pollfd pfd; struct pkt_tx_queue *pkt_q = queue; - uint16_t num_tx = 0; - unsigned long num_tx_bytes = 0; int i; if (unlikely(nb_pkts == 0)) @@ -285,8 +279,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) framenum = 0; ppd = (struct tpacket2_hdr *) pkt_q->rd[framenum].iov_base; - num_tx++; - num_tx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&pkt_q->stats, mbuf); rte_pktmbuf_free(mbuf); } @@ -298,15 +291,9 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) * packets will be considered successful even though only some * are sent. */ - - num_tx = 0; - num_tx_bytes = 0; } pkt_q->framenum = framenum; - pkt_q->tx_pkts += num_tx; - pkt_q->err_pkts += i - num_tx; - pkt_q->tx_bytes += num_tx_bytes; return i; } @@ -386,58 +373,17 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) } static int -eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats) +eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned i, imax; - unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0; - unsigned long rx_bytes_total = 0, tx_bytes_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? - internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_ipackets[i] = internal->rx_queue[i].rx_pkts; - igb_stats->q_ibytes[i] = internal->rx_queue[i].rx_bytes; - rx_total += igb_stats->q_ipackets[i]; - rx_bytes_total += igb_stats->q_ibytes[i]; - } - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? - internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_opackets[i] = internal->tx_queue[i].tx_pkts; - igb_stats->q_obytes[i] = internal->tx_queue[i].tx_bytes; - tx_total += igb_stats->q_opackets[i]; - tx_err_total += internal->tx_queue[i].err_pkts; - tx_bytes_total += igb_stats->q_obytes[i]; - } - - igb_stats->ipackets = rx_total; - igb_stats->ibytes = rx_bytes_total; - igb_stats->opackets = tx_total; - igb_stats->oerrors = tx_err_total; - igb_stats->obytes = tx_bytes_total; - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned i; - struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < internal->nb_queues; i++) { - internal->rx_queue[i].rx_pkts = 0; - internal->rx_queue[i].rx_bytes = 0; - } - - for (i = 0; i < internal->nb_queues; i++) { - internal->tx_queue[i].tx_pkts = 0; - internal->tx_queue[i].err_pkts = 0; - internal->tx_queue[i].tx_bytes = 0; - } - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats)); } static int From patchwork Fri May 17 17:35:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140176 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A214044052; Fri, 17 May 2024 19:41:17 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5F7F2406BA; Fri, 17 May 2024 19:40:55 +0200 (CEST) Received: 
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Ciara Loftus
Subject: [PATCH v7 4/9] net/af_xdp: use generic SW stats
Date: Fri, 17 May 2024 10:35:11 -0700
Message-ID: <20240517174044.90952-5-stephen@networkplumber.org>
In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org>
References: <20240510050507.14381-1-stephen@networkplumber.org>
 <20240517174044.90952-1-stephen@networkplumber.org>

Use common code for all SW stats.
Signed-off-by: Stephen Hemminger --- drivers/net/af_xdp/rte_eth_af_xdp.c | 97 +++++++---------------------- 1 file changed, 24 insertions(+), 73 deletions(-) diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c index 6ba455bb9b..e5228a1dc1 100644 --- a/drivers/net/af_xdp/rte_eth_af_xdp.c +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -120,19 +121,13 @@ struct xsk_umem_info { uint32_t max_xsks; }; -struct rx_stats { - uint64_t rx_pkts; - uint64_t rx_bytes; - uint64_t rx_dropped; -}; - struct pkt_rx_queue { struct xsk_ring_cons rx; struct xsk_umem_info *umem; struct xsk_socket *xsk; struct rte_mempool *mb_pool; - struct rx_stats stats; + struct rte_eth_counters stats; struct xsk_ring_prod fq; struct xsk_ring_cons cq; @@ -143,17 +138,11 @@ struct pkt_rx_queue { int busy_budget; }; -struct tx_stats { - uint64_t tx_pkts; - uint64_t tx_bytes; - uint64_t tx_dropped; -}; - struct pkt_tx_queue { struct xsk_ring_prod tx; struct xsk_umem_info *umem; - struct tx_stats stats; + struct rte_eth_counters stats; struct pkt_rx_queue *pair; int xsk_queue_idx; @@ -308,7 +297,6 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_ring_prod *fq = &rxq->fq; struct xsk_umem_info *umem = rxq->umem; uint32_t idx_rx = 0; - unsigned long rx_bytes = 0; int i; struct rte_mbuf *fq_bufs[ETH_AF_XDP_RX_BATCH_SIZE]; @@ -363,16 +351,13 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_pkt_len(bufs[i]) = len; rte_pktmbuf_data_len(bufs[i]) = len; - rx_bytes += len; + + rte_eth_count_mbuf(&rxq->stats, bufs[i]); } xsk_ring_cons__release(rx, nb_pkts); (void)reserve_fill_queue(umem, nb_pkts, fq_bufs, fq); - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; - return nb_pkts; } #else @@ -384,7 +369,6 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_umem_info *umem = rxq->umem; struct xsk_ring_prod *fq = &rxq->fq; uint32_t idx_rx = 0; - unsigned long rx_bytes = 0; int i; uint32_t free_thresh = fq->size >> 1; struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE]; @@ -424,16 +408,13 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_ring_enqueue(umem->buf_ring, (void *)addr); rte_pktmbuf_pkt_len(mbufs[i]) = len; rte_pktmbuf_data_len(mbufs[i]) = len; - rx_bytes += len; + rte_eth_count_mbuf(&rxq->stats, mbufs[i]); + bufs[i] = mbufs[i]; } xsk_ring_cons__release(rx, nb_pkts); - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; - return nb_pkts; } #endif @@ -527,9 +508,8 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct pkt_tx_queue *txq = queue; struct xsk_umem_info *umem = txq->umem; struct rte_mbuf *mbuf; - unsigned long tx_bytes = 0; int i; - uint32_t idx_tx; + uint32_t idx_tx, pkt_len; uint16_t count = 0; struct xdp_desc *desc; uint64_t addr, offset; @@ -541,6 +521,7 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) for (i = 0; i < nb_pkts; i++) { mbuf = bufs[i]; + pkt_len = rte_pktmbuf_pkt_len(mbuf); if (mbuf->pool == umem->mb_pool) { if (!xsk_ring_prod__reserve(&txq->tx, 1, &idx_tx)) { @@ -589,17 +570,13 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) count++; } - tx_bytes += mbuf->pkt_len; + rte_eth_count_packet(&txq->stats, pkt_len); } out: xsk_ring_prod__submit(&txq->tx, count); kick_tx(txq, cq); - txq->stats.tx_pkts += count; - txq->stats.tx_bytes += tx_bytes; - 
txq->stats.tx_dropped += nb_pkts - count; - return count; } #else @@ -610,7 +587,6 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_umem_info *umem = txq->umem; struct rte_mbuf *mbuf; void *addrs[ETH_AF_XDP_TX_BATCH_SIZE]; - unsigned long tx_bytes = 0; int i; uint32_t idx_tx; struct xsk_ring_cons *cq = &txq->pair->cq; @@ -640,7 +616,7 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) pkt = xsk_umem__get_data(umem->mz->addr, desc->addr); rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *), desc->len); - tx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&txq->stats, mbuf); rte_pktmbuf_free(mbuf); } @@ -648,9 +624,6 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) kick_tx(txq, cq); - txq->stats.tx_pkts += nb_pkts; - txq->stats.tx_bytes += tx_bytes; - return nb_pkts; } @@ -847,39 +820,26 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct pmd_internals *internals = dev->data->dev_private; struct pmd_process_private *process_private = dev->process_private; - struct xdp_statistics xdp_stats; - struct pkt_rx_queue *rxq; - struct pkt_tx_queue *txq; - socklen_t optlen; - int i, ret, fd; + unsigned int i; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - optlen = sizeof(struct xdp_statistics); - rxq = &internals->rx_queues[i]; - txq = rxq->pair; - stats->q_ipackets[i] = rxq->stats.rx_pkts; - stats->q_ibytes[i] = rxq->stats.rx_bytes; + rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); - stats->q_opackets[i] = txq->stats.tx_pkts; - stats->q_obytes[i] = txq->stats.tx_bytes; + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct xdp_statistics xdp_stats; + socklen_t optlen = sizeof(xdp_stats); + int fd; - stats->ipackets += stats->q_ipackets[i]; - stats->ibytes += stats->q_ibytes[i]; - stats->imissed += rxq->stats.rx_dropped; - stats->oerrors += txq->stats.tx_dropped; fd = process_private->rxq_xsk_fds[i]; - ret = fd >= 0 ? 
getsockopt(fd, SOL_XDP, XDP_STATISTICS,
-			&xdp_stats, &optlen) : -1;
-		if (ret != 0) {
+		if (fd < 0)
+			continue;
+		if (getsockopt(fd, SOL_XDP, XDP_STATISTICS,
+			       &xdp_stats, &optlen) < 0) {
 			AF_XDP_LOG(ERR, "getsockopt() failed for XDP_STATISTICS.\n");
 			return -1;
 		}
 		stats->imissed += xdp_stats.rx_dropped;
-
-		stats->opackets += stats->q_opackets[i];
-		stats->obytes += stats->q_obytes[i];
 	}
 
 	return 0;
@@ -888,17 +848,8 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 static int
 eth_stats_reset(struct rte_eth_dev *dev)
 {
-	struct pmd_internals *internals = dev->data->dev_private;
-	int i;
-
-	for (i = 0; i < internals->queue_cnt; i++) {
-		memset(&internals->rx_queues[i].stats, 0,
-		       sizeof(struct rx_stats));
-		memset(&internals->tx_queues[i].stats, 0,
-		       sizeof(struct tx_stats));
-	}
-
-	return 0;
+	return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats),
+				      offsetof(struct pkt_rx_queue, stats));
 }
 
 #ifdef RTE_NET_AF_XDP_LIBBPF_XDP_ATTACH

From patchwork Fri May 17 17:35:12 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140177
X-Patchwork-Delegate: thomas@monjalon.net
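The driver conversions above (af_packet, af_xdp) all follow the same pattern from patch 2/9:
embed struct rte_eth_counters in the per-queue structure, count in the burst functions, and
wire the stats_get/stats_reset ops through the generic helpers via offsetof(). A condensed,
hypothetical sketch of that pattern; the my_* structures and my_receive() are invented for
illustration and are not part of the series.

```c
#include <stddef.h>
#include "ethdev_driver.h"
#include "ethdev_swstats.h"

/* Hypothetical SW PMD queue structures embedding the common counters. */
struct my_rx_queue {
	/* ... driver-specific fields ... */
	struct rte_eth_counters stats;
};

struct my_tx_queue {
	/* ... driver-specific fields ... */
	struct rte_eth_counters stats;
};

static uint16_t
my_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	struct my_rx_queue *rxq = queue;
	uint16_t i, n = my_receive(rxq, bufs, nb_pkts); /* hypothetical receive */

	for (i = 0; i < n; i++)
		rte_eth_count_mbuf(&rxq->stats, bufs[i]); /* packets + bytes */
	return n;
}

/* The generic helpers locate the embedded counters from their struct offsets. */
static int
my_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
	return rte_eth_counters_stats_get(dev, offsetof(struct my_tx_queue, stats),
					  offsetof(struct my_rx_queue, stats), stats);
}

static int
my_stats_reset(struct rte_eth_dev *dev)
{
	return rte_eth_counters_reset(dev, offsetof(struct my_tx_queue, stats),
				      offsetof(struct my_rx_queue, stats));
}
```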
AGHT+IH+/Ih+zc50huRapov21J8WLDEhgqUaXEPJWIZKXKd+kjdBgWJbVp3mUpJ0guVZp5juch6vqA== X-Received: by 2002:a17:902:e84e:b0:1f2:dd00:17f5 with SMTP id d9443c01a7336-1f2dd0019e4mr52951145ad.62.1715967651078; Fri, 17 May 2024 10:40:51 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1ef0bf31032sm158830485ad.131.2024.05.17.10.40.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 17 May 2024 10:40:50 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [PATCH v7 5/9] net/pcap: use generic SW stats Date: Fri, 17 May 2024 10:35:12 -0700 Message-ID: <20240517174044.90952-6-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517174044.90952-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use common statistics for SW drivers. Signed-off-by: Stephen Hemminger --- drivers/net/pcap/pcap_ethdev.c | 125 +++++++-------------------------- 1 file changed, 26 insertions(+), 99 deletions(-) diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c index bfec085045..b1a983f871 100644 --- a/drivers/net/pcap/pcap_ethdev.c +++ b/drivers/net/pcap/pcap_ethdev.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include @@ -48,13 +49,6 @@ static uint8_t iface_idx; static uint64_t timestamp_rx_dynflag; static int timestamp_dynfield_offset = -1; -struct queue_stat { - volatile unsigned long pkts; - volatile unsigned long bytes; - volatile unsigned long err_pkts; - volatile unsigned long rx_nombuf; -}; - struct queue_missed_stat { /* last value retrieved from pcap */ unsigned int pcap; @@ -68,7 +62,7 @@ struct pcap_rx_queue { uint16_t port_id; uint16_t queue_id; struct rte_mempool *mb_pool; - struct queue_stat rx_stat; + struct rte_eth_counters rx_stat; struct queue_missed_stat missed_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; @@ -80,7 +74,7 @@ struct pcap_rx_queue { struct pcap_tx_queue { uint16_t port_id; uint16_t queue_id; - struct queue_stat tx_stat; + struct rte_eth_counters tx_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; }; @@ -238,7 +232,6 @@ eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { int i; struct pcap_rx_queue *pcap_q = queue; - uint32_t rx_bytes = 0; if (unlikely(nb_pkts == 0)) return 0; @@ -252,39 +245,35 @@ eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (err) return i; + rte_eth_count_mbuf(&pcap_q->rx_stat, pcap_buf); + rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), rte_pktmbuf_mtod(pcap_buf, void *), pcap_buf->data_len); bufs[i]->data_len = pcap_buf->data_len; bufs[i]->pkt_len = pcap_buf->pkt_len; bufs[i]->port = pcap_q->port_id; - rx_bytes += pcap_buf->data_len; + /* Enqueue packet back on ring to allow infinite rx. 
*/ rte_ring_enqueue(pcap_q->pkts, pcap_buf); } - pcap_q->rx_stat.pkts += i; - pcap_q->rx_stat.bytes += rx_bytes; - return i; } static uint16_t eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { + struct pcap_rx_queue *pcap_q = queue; + struct rte_eth_dev *dev = &rte_eth_devices[pcap_q->port_id]; + struct pmd_process_private *pp = dev->process_private; + pcap_t *pcap = pp->rx_pcap[pcap_q->queue_id]; unsigned int i; struct pcap_pkthdr header; - struct pmd_process_private *pp; const u_char *packet; struct rte_mbuf *mbuf; - struct pcap_rx_queue *pcap_q = queue; uint16_t num_rx = 0; - uint32_t rx_bytes = 0; - pcap_t *pcap; - - pp = rte_eth_devices[pcap_q->port_id].process_private; - pcap = pp->rx_pcap[pcap_q->queue_id]; if (unlikely(pcap == NULL || nb_pkts == 0)) return 0; @@ -300,7 +289,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool); if (unlikely(mbuf == NULL)) { - pcap_q->rx_stat.rx_nombuf++; + ++dev->data->rx_mbuf_alloc_failed; break; } @@ -315,7 +304,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf, packet, header.caplen) == -1)) { - pcap_q->rx_stat.err_pkts++; + rte_eth_count_error(&pcap_q->rx_stat); rte_pktmbuf_free(mbuf); break; } @@ -329,11 +318,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf->ol_flags |= timestamp_rx_dynflag; mbuf->port = pcap_q->port_id; bufs[num_rx] = mbuf; + + rte_eth_count_mbuf(&pcap_q->rx_stat, mbuf); num_rx++; - rx_bytes += header.caplen; } - pcap_q->rx_stat.pkts += num_rx; - pcap_q->rx_stat.bytes += rx_bytes; return num_rx; } @@ -379,8 +367,6 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; struct pmd_process_private *pp; struct pcap_tx_queue *dumper_q = queue; - uint16_t num_tx = 0; - uint32_t tx_bytes = 0; struct pcap_pkthdr header; pcap_dumper_t *dumper; unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN]; @@ -412,8 +398,7 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) pcap_dump((u_char *)dumper, &header, rte_pktmbuf_read(mbuf, 0, caplen, temp_data)); - num_tx++; - tx_bytes += caplen; + rte_eth_count_mbuf(&dumper_q->tx_stat, mbuf); rte_pktmbuf_free(mbuf); } @@ -423,9 +408,6 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) * we flush the pcap dumper within each burst. 
*/ pcap_dump_flush(dumper); - dumper_q->tx_stat.pkts += num_tx; - dumper_q->tx_stat.bytes += tx_bytes; - dumper_q->tx_stat.err_pkts += nb_pkts - num_tx; return nb_pkts; } @@ -437,20 +419,16 @@ static uint16_t eth_tx_drop(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { unsigned int i; - uint32_t tx_bytes = 0; struct pcap_tx_queue *tx_queue = queue; if (unlikely(nb_pkts == 0)) return 0; for (i = 0; i < nb_pkts; i++) { - tx_bytes += bufs[i]->pkt_len; + rte_eth_count_mbuf(&tx_queue->tx_stat, bufs[i]); rte_pktmbuf_free(bufs[i]); } - tx_queue->tx_stat.pkts += nb_pkts; - tx_queue->tx_stat.bytes += tx_bytes; - return i; } @@ -465,8 +443,6 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; struct pmd_process_private *pp; struct pcap_tx_queue *tx_queue = queue; - uint16_t num_tx = 0; - uint32_t tx_bytes = 0; pcap_t *pcap; unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN]; size_t len; @@ -497,15 +473,11 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_read(mbuf, 0, len, temp_data), len); if (unlikely(ret != 0)) break; - num_tx++; - tx_bytes += len; + + rte_eth_count_mbuf(&tx_queue->tx_stat, mbuf); rte_pktmbuf_free(mbuf); } - tx_queue->tx_stat.pkts += num_tx; - tx_queue->tx_stat.bytes += tx_bytes; - tx_queue->tx_stat.err_pkts += i - num_tx; - return i; } @@ -746,41 +718,12 @@ static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { unsigned int i; - unsigned long rx_packets_total = 0, rx_bytes_total = 0; - unsigned long rx_missed_total = 0; - unsigned long rx_nombuf_total = 0, rx_err_total = 0; - unsigned long tx_packets_total = 0, tx_bytes_total = 0; - unsigned long tx_packets_err_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts; - stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes; - rx_nombuf_total += internal->rx_queue[i].rx_stat.rx_nombuf; - rx_err_total += internal->rx_queue[i].rx_stat.err_pkts; - rx_packets_total += stats->q_ipackets[i]; - rx_bytes_total += stats->q_ibytes[i]; - rx_missed_total += queue_missed_stat_get(dev, i); - } - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = internal->tx_queue[i].tx_stat.pkts; - stats->q_obytes[i] = internal->tx_queue[i].tx_stat.bytes; - tx_packets_total += stats->q_opackets[i]; - tx_bytes_total += stats->q_obytes[i]; - tx_packets_err_total += internal->tx_queue[i].tx_stat.err_pkts; - } + rte_eth_counters_stats_get(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat), stats); - stats->ipackets = rx_packets_total; - stats->ibytes = rx_bytes_total; - stats->imissed = rx_missed_total; - stats->ierrors = rx_err_total; - stats->rx_nombuf = rx_nombuf_total; - stats->opackets = tx_packets_total; - stats->obytes = tx_bytes_total; - stats->oerrors = tx_packets_err_total; + for (i = 0; i < dev->data->nb_rx_queues; i++) + stats->imissed += queue_missed_stat_get(dev, i); return 0; } @@ -789,21 +732,12 @@ static int eth_stats_reset(struct rte_eth_dev *dev) { unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - internal->rx_queue[i].rx_stat.pkts = 0; - internal->rx_queue[i].rx_stat.bytes = 0; - internal->rx_queue[i].rx_stat.err_pkts = 0; - internal->rx_queue[i].rx_stat.rx_nombuf = 0; - queue_missed_stat_reset(dev, i); - 
} + rte_eth_counters_reset(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat)); - for (i = 0; i < dev->data->nb_tx_queues; i++) { - internal->tx_queue[i].tx_stat.pkts = 0; - internal->tx_queue[i].tx_stat.bytes = 0; - internal->tx_queue[i].tx_stat.err_pkts = 0; - } + for (i = 0; i < dev->data->nb_rx_queues; i++) + queue_missed_stat_reset(dev, i); return 0; } @@ -929,13 +863,6 @@ eth_rx_queue_setup(struct rte_eth_dev *dev, pcap_pkt_count); return -EINVAL; } - - /* - * Reset the stats for this queue since eth_pcap_rx calls above - * didn't result in the application receiving packets. - */ - pcap_q->rx_stat.pkts = 0; - pcap_q->rx_stat.bytes = 0; } return 0; From patchwork Fri May 17 17:35:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140178 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A4B9E44052; Fri, 17 May 2024 19:41:31 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 075F74069F; Fri, 17 May 2024 19:40:58 +0200 (CEST) Received: from mail-pl1-f173.google.com (mail-pl1-f173.google.com [209.85.214.173]) by mails.dpdk.org (Postfix) with ESMTP id 11E5740689 for ; Fri, 17 May 2024 19:40:53 +0200 (CEST) Received: by mail-pl1-f173.google.com with SMTP id d9443c01a7336-1ec41d82b8bso19064605ad.2 for ; Fri, 17 May 2024 10:40:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715967652; x=1716572452; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=emfW8pqQWo/kkBypu+P+JiAY/UdpbCFtvVnSZa+FuiE=; b=jsAf8g5s7eKgn/8mR5FUy6CUUz+y4fV0gRYZmusX1bm9twicnVchk4zqeqA6TFFaK+ 8pKb9cLZysB7sfzUCCSvuMVcWqCb2mli6zjllFxHryoHJSGjlQIqlDHxVvvpXDfjNMx+ dKBh9uwnVtFvOKsdLMTH4xUeeY/HlOb7Bt0JWeTTIXL5IK402MFATvY5JH3zz6OgeCV+ bc5Ks3+BZBcjLyH8d4Dchou0oJ90s7soYkjXWoUmL43Z60jfryjsr6arCxI5ZovbT13e 0iqPUoz9igQ9s18JMJqEvCaVpvoApSfxvxESuYh2RFsADSywGk+Wea0uhjQfTVa4PqOX kHIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715967652; x=1716572452; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=emfW8pqQWo/kkBypu+P+JiAY/UdpbCFtvVnSZa+FuiE=; b=fKvSFbEYfmuZl1TY62sWsF084gM2C/bsTPDP/uqYfIcDoqiP1vqK+x/+nwDhvmuIVa Bi2lwH0RRH7YrQwsSd/KC+2rZK2FdqbbO3hawkveuDJo/S0kI1LsgKUG9NKyS8q6WMBI TjeCHz6+7yPEc5lnErPEYqoBv6T5UGnXOuBkoqn+o8DIsontTWpid4OTLhKUdz+45zp9 3ZUbVGyRKWVOmGCt4GWM9xL7ZtugG48LbSnEageFqbFkv3/WNN7BjITkwUXB8pck9nQT EE7q5arsW166cmbMwYPbJXFJh+Ugt2gvMuc2XxsA6Bd5N9uQYeRfrqPLiOwNSfal0rJq f2+A== X-Gm-Message-State: AOJu0YzGV3Yq7gDqRn9KRTx3LP57y4hM4BSBiuY1oIPNeqa2wBfiGJN6 MtNgtKf/ofTtcHtLlB7DZClATKmdSN92RSrDpkcEfqiXJ1ME3hMMxUjyc8b3+37FrHXgewMI9lW TW5I= X-Google-Smtp-Source: AGHT+IGZKEzGHAkzv2n28S9NoaKcdrhnPjLayp8QyPL66E+m1yDhyOERcs9eeh0CYjhOj0tvfRf2EA== X-Received: by 2002:a17:902:e80d:b0:1e3:d4a2:3882 with SMTP id d9443c01a7336-1ef43d0ad97mr296814685ad.2.1715967652181; Fri, 17 May 2024 10:40:52 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. 
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1ef0bf31032sm158830485ad.131.2024.05.17.10.40.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 17 May 2024 10:40:51 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Bruce Richardson Subject: [PATCH v7 6/9] test/pmd_ring: initialize mbufs Date: Fri, 17 May 2024 10:35:13 -0700 Message-ID: <20240517174044.90952-7-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517174044.90952-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Do not pass uninitialized data into the ring PMD. The mbufs should be initialized first so that length is zero. Signed-off-by: Stephen Hemminger --- app/test/test_pmd_ring.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/app/test/test_pmd_ring.c b/app/test/test_pmd_ring.c index e83b9dd6b8..55455ece7f 100644 --- a/app/test/test_pmd_ring.c +++ b/app/test/test_pmd_ring.c @@ -19,6 +19,14 @@ static struct rte_mempool *mp; struct rte_ring *rxtx[NUM_RINGS]; static int tx_porta, rx_portb, rxtx_portc, rxtx_portd, rxtx_porte; +/* make a valid zero sized mbuf */ +static void +test_mbuf_init(struct rte_mbuf *mbuf) +{ + memset(mbuf, 0, sizeof(*mbuf)); + rte_pktmbuf_reset(mbuf); +} + static int test_ethdev_configure_port(int port) { @@ -68,14 +76,16 @@ test_ethdev_configure_port(int port) static int test_send_basic_packets(void) { - struct rte_mbuf bufs[RING_SIZE]; + struct rte_mbuf bufs[RING_SIZE]; struct rte_mbuf *pbufs[RING_SIZE]; int i; printf("Testing send and receive RING_SIZE/2 packets (tx_porta -> rx_portb)\n"); - for (i = 0; i < RING_SIZE/2; i++) + for (i = 0; i < RING_SIZE / 2; i++) { + test_mbuf_init(&bufs[i]); pbufs[i] = &bufs[i]; + } if (rte_eth_tx_burst(tx_porta, 0, pbufs, RING_SIZE/2) < RING_SIZE/2) { printf("Failed to transmit packet burst port %d\n", tx_porta); @@ -99,14 +109,16 @@ test_send_basic_packets(void) static int test_send_basic_packets_port(int port) { - struct rte_mbuf bufs[RING_SIZE]; + struct rte_mbuf bufs[RING_SIZE]; struct rte_mbuf *pbufs[RING_SIZE]; int i; printf("Testing send and receive RING_SIZE/2 packets (cmdl_port0 -> cmdl_port0)\n"); - for (i = 0; i < RING_SIZE/2; i++) + for (i = 0; i < RING_SIZE / 2; i++) { + test_mbuf_init(&bufs[i]); pbufs[i] = &bufs[i]; + } if (rte_eth_tx_burst(port, 0, pbufs, RING_SIZE/2) < RING_SIZE/2) { printf("Failed to transmit packet burst port %d\n", port); @@ -134,10 +146,11 @@ test_get_stats(int port) struct rte_eth_stats stats; struct rte_mbuf buf, *pbuf = &buf; + test_mbuf_init(&buf); + printf("Testing ring PMD stats_get port %d\n", port); /* check stats of RXTX port, should all be zero */ - rte_eth_stats_get(port, &stats); if (stats.ipackets != 0 || stats.opackets != 0 || stats.ibytes != 0 || stats.obytes != 0 || @@ -173,6 +186,8 @@ test_stats_reset(int port) struct rte_eth_stats stats; struct rte_mbuf buf, *pbuf = &buf; + test_mbuf_init(&buf); + printf("Testing ring PMD stats_reset port %d\n", port); rte_eth_stats_reset(port); @@ -228,6 +243,7 @@ test_pmd_ring_pair_create_attach(void) int ret; memset(&null_conf, 0, sizeof(struct rte_eth_conf)); + test_mbuf_init(&buf); if ((rte_eth_dev_configure(rxtx_portd, 
1, 1, &null_conf) < 0) ||
 			(rte_eth_dev_configure(rxtx_porte, 1, 1,

From patchwork Fri May 17 17:35:14 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140179
X-Patchwork-Delegate: thomas@monjalon.net
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1ef0bf31032sm158830485ad.131.2024.05.17.10.40.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 17 May 2024 10:40:52 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Bruce Richardson Subject: [PATCH v7 7/9] net/ring: use generic SW stats Date: Fri, 17 May 2024 10:35:14 -0700 Message-ID: <20240517174044.90952-8-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517174044.90952-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use generic per-queue infrastructure. This also fixes bug where ring code was not accounting for bytes. Signed-off-by: Stephen Hemminger --- drivers/net/ring/rte_eth_ring.c | 71 +++++++++++++-------------------- 1 file changed, 28 insertions(+), 43 deletions(-) diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c index b16f5d55f2..85f14dd679 100644 --- a/drivers/net/ring/rte_eth_ring.c +++ b/drivers/net/ring/rte_eth_ring.c @@ -7,6 +7,7 @@ #include "rte_eth_ring.h" #include #include +#include #include #include #include @@ -44,8 +45,8 @@ enum dev_action { struct ring_queue { struct rte_ring *rng; - RTE_ATOMIC(uint64_t) rx_pkts; - RTE_ATOMIC(uint64_t) tx_pkts; + + struct rte_eth_counters stats; }; struct pmd_internals { @@ -77,12 +78,13 @@ eth_ring_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; struct ring_queue *r = q; - const uint16_t nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SC_DEQ) - r->rx_pkts += nb_rx; - else - rte_atomic_fetch_add_explicit(&r->rx_pkts, nb_rx, rte_memory_order_relaxed); + uint16_t i, nb_rx; + + nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, ptrs, nb_bufs, NULL); + + for (i = 0; i < nb_rx; i++) + rte_eth_count_mbuf(&r->stats, bufs[i]); + return nb_rx; } @@ -90,13 +92,20 @@ static uint16_t eth_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; + uint32_t *sizes; struct ring_queue *r = q; - const uint16_t nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SP_ENQ) - r->tx_pkts += nb_tx; - else - rte_atomic_fetch_add_explicit(&r->tx_pkts, nb_tx, rte_memory_order_relaxed); + uint16_t i, nb_tx; + + sizes = alloca(sizeof(uint32_t) * nb_bufs); + + for (i = 0; i < nb_bufs; i++) + sizes[i] = rte_pktmbuf_pkt_len(bufs[i]); + + nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, ptrs, nb_bufs, NULL); + + for (i = 0; i < nb_tx; i++) + rte_eth_count_packet(&r->stats, sizes[i]); + return nb_tx; } @@ -193,40 +202,16 @@ eth_dev_info(struct rte_eth_dev *dev, static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned int i; - unsigned long rx_total = 0, tx_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_ring_queues[i].rx_pkts; - rx_total += stats->q_ipackets[i]; - } - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = 
internal->tx_ring_queues[i].tx_pkts; - tx_total += stats->q_opackets[i]; - } - - stats->ipackets = rx_total; - stats->opackets = tx_total; - - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats), + stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < dev->data->nb_rx_queues; i++) - internal->rx_ring_queues[i].rx_pkts = 0; - for (i = 0; i < dev->data->nb_tx_queues; i++) - internal->tx_ring_queues[i].tx_pkts = 0; - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats)); } static void From patchwork Fri May 17 17:35:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140180 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5F6AE44052; Fri, 17 May 2024 19:41:43 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 74D3A40A7D; Fri, 17 May 2024 19:41:00 +0200 (CEST) Received: from mail-pl1-f182.google.com (mail-pl1-f182.google.com [209.85.214.182]) by mails.dpdk.org (Postfix) with ESMTP id F0D01406B4 for ; Fri, 17 May 2024 19:40:54 +0200 (CEST) Received: by mail-pl1-f182.google.com with SMTP id d9443c01a7336-1ec69e3dbcfso17735395ad.0 for ; Fri, 17 May 2024 10:40:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715967654; x=1716572454; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KwEt8rCLbyDQEc29jCoUGhiylEpiJNl7eErC5B0q55M=; b=T3zwHz7EoBOiklnTcxLBe2ndxAi3w2DqJOXJQKdqSC2T6GnlwMTl8QkRDOfQWNMJp1 UWeW7/BhXw4n0AjMimdHCxrb/yIDbcSVOHhpEDAd4hGEalNcTCuU6YBaJ9GvOY67LPc5 W+augSu8RsRxGH5XPrdxr7p2ouefBp5+heur/CqvfowZs8B4Bo19r4Vo7Z4ANx0eFshQ sSSQKV7IzsRyMin6Pbm9MgTC6sdpTJwB4uCo+eJnTJea8XBnWZ1pOK5Yrpz6kjlondQ7 ijb09bAQ3jyBMVTpetiNm9YdYr1IjgBhIRDK6xuPM63CQVXRxaUQqfmC2MjNEscF58JT sq2w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715967654; x=1716572454; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KwEt8rCLbyDQEc29jCoUGhiylEpiJNl7eErC5B0q55M=; b=v7g4keJ0b0vctmuh6950x4kwfLzqf72VnyWB4DWDeyuJYPzsuSN0GUQsgeCuFMHFa2 jFny7U717amxzjiAQMT5phCi+rGcUEQKtgKH6yNt3KvK7GFTU4Ssm8LfqVIPTzPcpS9V 3PCQE9xHUDdsUSNRAbbgbv1TFArt/NUg6+Ng/qT9NK8tJxGzqTRyGUXZQKwCzd6jrzn/ qzNUNhx7v3L7PLSR6QLbB+EgiFqmuMGw42uGwAeICfdHShk2yuGjDrKaIpjGXlEWtf0b J+6FMxx4XcPKJlPOLRX1cFHJa1n7eO0QyjpuHP5mKlPCenE6O+iKRIXj0StoYkR8xcwH 0/Jw== X-Gm-Message-State: AOJu0YyDikjxG2UCC7WNm7qVkoPVPBQiRbeh1p0sk+7R75LfADnRiVRI icNrTmcrV/NX4OMiRtmICsaPVn49heKXkn3GlGi16EncKTsdJyGxDoqbn2jeeWwskUsiwD99Q1W nG2E= X-Google-Smtp-Source: AGHT+IHbMnJG0zjHWmRlyleBHzu7TseARWymPFe8QUhB+bNM0lzG/Kbk5Z2qxndUULzUrzT7GuFE8w== X-Received: by 2002:a17:903:245:b0:1eb:5682:1ec0 with SMTP id d9443c01a7336-1ef4404a22emr263450875ad.45.1715967653892; Fri, 17 May 2024 10:40:53 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. 
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1ef0bf31032sm158830485ad.131.2024.05.17.10.40.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 17 May 2024 10:40:53 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [PATCH v7 8/9] net/tap: use generic SW stats Date: Fri, 17 May 2024 10:35:15 -0700 Message-ID: <20240517174044.90952-9-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517174044.90952-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use new common sw statistics. Signed-off-by: Stephen Hemminger --- drivers/net/tap/rte_eth_tap.c | 88 ++++++----------------------------- drivers/net/tap/rte_eth_tap.h | 15 ++---- 2 files changed, 18 insertions(+), 85 deletions(-) diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index 69d9da695b..f87979da4f 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -432,7 +432,6 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rx_queue *rxq = queue; struct pmd_process_private *process_private; uint16_t num_rx; - unsigned long num_rx_bytes = 0; uint32_t trigger = tap_trigger; if (trigger == rxq->trigger_seen) @@ -455,7 +454,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* Packet couldn't fit in the provided mbuf */ if (unlikely(rxq->pi.flags & TUN_PKT_STRIP)) { - rxq->stats.ierrors++; + rte_eth_count_error(&rxq->stats); continue; } @@ -467,7 +466,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq->mp); if (unlikely(!buf)) { - rxq->stats.rx_nombuf++; + struct rte_eth_dev *dev = &rte_eth_devices[rxq->in_port]; + ++dev->data->rx_mbuf_alloc_failed; + /* No new buf has been allocated: do nothing */ if (!new_tail || !seg) goto end; @@ -509,11 +510,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* account for the receive frame */ bufs[num_rx++] = mbuf; - num_rx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&rxq->stats, mbuf); } end: - rxq->stats.ipackets += num_rx; - rxq->stats.ibytes += num_rx_bytes; if (trigger && num_rx < nb_pkts) rxq->trigger_seen = trigger; @@ -523,8 +522,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) static inline int tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs, - struct rte_mbuf **pmbufs, - uint16_t *num_packets, unsigned long *num_tx_bytes) + struct rte_mbuf **pmbufs) { struct pmd_process_private *process_private; int i; @@ -647,8 +645,7 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs, if (n <= 0) return -1; - (*num_packets)++; - (*num_tx_bytes) += rte_pktmbuf_pkt_len(mbuf); + rte_eth_count_mbuf(&txq->stats, mbuf); } return 0; } @@ -660,8 +657,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { struct tx_queue *txq = queue; uint16_t num_tx = 0; - uint16_t num_packets = 0; - unsigned long num_tx_bytes = 0; uint32_t max_size; int i; @@ -693,7 +688,7 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) tso_segsz = mbuf_in->tso_segsz + hdrs_len; if (unlikely(tso_segsz == hdrs_len) || tso_segsz > *txq->mtu) { - txq->stats.errs++; + 
rte_eth_count_error(&txq->stats); break; } gso_ctx->gso_size = tso_segsz; @@ -728,10 +723,10 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) num_mbufs = 1; } - ret = tap_write_mbufs(txq, num_mbufs, mbuf, - &num_packets, &num_tx_bytes); + ret = tap_write_mbufs(txq, num_mbufs, mbuf); if (ret == -1) { - txq->stats.errs++; + rte_eth_count_error(&txq->stats); + /* free tso mbufs */ if (num_tso_mbufs > 0) rte_pktmbuf_free_bulk(mbuf, num_tso_mbufs); @@ -749,10 +744,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } } - txq->stats.opackets += num_packets; - txq->stats.errs += nb_pkts - num_tx; - txq->stats.obytes += num_tx_bytes; - return num_tx; } @@ -1055,64 +1046,15 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int tap_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *tap_stats) { - unsigned int i, imax; - unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0; - unsigned long rx_bytes_total = 0, tx_bytes_total = 0; - unsigned long rx_nombuf = 0, ierrors = 0; - const struct pmd_internals *pmd = dev->data->dev_private; - - /* rx queue statistics */ - imax = (dev->data->nb_rx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? - dev->data->nb_rx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS; - for (i = 0; i < imax; i++) { - tap_stats->q_ipackets[i] = pmd->rxq[i].stats.ipackets; - tap_stats->q_ibytes[i] = pmd->rxq[i].stats.ibytes; - rx_total += tap_stats->q_ipackets[i]; - rx_bytes_total += tap_stats->q_ibytes[i]; - rx_nombuf += pmd->rxq[i].stats.rx_nombuf; - ierrors += pmd->rxq[i].stats.ierrors; - } - - /* tx queue statistics */ - imax = (dev->data->nb_tx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? - dev->data->nb_tx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS; - - for (i = 0; i < imax; i++) { - tap_stats->q_opackets[i] = pmd->txq[i].stats.opackets; - tap_stats->q_obytes[i] = pmd->txq[i].stats.obytes; - tx_total += tap_stats->q_opackets[i]; - tx_err_total += pmd->txq[i].stats.errs; - tx_bytes_total += tap_stats->q_obytes[i]; - } - - tap_stats->ipackets = rx_total; - tap_stats->ibytes = rx_bytes_total; - tap_stats->ierrors = ierrors; - tap_stats->rx_nombuf = rx_nombuf; - tap_stats->opackets = tx_total; - tap_stats->oerrors = tx_err_total; - tap_stats->obytes = tx_bytes_total; - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct tx_queue, stats), + offsetof(struct rx_queue, stats), tap_stats); } static int tap_stats_reset(struct rte_eth_dev *dev) { - int i; - struct pmd_internals *pmd = dev->data->dev_private; - - for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) { - pmd->rxq[i].stats.ipackets = 0; - pmd->rxq[i].stats.ibytes = 0; - pmd->rxq[i].stats.ierrors = 0; - pmd->rxq[i].stats.rx_nombuf = 0; - - pmd->txq[i].stats.opackets = 0; - pmd->txq[i].stats.errs = 0; - pmd->txq[i].stats.obytes = 0; - } - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct tx_queue, stats), + offsetof(struct rx_queue, stats)); } static int diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h index 5ac93f93e9..8cba9ea410 100644 --- a/drivers/net/tap/rte_eth_tap.h +++ b/drivers/net/tap/rte_eth_tap.h @@ -14,6 +14,7 @@ #include #include +#include #include #include #include "tap_log.h" @@ -32,23 +33,13 @@ enum rte_tuntap_type { ETH_TUNTAP_TYPE_MAX, }; -struct pkt_stats { - uint64_t opackets; /* Number of output packets */ - uint64_t ipackets; /* Number of input packets */ - uint64_t obytes; /* Number of bytes on output */ - uint64_t ibytes; /* Number of bytes on input */ - uint64_t errs; /* Number of TX error packets */ 
- uint64_t ierrors; /* Number of RX error packets */ - uint64_t rx_nombuf; /* Nb of RX mbuf alloc failures */ -}; - struct rx_queue { struct rte_mempool *mp; /* Mempool for RX packets */ uint32_t trigger_seen; /* Last seen Rx trigger value */ uint16_t in_port; /* Port ID */ uint16_t queue_id; /* queue ID*/ - struct pkt_stats stats; /* Stats for this RX queue */ uint16_t nb_rx_desc; /* max number of mbufs available */ + struct rte_eth_counters stats; /* Stats for this RX queue */ struct rte_eth_rxmode *rxmode; /* RX features */ struct rte_mbuf *pool; /* mbufs pool for this queue */ struct iovec (*iovecs)[]; /* descriptors for this queue */ @@ -59,7 +50,7 @@ struct tx_queue { int type; /* Type field - TUN|TAP */ uint16_t *mtu; /* Pointer to MTU from dev_data */ uint16_t csum:1; /* Enable checksum offloading */ - struct pkt_stats stats; /* Stats for this TX queue */ + struct rte_eth_counters stats; /* Stats for this TX queue */ struct rte_gso_ctx gso_ctx; /* GSO context */ uint16_t out_port; /* Port ID */ uint16_t queue_id; /* queue ID*/ From patchwork Fri May 17 17:35:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140181 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9639F44052; Fri, 17 May 2024 19:41:51 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1B76740DCB; Fri, 17 May 2024 19:41:02 +0200 (CEST) Received: from mail-pl1-f171.google.com (mail-pl1-f171.google.com [209.85.214.171]) by mails.dpdk.org (Postfix) with ESMTP id A6FA7409FA for ; Fri, 17 May 2024 19:40:55 +0200 (CEST) Received: by mail-pl1-f171.google.com with SMTP id d9443c01a7336-1ec41d82b8bso19065805ad.2 for ; Fri, 17 May 2024 10:40:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715967655; x=1716572455; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lYXDUrkpOH74UzCatXaIUkTb9X4azvwOXkFfc0kL1is=; b=q+PVZu7TD/WiE26AQ3ekZ+SFQR4PFNKz7LJV1+qCfvWOYsCD8Uk+xiA+STk7cTZEYd 1k43IMjw4jMboXH4vy+OdZjzTAXrgph0fjhfokcFIu4bvxQMT2iFXRfk0t9j515qYueu 6pBkjLZQK387/fWsGPraBvAqhOgnA0J/xkx49hkdGg3QLyU+yh3KjfvOlP1D3s7k/9J0 XQQaGQ0yw5CrAhoDrjCBCy9eUQsMOVyBWpE8phM1cAUyZm5u6mH2IbkNwnaFPsmtxmz9 4VLYAMc4h8CO/fIuM5nnxGf9c4fIySL2/ju7aKwGBxDIk1FT9vNmvd66g7Aex5K0bN7I OfQQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715967655; x=1716572455; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lYXDUrkpOH74UzCatXaIUkTb9X4azvwOXkFfc0kL1is=; b=MSTf7Jn4iE664YP0IYVSN8amgrBBFL3cnwmHOm/IDHuC5G+MmxuKNtMwU0I7FWwNTq DlX3HauGRXsjU9mChL6Pn4hVZ1C0OWxxkvqmPKN2jd3/bMh20OjjzC3C23xweQ3jXYsJ Mds5NJNlOYz3LqRNyVq0kehmovDVjY+x21TOurhB0eNkO/Fj0hng+FY5umvGaLIlaalc AznLgVT9K6vG8mowoLrspbKURRP5JM8nMo0xd6O1RwmFiy+loAOrE961PtJf3escNX18 iT85IJxuLG+0AkPiVCzGpJUwAXXVn7ta2icHsJJfRbMpCcSvBZ416JZXOlr6LymgYlE1 OjYQ== X-Gm-Message-State: AOJu0Yy+0oYJR9GE4AzPDGVsa2hSGraO+VMVzUKLaiDbdmS/XGqxn2NZ DyOiSZOzglbwJCygvvkAVZMTOigL8TrDSBGR+6Q+Vo9/FdgvW9gQWlFM4rqUb2c2lojWzAvrYRV l75k= 
X-Google-Smtp-Source: AGHT+IEMx7COjsJticqmW2zTmO8G3HXjTTgUVM2jeYoNS2SkOmATLOAHpsy5lj9EQ9ziDXuSvloGxw== X-Received: by 2002:a17:902:e881:b0:1e3:dfdd:21bd with SMTP id d9443c01a7336-1ef43f51ff5mr217367815ad.55.1715967654849; Fri, 17 May 2024 10:40:54 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1ef0bf31032sm158830485ad.131.2024.05.17.10.40.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 17 May 2024 10:40:54 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Tetsuya Mukawa Subject: [PATCH v7 9/9] net/null: use generic SW stats Date: Fri, 17 May 2024 10:35:16 -0700 Message-ID: <20240517174044.90952-10-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517174044.90952-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517174044.90952-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use the new common code for statistics. This also fixes the bug that this driver was not accounting for bytes. Signed-off-by: Stephen Hemminger --- drivers/net/null/rte_eth_null.c | 80 +++++++-------------------------- 1 file changed, 17 insertions(+), 63 deletions(-) diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c index f4ed3b8a7f..7786982732 100644 --- a/drivers/net/null/rte_eth_null.c +++ b/drivers/net/null/rte_eth_null.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -37,8 +38,8 @@ struct null_queue { struct rte_mempool *mb_pool; struct rte_mbuf *dummy_packet; - RTE_ATOMIC(uint64_t) rx_pkts; - RTE_ATOMIC(uint64_t) tx_pkts; + struct rte_eth_counters tx_stats; + struct rte_eth_counters rx_stats; }; struct pmd_options { @@ -99,11 +100,9 @@ eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) bufs[i]->data_len = (uint16_t)packet_size; bufs[i]->pkt_len = packet_size; bufs[i]->port = h->internals->port_id; + rte_eth_count_mbuf(&h->rx_stats, bufs[i]); } - /* NOTE: review for potential ordering optimization */ - rte_atomic_fetch_add_explicit(&h->rx_pkts, i, rte_memory_order_seq_cst); - return i; } @@ -127,11 +126,9 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) bufs[i]->data_len = (uint16_t)packet_size; bufs[i]->pkt_len = packet_size; bufs[i]->port = h->internals->port_id; + rte_eth_count_mbuf(&h->rx_stats, bufs[i]); } - /* NOTE: review for potential ordering optimization */ - rte_atomic_fetch_add_explicit(&h->rx_pkts, i, rte_memory_order_seq_cst); - return i; } @@ -151,11 +148,10 @@ eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) if ((q == NULL) || (bufs == NULL)) return 0; - for (i = 0; i < nb_bufs; i++) + for (i = 0; i < nb_bufs; i++) { + rte_eth_count_mbuf(&h->tx_stats, bufs[i]); rte_pktmbuf_free(bufs[i]); - - /* NOTE: review for potential ordering optimization */ - rte_atomic_fetch_add_explicit(&h->tx_pkts, i, rte_memory_order_seq_cst); + } return i; } @@ -174,12 +170,10 @@ eth_null_copy_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) for (i = 0; i < nb_bufs; i++) { rte_memcpy(h->dummy_packet, rte_pktmbuf_mtod(bufs[i], void *), packet_size); + rte_eth_count_mbuf(&h->tx_stats, bufs[i]); rte_pktmbuf_free(bufs[i]); } - /* NOTE: review for potential ordering optimization */ - 
rte_atomic_fetch_add_explicit(&h->tx_pkts, i, rte_memory_order_seq_cst); - return i; } @@ -322,60 +316,20 @@ eth_dev_info(struct rte_eth_dev *dev, } static int -eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats) +eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned int i, num_stats; - unsigned long rx_total = 0, tx_total = 0; - const struct pmd_internals *internal; - - if ((dev == NULL) || (igb_stats == NULL)) - return -EINVAL; - - internal = dev->data->dev_private; - num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS, - RTE_MIN(dev->data->nb_rx_queues, - RTE_DIM(internal->rx_null_queues))); - for (i = 0; i < num_stats; i++) { - /* NOTE: review for atomic access */ - igb_stats->q_ipackets[i] = - internal->rx_null_queues[i].rx_pkts; - rx_total += igb_stats->q_ipackets[i]; - } - - num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS, - RTE_MIN(dev->data->nb_tx_queues, - RTE_DIM(internal->tx_null_queues))); - for (i = 0; i < num_stats; i++) { - /* NOTE: review for atomic access */ - igb_stats->q_opackets[i] = - internal->tx_null_queues[i].tx_pkts; - tx_total += igb_stats->q_opackets[i]; - } - - igb_stats->ipackets = rx_total; - igb_stats->opackets = tx_total; - - return 0; + return rte_eth_counters_stats_get(dev, + offsetof(struct null_queue, tx_stats), + offsetof(struct null_queue, rx_stats), + stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned int i; - struct pmd_internals *internal; - - if (dev == NULL) - return -EINVAL; - - internal = dev->data->dev_private; - for (i = 0; i < RTE_DIM(internal->rx_null_queues); i++) - /* NOTE: review for atomic access */ - internal->rx_null_queues[i].rx_pkts = 0; - for (i = 0; i < RTE_DIM(internal->tx_null_queues); i++) - /* NOTE: review for atomic access */ - internal->tx_null_queues[i].tx_pkts = 0; - - return 0; + return rte_eth_counters_reset(dev, + offsetof(struct null_queue, tx_stats), + offsetof(struct null_queue, rx_stats)); } static void
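Editor's note: the three driver conversions in this part of the series (net/ring, net/tap, net/null) all follow the same shape. Each per-queue structure embeds the new counters object, the fast path counts every accepted mbuf, and stats_get()/stats_reset() shrink to a single call that is handed the byte offset of that embedded object inside the driver-private queue structure. The sketch below is a minimal, self-contained illustration of that offsetof() technique; it deliberately uses made-up names (struct sw_counters, struct demo_queue, sw_stats_get) rather than the real rte_eth_counters helpers, whose header file name is not visible in this archive.

/* build: cc -Wall demo_counters.c -o demo_counters */
#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

struct sw_counters {
	uint64_t packets;	/* packets seen on this queue */
	uint64_t bytes;		/* bytes seen on this queue */
};

/* Driver-private queue layout; only the driver knows where the counters sit. */
struct demo_queue {
	void *ring;			/* driver-specific handle */
	struct sw_counters stats;	/* embedded SW counters */
};

/*
 * Generic accumulation: given opaque per-queue pointers and the byte offset
 * of the embedded counters, sum packets and bytes. One helper like this can
 * serve any driver that embeds the same counter structure.
 */
static void
sw_stats_get(void *const *queues, unsigned int nb_queues, size_t off,
	     uint64_t *pkts, uint64_t *bytes)
{
	unsigned int i;

	*pkts = 0;
	*bytes = 0;
	for (i = 0; i < nb_queues; i++) {
		const struct sw_counters *c =
			(const struct sw_counters *)((const char *)queues[i] + off);

		*pkts += c->packets;
		*bytes += c->bytes;
	}
}

int
main(void)
{
	struct demo_queue q0 = { .stats = { .packets = 3, .bytes = 192 } };
	struct demo_queue q1 = { .stats = { .packets = 1, .bytes = 64 } };
	void *queues[] = { &q0, &q1 };
	uint64_t pkts, bytes;

	sw_stats_get(queues, 2, offsetof(struct demo_queue, stats), &pkts, &bytes);
	printf("packets=%" PRIu64 " bytes=%" PRIu64 "\n", pkts, bytes);
	return 0;
}

This is also why eth_stats_get() in the ring driver passes the same offset twice: struct ring_queue is used for both RX and TX queues, so the two offsets coincide. The tap conversion additionally drops its private rx_nombuf counter and increments dev->data->rx_mbuf_alloc_failed instead, so allocation failures are reported through the ethdev layer like they are for other drivers.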
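A second detail worth calling out from the net/ring conversion above: byte counts have to be captured before rte_ring_enqueue_burst(), because once an mbuf has been handed to the ring the consumer may free or rewrite it. The patch snapshots the lengths into an alloca() buffer; the sketch below shows the same ordering with a variable-length array. It is a hedged illustration, not the driver code: struct sw_counters and demo_ring_tx are the same made-up stand-ins used in the previous sketch, and only rte_ring_enqueue_burst() and rte_pktmbuf_pkt_len() are real DPDK calls.

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

struct sw_counters {
	uint64_t packets;
	uint64_t bytes;
};

static uint16_t
demo_ring_tx(struct rte_ring *rng, struct sw_counters *stats,
	     struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	uint16_t i, nb_tx;

	if (nb_bufs == 0)
		return 0;

	uint32_t sizes[nb_bufs];	/* VLA in place of the patch's alloca() */

	/* Snapshot lengths while the mbufs still belong to this core. */
	for (i = 0; i < nb_bufs; i++)
		sizes[i] = rte_pktmbuf_pkt_len(bufs[i]);

	nb_tx = (uint16_t)rte_ring_enqueue_burst(rng, (void **)bufs, nb_bufs, NULL);

	/* Count only what the ring actually accepted. */
	for (i = 0; i < nb_tx; i++) {
		stats->packets++;
		stats->bytes += sizes[i];
	}

	return nb_tx;
}

The same reasoning explains why the RX paths count only after the dequeue or receive call returns: at that point the mbufs are owned by the receiving core, so reading pkt_len is safe.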