From patchwork Fri May 10 05:01:21 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140013
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [RFC 1/3] ethdev: add internal helper for SW driver statistics
Date: Thu, 9 May 2024 22:01:21 -0700
Message-ID: <20240510050507.14381-2-stephen@networkplumber.org>
In-Reply-To: <20240510050507.14381-1-stephen@networkplumber.org>
References: <20240425174617.2126159-1-ferruh.yigit@amd.com>
 <20240510050507.14381-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

This clones the statistics update code from virtio for use by other
drivers. It uses native uint64_t counters on 64 bit platforms and
atomic operations on 32 bit platforms.
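[Editorial illustration, not part of the patch: the 64 bit / 32 bit split described above can be sketched with standard C11 atomics in place of DPDK's rte_stdatomic wrappers. All names here (counter_t, counter_add, counter_read) are hypothetical, and the UINTPTR_MAX test merely stands in for RTE_ARCH_64.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* On 64 bit, an aligned uint64_t load/store cannot tear, so a plain
 * variable is enough. On 32 bit the compiler may split the 64 bit
 * access into two 32 bit accesses, so relaxed atomics are used: they
 * provide atomicity without paying for ordering the counters don't need. */
#if UINTPTR_MAX == UINT64_MAX		/* stand-in for RTE_ARCH_64 */
typedef uint64_t counter_t;

static inline void counter_add(counter_t *c, uint32_t v)
{
	*c += v;			/* single aligned 64 bit store */
}

static inline uint64_t counter_read(const counter_t *c)
{
	return *c;
}
#else
typedef _Atomic uint64_t counter_t;

static inline void counter_add(counter_t *c, uint32_t v)
{
	atomic_fetch_add_explicit(c, v, memory_order_relaxed);
}

static inline uint64_t counter_read(const counter_t *c)
{
	return atomic_load_explicit((counter_t *)c, memory_order_relaxed);
}
#endif
```

Either branch exposes the same add/read interface, so code built on top of it is identical on both architectures.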
Signed-off-by: Stephen Hemminger
---
 lib/ethdev/ethdev_swstats.c | 294 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_swstats.h |  60 ++++++++
 lib/ethdev/meson.build      |   2 +
 lib/ethdev/version.map      |   7 +
 4 files changed, 363 insertions(+)
 create mode 100644 lib/ethdev/ethdev_swstats.c
 create mode 100644 lib/ethdev/ethdev_swstats.h

diff --git a/lib/ethdev/ethdev_swstats.c b/lib/ethdev/ethdev_swstats.c
new file mode 100644
index 0000000000..81b9ac13b5
--- /dev/null
+++ b/lib/ethdev/ethdev_swstats.c
@@ -0,0 +1,294 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Stephen Hemminger
+ */
+
+#include
+
+#include
+#include
+
+#include "rte_ethdev.h"
+#include "ethdev_swstats.h"
+
+/*
+ * Handling of 64 bit counters to avoid problems with load/store tearing
+ * on 32 bit. A store of an aligned 64 bit value never gets separated on
+ * a 64 bit platform, but on 32 bit atomics must be used.
+ */
+#ifdef RTE_ARCH_64
+typedef uint64_t eth_counter_t;
+
+static inline void
+eth_counter_add(eth_counter_t *counter, uint32_t val)
+{
+	*counter += val;
+}
+
+static inline uint64_t
+eth_counter_read(const eth_counter_t *counter)
+{
+	return *counter;
+}
+
+static inline void
+eth_counter_reset(eth_counter_t *counter)
+{
+	*counter = 0;
+}
+#else
+static inline void
+eth_counter_add(eth_counter_t *counter, uint32_t val)
+{
+	rte_atomic_fetch_add_explicit(counter, val, rte_memory_order_relaxed);
+}
+
+static inline uint64_t
+eth_counter_read(const eth_counter_t *counter)
+{
+	return rte_atomic_load_explicit(counter, rte_memory_order_relaxed);
+}
+
+static inline void
+eth_counter_reset(eth_counter_t *counter)
+{
+	rte_atomic_store_explicit(counter, 0, rte_memory_order_relaxed);
+}
+#endif
+
+static void
+eth_qsw_reset(struct rte_eth_qsw_stats *qstats)
+{
+	unsigned int i;
+
+	eth_counter_reset(&qstats->packets);
+	eth_counter_reset(&qstats->bytes);
+	eth_counter_reset(&qstats->errors);
+	eth_counter_reset(&qstats->multicast);
+	eth_counter_reset(&qstats->broadcast);
+
+	for (i = 0; i < RTE_DIM(qstats->size_bins); i++)
+		eth_counter_reset(&qstats->size_bins[i]);
+}
+
+void
+rte_eth_qsw_update(struct rte_eth_qsw_stats *qstats, const struct rte_mbuf *mbuf)
+{
+	uint32_t s = mbuf->pkt_len;
+	uint32_t bin;
+	const struct rte_ether_addr *ea;
+
+	if (s == 64) {
+		bin = 1;
+	} else if (s > 64 && s < 1024) {
+		/* count zeros, and offset into correct bin */
+		bin = (sizeof(s) * 8) - rte_clz32(s) - 5;
+	} else if (s < 64) {
+		bin = 0;
+	} else if (s < 1519) {
+		bin = 6;
+	} else {
+		bin = 7;
+	}
+
+	eth_counter_add(&qstats->packets, 1);
+	eth_counter_add(&qstats->bytes, s);
+	eth_counter_add(&qstats->size_bins[bin], 1);
+
+	ea = rte_pktmbuf_mtod(mbuf, const struct rte_ether_addr *);
+	if (rte_is_multicast_ether_addr(ea)) {
+		if (rte_is_broadcast_ether_addr(ea))
+			eth_counter_add(&qstats->broadcast, 1);
+		else
+			eth_counter_add(&qstats->multicast, 1);
+	}
+}
+
+void
+rte_eth_qsw_error_inc(struct rte_eth_qsw_stats *qstats)
+{
+	eth_counter_add(&qstats->errors, 1);
+}
+
+int
+rte_eth_qsw_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	unsigned int i;
+	uint64_t packets, bytes, errors;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		/* assumes that rte_eth_qsw_stats is at start of the queue structure */
+		const struct rte_eth_qsw_stats *qstats = dev->data->tx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		packets = eth_counter_read(&qstats->packets);
+		bytes = eth_counter_read(&qstats->bytes);
+		errors = eth_counter_read(&qstats->errors);
+
+		stats->opackets += packets;
+		stats->obytes += bytes;
+		stats->oerrors += errors;
+
+		if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+			stats->q_opackets[i] = packets;
+			stats->q_obytes[i] = bytes;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		/* assumes that rte_eth_qsw_stats is at start of the queue structure */
+		const struct rte_eth_qsw_stats *qstats = dev->data->rx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		packets = eth_counter_read(&qstats->packets);
+		bytes = eth_counter_read(&qstats->bytes);
+		errors = eth_counter_read(&qstats->errors);
+
+		stats->ipackets += packets;
+		stats->ibytes += bytes;
+		stats->ierrors += errors;
+
+		if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+			stats->q_ipackets[i] = packets;
+			stats->q_ibytes[i] = bytes;
+		}
+	}
+
+	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	return 0;
+}
+
+int
+rte_eth_qsw_stats_reset(struct rte_eth_dev *dev)
+{
+	unsigned int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct rte_eth_qsw_stats *qstats = dev->data->tx_queues[i];
+
+		if (qstats != NULL)
+			eth_qsw_reset(qstats);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct rte_eth_qsw_stats *qstats = dev->data->rx_queues[i];
+
+		if (qstats != NULL)
+			eth_qsw_reset(qstats);
+	}
+
+	return 0;
+}
+
+struct xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	size_t offset;
+};
+
+/* [rt]x_qX_ is prepended to the name string here */
+static const struct xstats_name_off eth_swstats_strings[] = {
+	{"good_packets", offsetof(struct rte_eth_qsw_stats, packets)},
+	{"good_bytes", offsetof(struct rte_eth_qsw_stats, bytes)},
+	{"errors", offsetof(struct rte_eth_qsw_stats, errors)},
+	{"multicast_packets", offsetof(struct rte_eth_qsw_stats, multicast)},
+	{"broadcast_packets", offsetof(struct rte_eth_qsw_stats, broadcast)},
+	{"undersize_packets", offsetof(struct rte_eth_qsw_stats, size_bins[0])},
+	{"size_64_packets", offsetof(struct rte_eth_qsw_stats, size_bins[1])},
+	{"size_65_127_packets", offsetof(struct rte_eth_qsw_stats, size_bins[2])},
+	{"size_128_255_packets", offsetof(struct rte_eth_qsw_stats, size_bins[3])},
+	{"size_256_511_packets", offsetof(struct rte_eth_qsw_stats, size_bins[4])},
+	{"size_512_1023_packets", offsetof(struct rte_eth_qsw_stats, size_bins[5])},
+	{"size_1024_1518_packets", offsetof(struct rte_eth_qsw_stats, size_bins[6])},
+	{"size_1519_max_packets", offsetof(struct rte_eth_qsw_stats, size_bins[7])},
+};
+#define NUM_SWSTATS_XSTATS RTE_DIM(eth_swstats_strings)
+
+int
+rte_eth_qsw_xstats_get_names(struct rte_eth_dev *dev,
+			     struct rte_eth_xstat_name *xstats_names,
+			     __rte_unused unsigned int limit)
+{
+	unsigned int i, t, count = 0;
+
+	if (xstats_names == NULL)
+		return (dev->data->nb_tx_queues + dev->data->nb_rx_queues) * NUM_SWSTATS_XSTATS;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		const void *rxq = dev->data->rx_queues[i];
+
+		if (rxq == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
+				 "rx_q%u_%s", i, eth_swstats_strings[t].name);
+			count++;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		const void *txq = dev->data->tx_queues[i];
+
+		if (txq == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
+				 "tx_q%u_%s", i, eth_swstats_strings[t].name);
+			count++;
+		}
+	}
+	return count;
+}
+
+int
+rte_eth_qsw_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n)
+{
+	unsigned int i, t, count = 0;
+	const unsigned int nstats
+		= (dev->data->nb_tx_queues + dev->data->nb_rx_queues) * NUM_SWSTATS_XSTATS;
+
+	if (n < nstats)
+		return nstats;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		/* assumes that rte_eth_qsw_stats is at start of the queue structure */
+		const struct rte_eth_qsw_stats *qstats = dev->data->rx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			const uint64_t *valuep
+				= (const uint64_t *)((const char *)qstats
+						     + eth_swstats_strings[t].offset);
+
+			xstats[count].value = *valuep;
+			xstats[count].id = count;
+			++count;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		const struct rte_eth_qsw_stats *qstats = dev->data->tx_queues[i];
+
+		if (qstats == NULL)
+			continue;
+
+		for (t = 0; t < NUM_SWSTATS_XSTATS; t++) {
+			const uint64_t *valuep
+				= (const uint64_t *)((const char *)qstats
+						     + eth_swstats_strings[t].offset);
+
+			xstats[count].value = *valuep;
+			xstats[count].id = count;
+			++count;
+		}
+	}
+
+	return count;
+}
diff --git a/lib/ethdev/ethdev_swstats.h b/lib/ethdev/ethdev_swstats.h
new file mode 100644
index 0000000000..6309107128
--- /dev/null
+++ b/lib/ethdev/ethdev_swstats.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Stephen Hemminger
+ */
+
+#ifndef _RTE_ETHDEV_SWSTATS_H_
+#define _RTE_ETHDEV_SWSTATS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include
+
+#ifdef RTE_ARCH_64
+typedef uint64_t eth_counter_t;
+#else
+typedef RTE_ATOMIC(uint64_t) eth_counter_t;
+#endif
+
+struct rte_eth_qsw_stats {
+	eth_counter_t packets;
+	eth_counter_t bytes;
+	eth_counter_t errors;
+	eth_counter_t multicast;
+	eth_counter_t broadcast;
+	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc. */
+	eth_counter_t size_bins[8];
+};
+
+__rte_internal
+void
+rte_eth_qsw_update(struct rte_eth_qsw_stats *stats, const struct rte_mbuf *mbuf);
+
+__rte_internal
+void
+rte_eth_qsw_error_inc(struct rte_eth_qsw_stats *stats);
+
+__rte_internal
+int
+rte_eth_qsw_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
+
+__rte_internal
+int
+rte_eth_qsw_stats_reset(struct rte_eth_dev *dev);
+
+__rte_internal
+int
+rte_eth_qsw_xstats_get_names(struct rte_eth_dev *dev,
+			     struct rte_eth_xstat_name *xstats_names,
+			     unsigned int limit);
+__rte_internal
+int
+rte_eth_qsw_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		       unsigned int n);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_ETHDEV_SWSTATS_H_ */
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index f1d2586591..7ce29a46d4 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -3,6 +3,7 @@
 sources = files(
         'ethdev_driver.c',
+        'ethdev_swstats.c',
         'ethdev_private.c',
         'ethdev_profile.c',
         'ethdev_trace_points.c',
@@ -42,6 +43,7 @@ driver_sdk_headers += files(
         'ethdev_driver.h',
         'ethdev_pci.h',
         'ethdev_vdev.h',
+        'ethdev_swstats.h',
 )
 
 if is_linux
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 79f6f5293b..32ebe5ea09 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -358,4 +358,11 @@ INTERNAL {
 	rte_eth_switch_domain_alloc;
 	rte_eth_switch_domain_free;
 	rte_flow_fp_default_ops;
+
+	rte_eth_qsw_error_inc;
+	rte_eth_qsw_stats_get;
+	rte_eth_qsw_stats_reset;
+	rte_eth_qsw_update;
+	rte_eth_qsw_xstats_get;
+	rte_eth_qsw_xstats_get_names;
 };
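[Editorial illustration, not part of the patch: the helpers above rely on each driver placing the stats block first in its per-queue structure, as the "assumes that rte_eth_qsw_stats is at start of the queue structure" comments note. A minimal self-contained sketch of that layout trick follows; plain uint64_t fields and hypothetical names (qsw_stats, my_txq, sum_packets) stand in for the DPDK types.]

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for struct rte_eth_qsw_stats (hypothetical, simplified). */
struct qsw_stats {
	uint64_t packets;
	uint64_t bytes;
};

/* A driver's private queue structure: the stats block MUST be the first
 * member, so a pointer to the queue is also a valid pointer to its stats. */
struct my_txq {
	struct qsw_stats stats;
	uint16_t queue_id;
	/* descriptor ring, mbuf pointers, etc. */
};

/* Mirrors how rte_eth_qsw_stats_get() walks dev->data->tx_queues[]:
 * each opaque queue pointer is read as the stats block at its start,
 * and NULL (unconfigured) queues are skipped. */
static uint64_t
sum_packets(void *const queues[], unsigned int n)
{
	uint64_t total = 0;
	unsigned int i;

	for (i = 0; i < n; i++) {
		const struct qsw_stats *s = queues[i];

		if (s != NULL)
			total += s->packets;
	}
	return total;
}
```

Keeping the stats block at offset zero is what lets the generic code aggregate over the opaque `void *` queue arrays without any per-driver callback; the trade-off is an unchecked layout convention that every adopting driver has to honor.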