From patchwork Thu Jan 27 14:56:53 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
X-Patchwork-Id: 106618
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH 3/5] net/vhost: move to Vhost library stats API
Date: Thu, 27 Jan 2022 15:56:53 +0100
Message-Id: <20220127145655.558029-4-maxime.coquelin@redhat.com>
In-Reply-To: <20220127145655.558029-1-maxime.coquelin@redhat.com>
References: <20220127145655.558029-1-maxime.coquelin@redhat.com>

Now that we have Vhost statistics APIs, this patch replaces the Vhost PMD
extended statistics implementation with calls to the new API. It enables
getting more statistics, including counters that cannot be implemented at
the PMD level.
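For reviewers not yet familiar with the new API, here is a minimal usage
sketch (not part of this patch) of how the per-virtqueue counters are
retrieved with the rte_vhost_vring_stats_* calls used in the diff below.
The helper name dump_vring_stats is made up; the two-step query (a first
call with a NULL array to learn the number of counters) mirrors what the
PMD now does:

/*
 * Minimal sketch, assuming the rte_vhost_vring_stats_* API introduced
 * earlier in this series (signatures as used in the PMD code below).
 * Prints every counter exposed for one virtqueue of a vhost device.
 */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_vhost.h>

static int
dump_vring_stats(int vid, uint16_t queue_id)
{
	struct rte_vhost_stat_name *names;
	struct rte_vhost_stat *stats;
	int count, i;

	/* First call with a NULL array to learn how many counters exist. */
	count = rte_vhost_vring_stats_get_names(vid, queue_id, NULL, 0);
	if (count <= 0)
		return count;

	names = calloc(count, sizeof(*names));
	stats = calloc(count, sizeof(*stats));
	if (names == NULL || stats == NULL)
		goto out;

	if (rte_vhost_vring_stats_get_names(vid, queue_id, names, count) != count ||
			rte_vhost_vring_stats_get(vid, queue_id, stats, count) != count)
		goto out;

	/* Each stat carries the index of its name in the names array. */
	for (i = 0; i < count; i++)
		printf("%s: %" PRIu64 "\n", names[stats[i].id].name, stats[i].value);

out:
	free(names);
	free(stats);
	return 0;
}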
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++-------------------
 1 file changed, 120 insertions(+), 228 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..bac1c0acba 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = {
 	}
 };
 
-enum vhost_xstats_pkts {
-	VHOST_UNDERSIZE_PKT = 0,
-	VHOST_64_PKT,
-	VHOST_65_TO_127_PKT,
-	VHOST_128_TO_255_PKT,
-	VHOST_256_TO_511_PKT,
-	VHOST_512_TO_1023_PKT,
-	VHOST_1024_TO_1522_PKT,
-	VHOST_1523_TO_MAX_PKT,
-	VHOST_BROADCAST_PKT,
-	VHOST_MULTICAST_PKT,
-	VHOST_UNICAST_PKT,
-	VHOST_PKT,
-	VHOST_BYTE,
-	VHOST_MISSED_PKT,
-	VHOST_ERRORS_PKT,
-	VHOST_ERRORS_FRAGMENTED,
-	VHOST_ERRORS_JABBER,
-	VHOST_UNKNOWN_PROTOCOL,
-	VHOST_XSTATS_MAX,
-};
-
 struct vhost_stats {
 	uint64_t pkts;
 	uint64_t bytes;
 	uint64_t missed_pkts;
-	uint64_t xstats[VHOST_XSTATS_MAX];
 };
 
 struct vhost_queue {
@@ -140,138 +117,92 @@ struct rte_vhost_vring_state {
 
 static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
 
-#define VHOST_XSTATS_NAME_SIZE 64
-
-struct vhost_xstats_name_off {
-	char name[VHOST_XSTATS_NAME_SIZE];
-	uint64_t offset;
-};
-
-/* [rx]_ is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
-	{"good_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
-	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
-	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
-	{"broadcast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
-	{"multicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
-	{"unicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
-	{"undersize_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
-	{"size_64_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
-	{"size_65_to_127_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
-	{"size_128_to_255_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
-	{"size_256_to_511_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
-	{"size_512_to_1023_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
-	{"size_1024_to_1522_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
-	{"size_1523_to_max_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
-	{"errors_with_bad_CRC",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-	{"fragmented_errors",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
-	{"jabber_errors",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
-	{"unknown_protos_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
-};
-
-/* [tx]_ is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
-	{"good_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
-	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
-	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
-	{"broadcast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
-	{"multicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
-	{"unicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
-	{"undersize_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
-	{"size_64_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
-	{"size_65_to_127_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
-	{"size_128_to_255_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
-	{"size_256_to_511_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
-	{"size_512_to_1023_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
-	{"size_1024_to_1522_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
-	{"size_1523_to_max_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
-	{"errors_with_bad_CRC",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-};
-
-#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \
-				sizeof(vhost_rxport_stat_strings[0]))
-
-#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \
-				sizeof(vhost_txport_stat_strings[0]))
-
 static int
 vhost_dev_xstats_reset(struct rte_eth_dev *dev)
 {
-	struct vhost_queue *vq = NULL;
-	unsigned int i = 0;
+	struct vhost_queue *vq;
+	int ret, i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		vq = dev->data->rx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
+
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		vq = dev->data->tx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
 
 	return 0;
 }
 
 static int
-vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
 			   struct rte_eth_xstat_name *xstats_names,
-			   unsigned int limit __rte_unused)
+			   unsigned int limit)
 {
-	unsigned int t = 0;
-	int count = 0;
-	int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
+	struct rte_vhost_stat_name *name;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
 
-	if (!xstats_names)
+		nstats += ret;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
+	}
+
+	if (!xstats_names || limit < (unsigned int)nstats)
 		return nstats;
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "rx_%s", vhost_rxport_stat_strings[t].name);
-		count++;
-	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		snprintf(xstats_names[count].name,
			 sizeof(xstats_names[count].name),
-			 "tx_%s", vhost_txport_stat_strings[t].name);
-		count++;
+
+	name = calloc(nstats, sizeof(*name));
+	if (!name)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
 	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++)
+		strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
+
+	free(name);
+
 	return count;
 }
 
@@ -279,86 +210,63 @@ static int
 vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		     unsigned int n)
 {
-	unsigned int i;
-	unsigned int t;
-	unsigned int count = 0;
-	struct vhost_queue *vq = NULL;
-	unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
-
-	if (n < nxstats)
-		return nxstats;
-
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_rx_queues; i++) {
-			vq = dev->data->rx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_rxport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+	struct rte_vhost_stat *stats;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_tx_queues; i++) {
-			vq = dev->data->tx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_txport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	return count;
-}
-static inline void
-vhost_count_xcast_packets(struct vhost_queue *vq,
-				struct rte_mbuf *mbuf)
-{
-	struct rte_ether_addr *ea = NULL;
-	struct vhost_stats *pstats = &vq->stats;
-
-	ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
-	if (rte_is_multicast_ether_addr(ea)) {
-		if (rte_is_broadcast_ether_addr(ea))
-			pstats->xstats[VHOST_BROADCAST_PKT]++;
-		else
-			pstats->xstats[VHOST_MULTICAST_PKT]++;
-	} else {
-		pstats->xstats[VHOST_UNICAST_PKT]++;
+	if (!xstats || n < (unsigned int)nstats)
+		return nstats;
+
+	stats = calloc(nstats, sizeof(*stats));
+	if (!stats)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0)
+			return ret;
+
+		count += ret;
 	}
-}
-static __rte_always_inline void
-vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
-{
-	uint32_t pkt_len = 0;
-	uint64_t index;
-	struct vhost_stats *pstats = &vq->stats;
-
-	pstats->xstats[VHOST_PKT]++;
-	pkt_len = buf->pkt_len;
-	if (pkt_len == 64) {
-		pstats->xstats[VHOST_64_PKT]++;
-	} else if (pkt_len > 64 && pkt_len < 1024) {
-		index = (sizeof(pkt_len) * 8)
-			- __builtin_clz(pkt_len) - 5;
-		pstats->xstats[index]++;
-	} else {
-		if (pkt_len < 64)
-			pstats->xstats[VHOST_UNDERSIZE_PKT]++;
-		else if (pkt_len <= 1522)
-			pstats->xstats[VHOST_1024_TO_1522_PKT]++;
-		else if (pkt_len > 1522)
-			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
-	}
-	vhost_count_xcast_packets(vq, buf);
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0)
+			return ret;
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++) {
+		xstats[i].id = stats[i].id;
+		xstats[i].value = stats[i].value;
+	}
+
+	free(stats);
+
+	return nstats;
 }
 
 static uint16_t
@@ -402,9 +310,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			rte_vlan_strip(bufs[i]);
 
 		r->stats.bytes += bufs[i]->pkt_len;
-		r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
-
-		vhost_update_single_packet_xstats(r, bufs[i]);
 	}
 
 out:
@@ -461,10 +366,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			break;
 	}
 
-	for (i = 0; likely(i < nb_tx); i++) {
+	for (i = 0; likely(i < nb_tx); i++)
 		nb_bytes += bufs[i]->pkt_len;
-		vhost_update_single_packet_xstats(r, bufs[i]);
-	}
 
 	nb_missed = nb_bufs - nb_tx;
 
@@ -472,17 +375,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	r->stats.bytes += nb_bytes;
 	r->stats.missed_pkts += nb_missed;
 
-	r->stats.xstats[VHOST_BYTE] += nb_bytes;
-	r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
-	r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
-
-	/* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
-	 * ifHCOutBroadcastPkts counters are increased when packets are not
-	 * transmitted successfully.
-	 */
-	for (i = nb_tx; i < nb_bufs; i++)
-		vhost_count_xcast_packets(r, bufs[i]);
-
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
 out:
@@ -1555,7 +1447,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
 	int ret = 0;
 	char *iface_name;
 	uint16_t queues;
-	uint64_t flags = 0;
+	uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;
 	uint64_t disable_flags = 0;
 	int client_mode = 0;
 	int iommu_support = 0;
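One note on the last hunk: the library-level per-virtqueue counters are only
collected when the vhost-user socket is registered with the
RTE_VHOST_USER_NET_STATS_ENABLE flag, which this patch now sets by default in
rte_pmd_vhost_probe(). An application driving the Vhost library directly would
have to do the same; a short sketch (the helper and socket path are
illustrative, not taken from this patch):

#include <stdio.h>

#include <rte_vhost.h>

/* Enable library-level per-virtqueue statistics when registering a
 * vhost-user socket directly with the Vhost library. */
static int
register_socket_with_stats(const char *path)
{
	uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;

	if (rte_vhost_driver_register(path, flags) < 0) {
		fprintf(stderr, "failed to register %s\n", path);
		return -1;
	}

	return 0;
}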