From patchwork Tue May 10 20:17:17 2022
X-Patchwork-Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
X-Patchwork-Id: 111001
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com,
 i.maximets@ovn.org
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH 2/5] net/vhost: move to Vhost library stats API
Date: Tue, 10 May 2022 22:17:17 +0200
Message-Id: <20220510201720.1262368-3-maxime.coquelin@redhat.com>
In-Reply-To: <20220510201720.1262368-1-maxime.coquelin@redhat.com>
References: <20220510201720.1262368-1-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions

Now that the Vhost library exposes statistics APIs, this patch replaces
the Vhost PMD extended statistics implementation with calls to the new
API. It also makes it possible to report counters that cannot be
implemented at the PMD level.
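
As background, the Vhost library statistics calls used by the PMD below
follow the usual two-pass DPDK pattern: a first call with a NULL array
returns the number of per-virtqueue counters, a second call fills
caller-provided arrays. The sketch below is illustrative only and not part
of the patch; vid and queue_id are placeholders for a registered vhost
device and virtqueue, and statistics must have been enabled when the
device was registered.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_vhost.h>

/* Illustrative only: dump all counters of one virtqueue of a vhost device.
 * Assumes the device was registered with RTE_VHOST_USER_NET_STATS_ENABLE.
 */
static int
dump_vring_stats(int vid, uint16_t queue_id)
{
	struct rte_vhost_stat_name *names = NULL;
	struct rte_vhost_stat *stats = NULL;
	int ret = -1;
	int count, i;

	/* First pass: a NULL array returns the number of counters. */
	count = rte_vhost_vring_stats_get_names(vid, queue_id, NULL, 0);
	if (count <= 0)
		return count;

	names = calloc(count, sizeof(*names));
	stats = calloc(count, sizeof(*stats));
	if (names == NULL || stats == NULL)
		goto out;

	/* Second pass: fill the caller-provided arrays. */
	if (rte_vhost_vring_stats_get_names(vid, queue_id, names, count) != count)
		goto out;
	if (rte_vhost_vring_stats_get(vid, queue_id, stats, count) != count)
		goto out;

	for (i = 0; i < count; i++)
		printf("%s: %" PRIu64 "\n", names[stats[i].id].name, stats[i].value);

	ret = 0;
out:
	free(names);
	free(stats);
	return ret;
}
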
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
 drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++-------------------
 1 file changed, 122 insertions(+), 226 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a248a65df4..8dee629fb0 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = {
 	}
 };
 
-enum vhost_xstats_pkts {
-	VHOST_UNDERSIZE_PKT = 0,
-	VHOST_64_PKT,
-	VHOST_65_TO_127_PKT,
-	VHOST_128_TO_255_PKT,
-	VHOST_256_TO_511_PKT,
-	VHOST_512_TO_1023_PKT,
-	VHOST_1024_TO_1522_PKT,
-	VHOST_1523_TO_MAX_PKT,
-	VHOST_BROADCAST_PKT,
-	VHOST_MULTICAST_PKT,
-	VHOST_UNICAST_PKT,
-	VHOST_PKT,
-	VHOST_BYTE,
-	VHOST_MISSED_PKT,
-	VHOST_ERRORS_PKT,
-	VHOST_ERRORS_FRAGMENTED,
-	VHOST_ERRORS_JABBER,
-	VHOST_UNKNOWN_PROTOCOL,
-	VHOST_XSTATS_MAX,
-};
-
 struct vhost_stats {
 	uint64_t pkts;
 	uint64_t bytes;
 	uint64_t missed_pkts;
-	uint64_t xstats[VHOST_XSTATS_MAX];
 };
 
 struct vhost_queue {
@@ -140,138 +117,92 @@ struct rte_vhost_vring_state {
 
 static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
 
-#define VHOST_XSTATS_NAME_SIZE 64
-
-struct vhost_xstats_name_off {
-	char name[VHOST_XSTATS_NAME_SIZE];
-	uint64_t offset;
-};
-
-/* [rx]_ is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
-	{"good_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
-	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
-	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
-	{"broadcast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
-	{"multicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
-	{"unicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
-	{"undersize_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
-	{"size_64_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
-	{"size_65_to_127_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
-	{"size_128_to_255_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
-	{"size_256_to_511_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
-	{"size_512_to_1023_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
-	{"size_1024_to_1522_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
-	{"size_1523_to_max_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
-	{"errors_with_bad_CRC",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-	{"fragmented_errors",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
-	{"jabber_errors",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
-	{"unknown_protos_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
-};
-
-/* [tx]_ is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
-	{"good_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
-	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
-	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
-	{"broadcast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
-	{"multicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
-	{"unicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
-	{"undersize_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
-	{"size_64_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
-	{"size_65_to_127_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
-	{"size_128_to_255_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
-	{"size_256_to_511_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
-	{"size_512_to_1023_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
-	{"size_1024_to_1522_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
-	{"size_1523_to_max_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
-	{"errors_with_bad_CRC",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-};
-
-#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \
-				sizeof(vhost_rxport_stat_strings[0]))
-
-#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \
-				sizeof(vhost_txport_stat_strings[0]))
-
 static int
 vhost_dev_xstats_reset(struct rte_eth_dev *dev)
 {
-	struct vhost_queue *vq = NULL;
-	unsigned int i = 0;
+	struct vhost_queue *vq;
+	int ret, i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		vq = dev->data->rx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
+
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		vq = dev->data->tx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
 
 	return 0;
 }
 
 static int
-vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
 			   struct rte_eth_xstat_name *xstats_names,
-			   unsigned int limit __rte_unused)
+			   unsigned int limit)
 {
-	unsigned int t = 0;
-	int count = 0;
-	int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
+	struct rte_vhost_stat_name *name;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
 
-	if (!xstats_names)
+		nstats += ret;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
+	}
+
+	if (!xstats_names || limit < (unsigned int)nstats)
 		return nstats;
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "rx_%s", vhost_rxport_stat_strings[t].name);
-		count++;
-	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "tx_%s", vhost_txport_stat_strings[t].name);
-		count++;
+
+	name = calloc(nstats, sizeof(*name));
+	if (!name)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
 	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++)
+		strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
+
+	free(name);
+
 	return count;
 }
 
@@ -279,86 +210,67 @@ static int
 vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		     unsigned int n)
 {
-	unsigned int i;
-	unsigned int t;
-	unsigned int count = 0;
-	struct vhost_queue *vq = NULL;
-	unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
-
-	if (n < nxstats)
-		return nxstats;
-
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_rx_queues; i++) {
-			vq = dev->data->rx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_rxport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+	struct rte_vhost_stat *stats;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_tx_queues; i++) {
-			vq = dev->data->tx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_txport_stat_strings[t].offset);
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
+	}
+
+	if (!xstats || n < (unsigned int)nstats)
+		return nstats;
+
+	stats = calloc(nstats, sizeof(*stats));
+	if (!stats)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0) {
+			free(stats);
+			return ret;
 		}
-		xstats[count].id = count;
-		count++;
+
+		count += ret;
 	}
-	return count;
-}
 
-static inline void
-vhost_count_xcast_packets(struct vhost_queue *vq,
-				struct rte_mbuf *mbuf)
-{
-	struct rte_ether_addr *ea = NULL;
-	struct vhost_stats *pstats = &vq->stats;
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0) {
+			free(stats);
+			return ret;
+		}
 
-	ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
-	if (rte_is_multicast_ether_addr(ea)) {
-		if (rte_is_broadcast_ether_addr(ea))
-			pstats->xstats[VHOST_BROADCAST_PKT]++;
-		else
-			pstats->xstats[VHOST_MULTICAST_PKT]++;
-	} else {
-		pstats->xstats[VHOST_UNICAST_PKT]++;
+		count += ret;
 	}
-}
 
-static __rte_always_inline void
-vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
-{
-	uint32_t pkt_len = 0;
-	uint64_t index;
-	struct vhost_stats *pstats = &vq->stats;
-
-	pstats->xstats[VHOST_PKT]++;
-	pkt_len = buf->pkt_len;
-	if (pkt_len == 64) {
-		pstats->xstats[VHOST_64_PKT]++;
-	} else if (pkt_len > 64 && pkt_len < 1024) {
-		index = (sizeof(pkt_len) * 8)
-			- __builtin_clz(pkt_len) - 5;
-		pstats->xstats[index]++;
-	} else {
-		if (pkt_len < 64)
-			pstats->xstats[VHOST_UNDERSIZE_PKT]++;
-		else if (pkt_len <= 1522)
-			pstats->xstats[VHOST_1024_TO_1522_PKT]++;
-		else if (pkt_len > 1522)
-			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
-	}
-	vhost_count_xcast_packets(vq, buf);
+	for (i = 0; i < count; i++) {
+		xstats[i].id = stats[i].id;
+		xstats[i].value = stats[i].value;
+	}
+
+	free(stats);
+
+	return nstats;
 }
 
 static uint16_t
@@ -402,9 +314,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			rte_vlan_strip(bufs[i]);
 
 		r->stats.bytes += bufs[i]->pkt_len;
-		r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
-
-		vhost_update_single_packet_xstats(r, bufs[i]);
 	}
 
 out:
@@ -461,10 +370,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			break;
 	}
 
-	for (i = 0; likely(i < nb_tx); i++) {
+	for (i = 0; likely(i < nb_tx); i++)
 		nb_bytes += bufs[i]->pkt_len;
-		vhost_update_single_packet_xstats(r, bufs[i]);
-	}
 
 	nb_missed = nb_bufs - nb_tx;
 
@@ -472,17 +379,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	r->stats.bytes += nb_bytes;
 	r->stats.missed_pkts += nb_missed;
 
-	r->stats.xstats[VHOST_BYTE] += nb_bytes;
-	r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
-	r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
-
-	/* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
-	 * ifHCOutBroadcastPkts counters are increased when packets are not
-	 * transmitted successfully.
-	 */
-	for (i = nb_tx; i < nb_bufs; i++)
-		vhost_count_xcast_packets(r, bufs[i]);
-
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
 out:
@@ -1566,7 +1462,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
 	int ret = 0;
 	char *iface_name;
 	uint16_t queues;
-	uint64_t flags = 0;
+	uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;
 	uint64_t disable_flags = 0;
 	int client_mode = 0;
 	int iommu_support = 0;
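
On the application side nothing changes with this conversion: counters are
still retrieved through the generic ethdev xstats calls, only their source
moves from the PMD-local arrays to the Vhost library; presumably for that
reason, rte_pmd_vhost_probe() now sets RTE_VHOST_USER_NET_STATS_ENABLE by
default (last hunk above). A minimal, illustrative consumer sketch,
assuming a hypothetical port_id backed by this PMD:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_ethdev.h>

/* Illustrative only: print every extended statistic of an ethdev port. */
static void
print_port_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *xstats = NULL;
	int len, i;

	/* First pass with a zero-sized array returns the number of stats. */
	len = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (len <= 0)
		return;

	names = calloc(len, sizeof(*names));
	xstats = calloc(len, sizeof(*xstats));
	if (names == NULL || xstats == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, len) != len)
		goto out;
	if (rte_eth_xstats_get(port_id, xstats, len) != len)
		goto out;

	for (i = 0; i < len; i++)
		printf("%s: %" PRIu64 "\n", names[xstats[i].id].name, xstats[i].value);
out:
	free(names);
	free(xstats);
}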