From patchwork Thu Jan 27 14:56:52 2022
X-Patchwork-Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
X-Patchwork-Id: 106615
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH 2/5] vhost: add per-virtqueue statistics support
Date: Thu, 27 Jan 2022 15:56:52 +0100
Message-Id: <20220127145655.558029-3-maxime.coquelin@redhat.com>
In-Reply-To: <20220127145655.558029-1-maxime.coquelin@redhat.com>
References: <20220127145655.558029-1-maxime.coquelin@redhat.com>

This patch introduces new APIs for the application to query and reset
per-virtqueue statistics. It also introduces a first set of generic
counters: good packets and bytes, multicast and broadcast packets, and
RFC 2819-style packet size bins.
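As an illustration, here is a minimal sketch (not part of this patch) of
how an application could consume the new API. Error handling is trimmed,
and "vid"/"queue_id" are assumed to come from the application's existing
vhost-user setup, e.g. a new_device() callback:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_vhost.h>

    static void
    dump_vring_stats(int vid, uint16_t queue_id)
    {
        struct rte_vhost_stat_name *names;
        struct rte_vhost_stat *stats;
        int count, i;

        /* Passing a NULL array asks for the number of available counters. */
        count = rte_vhost_vring_stats_get_names(vid, queue_id, NULL, 0);
        if (count <= 0)
            return;

        names = calloc(count, sizeof(*names));
        stats = calloc(count, sizeof(*stats));

        rte_vhost_vring_stats_get_names(vid, queue_id, names, count);
        rte_vhost_vring_stats_get(vid, queue_id, stats, count);

        /* stats[i].id is an index into the names array, e.g.
         * "rx_q0_good_packets" for virtqueue 1 (odd indexes are RX). */
        for (i = 0; i < count; i++)
            printf("%s: %" PRIu64 "\n",
                    names[stats[i].id].name, stats[i].value);

        free(names);
        free(stats);
    }

Statistics are only collected if the application requested it by passing
the RTE_VHOST_USER_NET_STATS_ENABLE flag to rte_vhost_driver_register();
otherwise the three APIs return a negative value.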
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/rte_vhost.h  |  89 +++++++++++++++++++++++++++++++++
 lib/vhost/socket.c     |   4 +-
 lib/vhost/version.map  |   5 ++
 lib/vhost/vhost.c      | 113 ++++++++++++++++++++++++++++++++++++++++-
 lib/vhost/vhost.h      |  18 ++++++-
 lib/vhost/virtio_net.c |  53 ++++++++++++++++++++
 6 files changed, 278 insertions(+), 4 deletions(-)

diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index b454c05868..e739091ca0 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -37,6 +37,7 @@ extern "C" {
 #define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
 #define RTE_VHOST_USER_ASYNC_COPY	(1ULL << 7)
 #define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS	(1ULL << 8)
+#define RTE_VHOST_USER_NET_STATS_ENABLE	(1ULL << 9)
 
 /* Features. */
 #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
@@ -317,6 +318,32 @@ struct rte_vhost_power_monitor_cond {
 	uint8_t match;
 };
 
+/** Maximum name length for the statistics counters */
+#define RTE_VHOST_STATS_NAME_SIZE 64
+
+/**
+ * Vhost virtqueue statistics structure
+ *
+ * This structure is used by rte_vhost_vring_stats_get() to provide
+ * virtqueue statistics to the calling application.
+ * It maps a name ID, corresponding to an index in the array returned
+ * by rte_vhost_vring_stats_get_names(), to a statistic value.
+ */
+struct rte_vhost_stat {
+	uint64_t id;    /**< The index in the stats name array. */
+	uint64_t value; /**< The statistic counter value. */
+};
+
+/**
+ * Vhost virtqueue statistic name element
+ *
+ * This structure is used by rte_vhost_vring_stats_get_names() to
+ * provide virtqueue statistics names to the calling application.
+ */
+struct rte_vhost_stat_name {
+	char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */
+};
+
 /**
  * Convert guest physical address to host virtual address
  *
@@ -1059,6 +1086,68 @@ __rte_experimental
 int
 rte_vhost_slave_config_change(int vid, bool need_reply);
 
+/**
+ * Retrieve names of statistics of a Vhost virtqueue.
+ *
+ * The 'name' and 'stats' arrays are assumed to be matched by array
+ * index: name[i].name describes stats[i].value
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @param name
+ *   array of at least 'size' elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param size
+ *   The number of elements in the name array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in
+ *   the stats name array.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+		struct rte_vhost_stat_name *name, unsigned int size);
+
+/**
+ * Retrieve statistics of a Vhost virtqueue.
+ *
+ * The 'name' and 'stats' arrays are assumed to be matched by array
+ * index: name[i].name describes stats[i].value
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @param stats
+ *   A pointer to a table of structures of type rte_vhost_stat to be
+ *   filled with virtqueue statistics ids and values.
+ * @param n
+ *   The number of elements in the stats array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in
+ *   the stats table.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+		struct rte_vhost_stat *stats, unsigned int n);
+
+/**
+ * Reset statistics of a Vhost virtqueue.
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @return
+ *   0 on success, a negative value on error.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_reset(int vid, uint16_t queue_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index c2f8013cd5..6020565fb6 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -43,6 +43,7 @@ struct vhost_user_socket {
 	bool linearbuf;
 	bool async_copy;
 	bool net_compliant_ol_flags;
+	bool stats_enabled;
 
 	/*
 	 * The "supported_features" indicates the feature bits the
@@ -228,7 +229,7 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
 	vhost_set_ifname(vid, vsocket->path, size);
 
 	vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net,
-		vsocket->net_compliant_ol_flags);
+		vsocket->net_compliant_ol_flags, vsocket->stats_enabled);
 
 	vhost_attach_vdpa_device(vid, vsocket->vdpa_dev);
 
@@ -864,6 +865,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
 	vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
 	vsocket->net_compliant_ol_flags = flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS;
+	vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE;
 
 	if (vsocket->async_copy &&
 		(flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index a7ef7f1976..b83f79c87f 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -84,6 +84,11 @@ EXPERIMENTAL {
 
 	# added in 21.11
 	rte_vhost_get_monitor_addr;
+
+	# added in 22.03
+	rte_vhost_vring_stats_get_names;
+	rte_vhost_vring_stats_get;
+	rte_vhost_vring_stats_reset;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 42c01abf25..0c6a737aca 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -28,6 +28,28 @@ struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
 
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
 
+struct vhost_vq_stats_name_off {
+	char name[RTE_VHOST_STATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
+	{"good_packets", offsetof(struct vhost_virtqueue, stats.packets)},
+	{"good_bytes", offsetof(struct vhost_virtqueue, stats.bytes)},
+	{"multicast_packets", offsetof(struct vhost_virtqueue, stats.multicast)},
+	{"broadcast_packets", offsetof(struct vhost_virtqueue, stats.broadcast)},
+	{"undersize_packets", offsetof(struct vhost_virtqueue, stats.size_bins[0])},
+	{"size_64_packets", offsetof(struct vhost_virtqueue, stats.size_bins[1])},
+	{"size_65_127_packets", offsetof(struct vhost_virtqueue, stats.size_bins[2])},
+	{"size_128_255_packets", offsetof(struct vhost_virtqueue, stats.size_bins[3])},
+	{"size_256_511_packets", offsetof(struct vhost_virtqueue, stats.size_bins[4])},
+	{"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])},
+	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
+	{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+};
+
+#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
+
 /* Called with iotlb_lock read-locked */
 uint64_t
 __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
@@ -758,7 +780,7 @@ vhost_set_ifname(int vid, const char *if_name, unsigned int if_len)
 }
 
 void
-vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
+vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats_enabled)
 {
 	struct virtio_net *dev = get_device(vid);
 
@@ -773,6 +795,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
 		dev->flags |= VIRTIO_DEV_LEGACY_OL_FLAGS;
 	else
 		dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS;
+	if (stats_enabled)
+		dev->flags |= VIRTIO_DEV_STATS_ENABLED;
+	else
+		dev->flags &= ~VIRTIO_DEV_STATS_ENABLED;
 }
 
 void
@@ -1908,5 +1934,90 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
 
 	return 0;
 }
 
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+		struct rte_vhost_stat_name *name, unsigned int size)
+{
+	struct virtio_net *dev = get_device(vid);
+	unsigned int i;
+
+	if (dev == NULL)
+		return -1;
+
+	if (queue_id >= dev->nr_vring)
+		return -1;
+
+	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+		return -1;
+
+	if (name == NULL || size < VHOST_NB_VQ_STATS)
+		return VHOST_NB_VQ_STATS;
+
+	for (i = 0; i < VHOST_NB_VQ_STATS; i++)
+		snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s",
+				(queue_id & 1) ? "rx" : "tx",
+				queue_id / 2, vhost_vq_stat_strings[i].name);
+
+	return VHOST_NB_VQ_STATS;
+}
+
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+		struct rte_vhost_stat *stats, unsigned int n)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	unsigned int i;
+
+	if (dev == NULL)
+		return -1;
+
+	if (queue_id >= dev->nr_vring)
+		return -1;
+
+	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+		return -1;
+
+	if (stats == NULL || n < VHOST_NB_VQ_STATS)
+		return VHOST_NB_VQ_STATS;
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+	for (i = 0; i < VHOST_NB_VQ_STATS; i++) {
+		stats[i].value =
+			*(uint64_t *)(((char *)vq) + vhost_vq_stat_strings[i].offset);
+		stats[i].id = i;
+	}
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return VHOST_NB_VQ_STATS;
+}
+
+int
+rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+
+	if (dev == NULL)
+		return -1;
+
+	if (queue_id >= dev->nr_vring)
+		return -1;
+
+	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+		return -1;
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+	memset(&vq->stats, 0, sizeof(vq->stats));
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 7085e0885c..4c151244c7 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -38,6 +38,8 @@
 #define VIRTIO_DEV_FEATURES_FAILED	((uint32_t)1 << 4)
 /* Used to indicate that the virtio_net tx code should fill TX ol_flags */
 #define VIRTIO_DEV_LEGACY_OL_FLAGS	((uint32_t)1 << 5)
+/* Used to indicate the application has requested statistics collection */
+#define VIRTIO_DEV_STATS_ENABLED	((uint32_t)1 << 6)
 
 /* Backend value set by guest. */
 #define VIRTIO_DEV_STOPPED -1
 
@@ -119,6 +121,18 @@ struct vring_used_elem_packed {
 	uint32_t count;
 };
 
+/**
+ * Virtqueue statistics
+ */
+struct virtqueue_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t multicast;
+	uint64_t broadcast;
+	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+	uint64_t size_bins[8];
+};
+
 /**
  * inflight async packet information
  */
@@ -235,6 +249,7 @@ struct vhost_virtqueue {
 #define VIRTIO_UNINITIALIZED_NOTIF	(-1)
 
 	struct vhost_vring_addr ring_addrs;
+	struct virtqueue_stats stats;
 } __rte_cache_aligned;
 
 /* Virtio device status as per Virtio specification */
@@ -696,7 +711,7 @@ int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx);
 void vhost_attach_vdpa_device(int vid, struct rte_vdpa_device *dev);
 
 void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
-void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags);
+void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags, bool stats_enabled);
 void vhost_enable_extbuf(int vid);
 void vhost_enable_linearbuf(int vid);
 int vhost_enable_guest_notification(struct virtio_net *dev,
@@ -873,5 +888,4 @@ mbuf_is_consumed(struct rte_mbuf *m)
 
 	return true;
 }
-
 #endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index f19713137c..550b239450 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -43,6 +43,54 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
 	return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
 }
 
+/*
+ * This function must be called with the virtqueue's access_lock taken.
+ */
+static inline void
+vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtqueue_stats *stats = &vq->stats;
+	int i;
+
+	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+		return;
+
+	for (i = 0; i < count; i++) {
+		struct rte_ether_addr *ea;
+		struct rte_mbuf *pkt = pkts[i];
+		uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt);
+
+		stats->packets++;
+		stats->bytes += pkt_len;
+
+		if (pkt_len == 64) {
+			stats->size_bins[1]++;
+		} else if (pkt_len > 64 && pkt_len < 1024) {
+			uint32_t bin;
+
+			/* count zeros, and offset into correct bin */
+			bin = (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5;
+			stats->size_bins[bin]++;
+		} else {
+			if (pkt_len < 64)
+				stats->size_bins[0]++;
+			else if (pkt_len < 1519)
+				stats->size_bins[6]++;
+			else
+				stats->size_bins[7]++;
+		}
+
+		ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *);
+		if (rte_is_multicast_ether_addr(ea)) {
+			if (rte_is_broadcast_ether_addr(ea))
+				stats->broadcast++;
+			else
+				stats->multicast++;
+		}
+	}
+}
+
 static inline void
 do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
@@ -1375,6 +1423,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	else
 		nb_tx = virtio_dev_rx_split(dev, vq, pkts, count);
 
+	vhost_queue_stats_update(dev, vq, pkts, nb_tx);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
@@ -2941,6 +2991,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		 * learning table will get updated first.
 		 */
 		pkts[0] = rarp_mbuf;
+		vhost_queue_stats_update(dev, vq, pkts, 1);
 		pkts++;
 		count -= 1;
 	}
@@ -2957,6 +3008,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		count = virtio_dev_tx_split_compliant(dev, vq,
 				mbuf_pool, pkts, count);
 	}
 
+	vhost_queue_stats_update(dev, vq, pkts, count);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
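A note on the size-bin arithmetic in vhost_queue_stats_update() above:
for 64 < pkt_len < 1024, the bin index is derived from the position of
the most significant bit, i.e. floor(log2(pkt_len)) - 5, which lands
65-127 in bin 2, 128-255 in bin 3, and so on. The following standalone
sketch, where pkt_len_to_bin() is a hypothetical helper mirroring the
patch's logic (not part of the patch), checks the mapping against the
RFC 2819 boundaries:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mirror of the binning in vhost_queue_stats_update(): bins are
     * undersize [0], 64 [1], 65-127 [2], 128-255 [3], 256-511 [4],
     * 512-1023 [5], 1024-1518 [6] and 1519-max [7]. */
    static unsigned int
    pkt_len_to_bin(uint32_t pkt_len)
    {
        if (pkt_len == 64)
            return 1;
        if (pkt_len > 64 && pkt_len < 1024)
            /* floor(log2(pkt_len)) - 5 via count-leading-zeros */
            return (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5;
        if (pkt_len < 64)
            return 0;
        if (pkt_len < 1519)
            return 6;
        return 7;
    }

    int
    main(void)
    {
        assert(pkt_len_to_bin(60) == 0);   /* undersize */
        assert(pkt_len_to_bin(64) == 1);
        assert(pkt_len_to_bin(65) == 2);
        assert(pkt_len_to_bin(127) == 2);
        assert(pkt_len_to_bin(128) == 3);
        assert(pkt_len_to_bin(511) == 4);
        assert(pkt_len_to_bin(512) == 5);
        assert(pkt_len_to_bin(1023) == 5);
        assert(pkt_len_to_bin(1024) == 6);
        assert(pkt_len_to_bin(1518) == 6);
        assert(pkt_len_to_bin(1519) == 7);
        printf("all size-bin checks passed\n");
        return 0;
    }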