From patchwork Fri Sep  9 08:15:27 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Yang, Zhiyong"
X-Patchwork-Id: 15728
X-Patchwork-Delegate: yuanhan.liu@linux.intel.com
X-Original-To: patchwork@dpdk.org
Delivered-To: patchwork@dpdk.org
From: Zhiyong Yang <zhiyong.yang@intel.com>
To: dev@dpdk.org
Cc: yuanhan.liu@linux.intel.com, thomas.monjalon@6wind.com,
 pmatilai@redhat.com, Zhiyong Yang <zhiyong.yang@intel.com>
Date: Fri, 9 Sep 2016 16:15:27 +0800
Message-Id: <1473408927-40364-1-git-send-email-zhiyong.yang@intel.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1471608966-39077-1-git-send-email-zhiyong.yang@intel.com>
References: <1471608966-39077-1-git-send-email-zhiyong.yang@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/vhost: add pmd xstats
List-Id: patches and discussions about DPDK
Sender: "dev" <dev-bounces@dpdk.org>

This patch adds vhost PMD extended statistics (xstats) on a per-queue
basis, for applications such as OVS. The statistics counters are based
on RFC 2819 and RFC 2863 as follows:

rx/tx_good_packets
rx/tx_total_bytes
rx/tx_dropped_pkts
rx/tx_broadcast_packets
rx/tx_multicast_packets
rx/tx_ucast_packets
rx_undersize_errors
rx/tx_size_64_packets
rx/tx_size_65_to_127_packets
rx/tx_size_128_to_255_packets
rx/tx_size_256_to_511_packets
rx/tx_size_512_to_1023_packets
rx/tx_size_1024_to_1522_packets
rx/tx_size_1523_to_max_packets
rx/tx_errors
rx_fragmented_errors
rx_jabber_errors
rx_unknown_protos_packets

No API is changed or added. The existing generic xstats API is used:
rte_eth_xstats_get_names() retrieves the names of the supported vhost
xstats, rte_eth_xstats_get() retrieves their values, and
rte_eth_xstats_reset() resets them.

The usage of vhost PMD xstats is the same as for virtio PMD xstats.
For example, when the testpmd application is running in interactive
mode, vhost PMD xstats support the following two commands:

show port xstats all|<port_id>     show vhost xstats
clear port xstats all|<port_id>    reset vhost xstats

The net/virtio PMD xstats implementation (the function
virtio_update_packet_stats()) was used as a reference when
implementing this feature.

Signed-off-by: Zhiyong Yang <zhiyong.yang@intel.com>
---
Changes in v2:
1. Remove the compile-time switch.
2. Fix two code bugs.
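As a minimal application-side sketch of how these counters are consumed
through the generic ethdev xstats API (assuming an already-configured
port `port_id`; a sketch, not part of the patch):

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    /* Dump all xstats of one port, e.g. a vhost PMD port. */
    static void
    dump_xstats(uint8_t port_id)
    {
    	/* A NULL array asks the PMD only for the number of entries. */
    	int n = rte_eth_xstats_get_names(port_id, NULL, 0);
    	struct rte_eth_xstat_name *names;
    	struct rte_eth_xstat *values;
    	int i;

    	if (n <= 0)
    		return;
    	names = calloc(n, sizeof(*names));
    	values = calloc(n, sizeof(*values));
    	if (!names || !values)
    		goto out;
    	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
    	    rte_eth_xstats_get(port_id, values, n) != n)
    		goto out;
    	for (i = 0; i < n; i++)
    		printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);
    out:
    	free(names);
    	free(values);
    }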
 drivers/net/vhost/rte_eth_vhost.c | 282 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 281 insertions(+), 1 deletion(-)
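One remark before the diff: the patch addresses the vhost_xstats.stat[]
array by numeric index throughout. The index-to-counter mapping implied
by the stat-string tables below could be made self-documenting with an
enum along these lines (a sketch; the names are hypothetical, not part
of the patch):

    enum vhost_xstats_pkts {
    	VHOST_UNDERSIZE_PKT = 0,     /* stat[0], rx only */
    	VHOST_64_PKT,                /* stat[1] */
    	VHOST_65_TO_127_PKT,         /* stat[2] */
    	VHOST_128_TO_255_PKT,        /* stat[3] */
    	VHOST_256_TO_511_PKT,        /* stat[4] */
    	VHOST_512_TO_1023_PKT,       /* stat[5] */
    	VHOST_1024_TO_1522_PKT,      /* stat[6] */
    	VHOST_1523_TO_MAX_PKT,       /* stat[7] */
    	VHOST_BROADCAST_PKT,         /* stat[8] */
    	VHOST_MULTICAST_PKT,         /* stat[9] */
    	VHOST_UNICAST_PKT,           /* stat[10] */
    	VHOST_ERRORS_PKT,            /* stat[11] */
    	VHOST_FRAGMENTED_ERRORS_PKT, /* stat[12], rx only */
    	VHOST_JABBER_ERRORS_PKT,     /* stat[13], rx only */
    	VHOST_UNKNOWN_PROTOS_PKT,    /* stat[14], rx only */
    };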
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 7539cd4..8f805f3 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -72,6 +72,10 @@ static struct ether_addr base_eth_addr = {
 	}
 };
 
+struct vhost_xstats {
+	uint64_t stat[16];
+};
+
 struct vhost_queue {
 	int vid;
 	rte_atomic32_t allow_queuing;
@@ -85,7 +89,8 @@ struct vhost_queue {
 	uint64_t missed_pkts;
 	uint64_t rx_bytes;
 	uint64_t tx_bytes;
-};
+	struct vhost_xstats xstats;
+};
 
 struct pmd_internal {
 	char *dev_name;
@@ -127,6 +132,274 @@ struct rte_vhost_vring_state {
 
 static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
 
+enum rte_vhostqueue_rxtx {
+	RTE_VHOSTQUEUE_RX = 0,
+	RTE_VHOSTQUEUE_TX = 1
+};
+
+#define RTE_ETH_VHOST_XSTATS_NAME_SIZE 64
+
+struct rte_vhost_xstats_name_off {
+	char name[RTE_ETH_VHOST_XSTATS_NAME_SIZE];
+	uint64_t offset;
+};
+
+/* "rx_qX_" is prepended to the name string here */
+static const struct rte_vhost_xstats_name_off rte_vhost_rxq_stat_strings[] = {
+	{"good_packets",
+	 offsetof(struct vhost_queue, rx_pkts)},
+	{"total_bytes",
+	 offsetof(struct vhost_queue, rx_bytes)},
+	{"dropped_pkts",
+	 offsetof(struct vhost_queue, missed_pkts)},
+	{"broadcast_packets",
+	 offsetof(struct vhost_queue, xstats.stat[8])},
+	{"multicast_packets",
+	 offsetof(struct vhost_queue, xstats.stat[9])},
+	{"ucast_packets",
+	 offsetof(struct vhost_queue, xstats.stat[10])},
+	{"undersize_errors",
+	 offsetof(struct vhost_queue, xstats.stat[0])},
+	{"size_64_packets",
+	 offsetof(struct vhost_queue, xstats.stat[1])},
+	{"size_65_to_127_packets",
+	 offsetof(struct vhost_queue, xstats.stat[2])},
+	{"size_128_to_255_packets",
+	 offsetof(struct vhost_queue, xstats.stat[3])},
+	{"size_256_to_511_packets",
+	 offsetof(struct vhost_queue, xstats.stat[4])},
+	{"size_512_to_1023_packets",
+	 offsetof(struct vhost_queue, xstats.stat[5])},
+	{"size_1024_to_1522_packets",
+	 offsetof(struct vhost_queue, xstats.stat[6])},
+	{"size_1523_to_max_packets",
+	 offsetof(struct vhost_queue, xstats.stat[7])},
+	{"errors",
+	 offsetof(struct vhost_queue, xstats.stat[11])},
+	{"fragmented_errors",
+	 offsetof(struct vhost_queue, xstats.stat[12])},
+	{"jabber_errors",
+	 offsetof(struct vhost_queue, xstats.stat[13])},
+	{"unknown_protos_packets",
+	 offsetof(struct vhost_queue, xstats.stat[14])},
+};
+
+/* "tx_qX_" is prepended to the name string here */
+static const struct rte_vhost_xstats_name_off rte_vhost_txq_stat_strings[] = {
+	{"good_packets",
+	 offsetof(struct vhost_queue, tx_pkts)},
+	{"total_bytes",
+	 offsetof(struct vhost_queue, tx_bytes)},
+	{"dropped_pkts",
+	 offsetof(struct vhost_queue, missed_pkts)},
+	{"broadcast_packets",
+	 offsetof(struct vhost_queue, xstats.stat[8])},
+	{"multicast_packets",
+	 offsetof(struct vhost_queue, xstats.stat[9])},
+	{"ucast_packets",
+	 offsetof(struct vhost_queue, xstats.stat[10])},
+	{"size_64_packets",
+	 offsetof(struct vhost_queue, xstats.stat[1])},
+	{"size_65_to_127_packets",
+	 offsetof(struct vhost_queue, xstats.stat[2])},
+	{"size_128_to_255_packets",
+	 offsetof(struct vhost_queue, xstats.stat[3])},
+	{"size_256_to_511_packets",
+	 offsetof(struct vhost_queue, xstats.stat[4])},
+	{"size_512_to_1023_packets",
+	 offsetof(struct vhost_queue, xstats.stat[5])},
+	{"size_1024_to_1522_packets",
+	 offsetof(struct vhost_queue, xstats.stat[6])},
+	{"size_1523_to_max_packets",
+	 offsetof(struct vhost_queue, xstats.stat[7])},
+	{"errors",
+	 offsetof(struct vhost_queue, xstats.stat[11])},
+};
+
+#define VHOST_NB_RXQ_XSTATS (sizeof(rte_vhost_rxq_stat_strings) / \
+			     sizeof(rte_vhost_rxq_stat_strings[0]))
+
+#define VHOST_NB_TXQ_XSTATS (sizeof(rte_vhost_txq_stat_strings) / \
+			     sizeof(rte_vhost_txq_stat_strings[0]))
+
+static void
+vhost_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	struct vhost_queue *vqrx = NULL;
+	struct vhost_queue *vqtx = NULL;
+	unsigned int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!dev->data->rx_queues[i])
+			continue;
+		vqrx = (struct vhost_queue *)dev->data->rx_queues[i];
+		vqrx->rx_pkts = 0;
+		vqrx->rx_bytes = 0;
+		vqrx->missed_pkts = 0;
+		memset(&vqrx->xstats, 0, sizeof(vqrx->xstats));
+	}
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (!dev->data->tx_queues[i])
+			continue;
+		vqtx = (struct vhost_queue *)dev->data->tx_queues[i];
+		vqtx->tx_pkts = 0;
+		vqtx->tx_bytes = 0;
+		vqtx->missed_pkts = 0;
+		memset(&vqtx->xstats, 0, sizeof(vqtx->xstats));
+	}
+}
+
+static int
+vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
+			   struct rte_eth_xstat_name *xstats_names,
+			   __rte_unused unsigned int limit)
+{
+	unsigned int i = 0;
+	unsigned int t = 0;
+	int count = 0;
+	int nstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
+		     + dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
+
+	if (xstats_names) {
+		for (i = 0; i < dev->data->nb_rx_queues; i++) {
+			struct vhost_queue *rxvq = dev->data->rx_queues[i];
+
+			if (!rxvq)
+				continue;
+			for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
+				snprintf(xstats_names[count].name,
+					 sizeof(xstats_names[count].name),
+					 "rx_q%u_%s", i,
+					 rte_vhost_rxq_stat_strings[t].name);
+				count++;
+			}
+		}
+		for (i = 0; i < dev->data->nb_tx_queues; i++) {
+			struct vhost_queue *txvq = dev->data->tx_queues[i];
+
+			if (!txvq)
+				continue;
+			for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
+				snprintf(xstats_names[count].name,
+					 sizeof(xstats_names[count].name),
+					 "tx_q%u_%s", i,
+					 rte_vhost_txq_stat_strings[t].name);
+				count++;
+			}
+		}
+		return count;
+	}
+	return nstats;
+}
+
+static int
+vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		     unsigned int n)
+{
+	unsigned int i;
+	unsigned int t;
+	unsigned int count = 0;
+
+	unsigned int nxstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
+			       + dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
+
+	if (n < nxstats)
+		return nxstats;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct vhost_queue *rxvq =
+			(struct vhost_queue *)dev->data->rx_queues[i];
+
+		if (!rxvq)
+			continue;
+
+		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
+			xstats[count].value = *(uint64_t *)(((char *)rxvq)
+				+ rte_vhost_rxq_stat_strings[t].offset);
+			count++;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct vhost_queue *txvq =
+			(struct vhost_queue *)dev->data->tx_queues[i];
+
+		if (!txvq)
+			continue;
+
+		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
+			xstats[count].value = *(uint64_t *)(((char *)txvq)
+				+ rte_vhost_txq_stat_strings[t].offset);
+			count++;
+		}
+	}
+
+	return count;
+}
+
+static void
+vhost_update_packet_xstats(struct vhost_queue *vq,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_rxtx,
+			   uint16_t nb_bufs,
+			   enum rte_vhostqueue_rxtx vqueue_rxtx)
+{
+	uint32_t pkt_len = 0;
+	uint64_t i = 0;
+	uint64_t index;
+	struct ether_addr *ea = NULL;
+	struct vhost_xstats *xstats_update = &vq->xstats;
+
+	for (i = 0; i < nb_rxtx; i++) {
+		pkt_len = bufs[i]->pkt_len;
+		if (pkt_len == 64) {
+			xstats_update->stat[1]++;
+
+		} else if (pkt_len > 64 && pkt_len < 1024) {
+			index = (sizeof(pkt_len) * 8)
+				- __builtin_clz(pkt_len) - 5;
+			xstats_update->stat[index]++;
+		} else {
+			if (pkt_len < 64)
+				xstats_update->stat[0]++;
+			else if (pkt_len <= 1522)
+				xstats_update->stat[6]++;
+			else if (pkt_len > 1522)
+				xstats_update->stat[7]++;
+		}
+
+		ea = rte_pktmbuf_mtod(bufs[i], struct ether_addr *);
+		if (is_multicast_ether_addr(ea)) {
+			if (is_broadcast_ether_addr(ea))
+				/* broadcast++; */
+				xstats_update->stat[8]++;
+			else
+				/* multicast++; */
+				xstats_update->stat[9]++;
+		}
+	}
+	/* unicast = all delivered or dropped packets minus broadcast
+	 * and multicast packets, per RFC 2863
+	 */
+	if (vqueue_rxtx == RTE_VHOSTQUEUE_RX) {
+		xstats_update->stat[10] = vq->rx_pkts + vq->missed_pkts
+					  - (xstats_update->stat[8]
+					     + xstats_update->stat[9]);
+	} else {
+		for (i = nb_rxtx; i < nb_bufs; i++) {
+			ea = rte_pktmbuf_mtod(bufs[i], struct ether_addr *);
+			if (is_multicast_ether_addr(ea)) {
+				if (is_broadcast_ether_addr(ea))
+					xstats_update->stat[8]++;
+				else
+					xstats_update->stat[9]++;
+			}
+		}
+		xstats_update->stat[10] = vq->tx_pkts + vq->missed_pkts
+					  - (xstats_update->stat[8]
+					     + xstats_update->stat[9]);
+	}
+}
+
 static uint16_t
 eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
@@ -152,6 +425,8 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		r->rx_bytes += bufs[i]->pkt_len;
 	}
 
+	vhost_update_packet_xstats(r, bufs, nb_rx, nb_rx, RTE_VHOSTQUEUE_RX);
+
 out:
 	rte_atomic32_set(&r->while_queuing, 0);
 
@@ -182,6 +457,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	for (i = 0; likely(i < nb_tx); i++)
 		r->tx_bytes += bufs[i]->pkt_len;
 
+	vhost_update_packet_xstats(r, bufs, nb_tx, nb_bufs, RTE_VHOSTQUEUE_TX);
+
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
 out:
@@ -682,6 +959,9 @@ static const struct eth_dev_ops ops = {
 	.link_update = eth_link_update,
 	.stats_get = eth_stats_get,
 	.stats_reset = eth_stats_reset,
+	.xstats_reset = vhost_dev_xstats_reset,
+	.xstats_get = vhost_dev_xstats_get,
+	.xstats_get_names = vhost_dev_xstats_get_names,
 };
 
 static int
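A closing note on the size-bucket computation in
vhost_update_packet_xstats(): for 64 < pkt_len < 1024, the expression
(sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5 is a log2 trick that
maps each RFC 2819 power-of-two size range onto consecutive stat[]
slots (2 through 5). A standalone sketch to sanity-check the mapping
(a hypothetical test program, not part of the patch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    	/* One sample from each bucket boundary between 65 and 1023. */
    	uint32_t lens[] = { 65, 127, 128, 255, 256, 511, 512, 1023 };
    	unsigned int i;

    	for (i = 0; i < sizeof(lens) / sizeof(lens[0]); i++) {
    		uint32_t pkt_len = lens[i];
    		/* 32 - clz(x) is floor(log2(x)) + 1, so the result is
    		 * 2 for 65..127, 3 for 128..255, 4 for 256..511 and
    		 * 5 for 512..1023 -- the stat[] slots used above.
    		 */
    		uint64_t index = (sizeof(pkt_len) * 8)
    				 - __builtin_clz(pkt_len) - 5;
    		printf("pkt_len %4u -> stat[%u]\n", pkt_len,
    		       (unsigned int)index);
    	}
    	return 0;
    }

The unicast counter (stat[10]) is likewise derived rather than counted
per packet: following RFC 2863, unicast = (good + dropped packets) -
(broadcast + multicast), which is the subtraction performed at the end
of vhost_update_packet_xstats() above.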