From patchwork Sun Sep 26 12:56:23 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gaoxiang Liu
X-Patchwork-Id: 99695
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Gaoxiang Liu <gaoxiangliu0@163.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, liugaoxiang@huawei.com, Gaoxiang Liu
Date: Sun, 26 Sep 2021 20:56:23 +0800
Message-Id: <20210926125623.833-1-gaoxiangliu0@163.com>
X-Mailer: git-send-email 2.32.0
Subject: [dpdk-dev] [PATCH] net/vhost: merge vhost stats loop in vhost Tx/Rx

To improve performance in vhost Tx/Rx, merge the vhost stats loop.
eth_vhost_tx iterates twice over the packets it sends; the two loops can
be merged into one. eth_vhost_rx has the same issue as Tx.

Fixes: 4d6cf2ac93dc ("net/vhost: add extended statistics")

Signed-off-by: Gaoxiang Liu <gaoxiangliu0@163.com>
---
 drivers/net/vhost/rte_eth_vhost.c | 62 ++++++++++++++-----------------
 1 file changed, 28 insertions(+), 34 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..e451ee2f55 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -336,38 +336,29 @@ vhost_count_xcast_packets(struct vhost_queue *vq,
 }
 
 static void
-vhost_update_packet_xstats(struct vhost_queue *vq, struct rte_mbuf **bufs,
-		uint16_t count, uint64_t nb_bytes,
-		uint64_t nb_missed)
+vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
 {
 	uint32_t pkt_len = 0;
-	uint64_t i = 0;
 	uint64_t index;
 	struct vhost_stats *pstats = &vq->stats;
 
-	pstats->xstats[VHOST_BYTE] += nb_bytes;
-	pstats->xstats[VHOST_MISSED_PKT] += nb_missed;
-	pstats->xstats[VHOST_UNICAST_PKT] += nb_missed;
-
-	for (i = 0; i < count ; i++) {
-		pstats->xstats[VHOST_PKT]++;
-		pkt_len = bufs[i]->pkt_len;
-		if (pkt_len == 64) {
-			pstats->xstats[VHOST_64_PKT]++;
-		} else if (pkt_len > 64 && pkt_len < 1024) {
-			index = (sizeof(pkt_len) * 8)
-				- __builtin_clz(pkt_len) - 5;
-			pstats->xstats[index]++;
-		} else {
-			if (pkt_len < 64)
-				pstats->xstats[VHOST_UNDERSIZE_PKT]++;
-			else if (pkt_len <= 1522)
-				pstats->xstats[VHOST_1024_TO_1522_PKT]++;
-			else if (pkt_len > 1522)
-				pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
-		}
-		vhost_count_xcast_packets(vq, bufs[i]);
+
+	pstats->xstats[VHOST_PKT]++;
+	pkt_len = buf->pkt_len;
+	if (pkt_len == 64) {
+		pstats->xstats[VHOST_64_PKT]++;
+	} else if (pkt_len > 64 && pkt_len < 1024) {
+		index = (sizeof(pkt_len) * 8)
+			- __builtin_clz(pkt_len) - 5;
+		pstats->xstats[index]++;
+	} else {
+		if (pkt_len < 64)
+			pstats->xstats[VHOST_UNDERSIZE_PKT]++;
+		else if (pkt_len <= 1522)
+			pstats->xstats[VHOST_1024_TO_1522_PKT]++;
+		else if (pkt_len > 1522)
+			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
 	}
+	vhost_count_xcast_packets(vq, buf);
 }
 
 static uint16_t
@@ -376,7 +367,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	struct vhost_queue *r = q;
 	uint16_t i, nb_rx = 0;
 	uint16_t nb_receive = nb_bufs;
-	uint64_t nb_bytes = 0;
 
 	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
 		return 0;
@@ -411,11 +401,11 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		if (r->internal->vlan_strip)
 			rte_vlan_strip(bufs[i]);
 
-		nb_bytes += bufs[i]->pkt_len;
-	}
+		r->stats.bytes += bufs[i]->pkt_len;
+		r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
 
-	r->stats.bytes += nb_bytes;
-	vhost_update_packet_xstats(r, bufs, nb_rx, nb_bytes, 0);
+		vhost_update_single_packet_xstats(r, bufs[i]);
+	}
 
 out:
 	rte_atomic32_set(&r->while_queuing, 0);
@@ -471,16 +461,20 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			break;
 	}
 
-	for (i = 0; likely(i < nb_tx); i++)
+	for (i = 0; likely(i < nb_tx); i++) {
 		nb_bytes += bufs[i]->pkt_len;
+		vhost_update_single_packet_xstats(r, bufs[i]);
+	}
 
 	nb_missed = nb_bufs - nb_tx;
 
 	r->stats.pkts += nb_tx;
 	r->stats.bytes += nb_bytes;
-	r->stats.missed_pkts += nb_bufs - nb_tx;
+	r->stats.missed_pkts += nb_missed;
 
-	vhost_update_packet_xstats(r, bufs, nb_tx, nb_bytes, nb_missed);
+	r->stats.xstats[VHOST_BYTE] += nb_bytes;
+	r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
+	r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
 
 	/* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
 	 * ifHCOutBroadcastPkts counters are increased when packets are not