From patchwork Fri Dec 18 11:33:25 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 85446
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Jiayu.Hu@intel.com, YvonneX.Yang@intel.com, Cheng Jiang
Date: Fri, 18 Dec 2020 11:33:25 +0000
Message-Id: <20201218113327.70528-2-Cheng1.jiang@intel.com>
In-Reply-To: <20201218113327.70528-1-Cheng1.jiang@intel.com>
References: <20201218113327.70528-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v1 1/3] examples/vhost: add ioat ring space count and check

Add an ioat ring space count and check it before enqueueing copies: if
the ring does not have enough space for the next async vhost packet
enqueue, return early to prevent an enqueue failure.

Signed-off-by: Cheng Jiang
---
 examples/vhost/ioat.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
index 71d8a1f1f5..b0b04aa453 100644
--- a/examples/vhost/ioat.c
+++ b/examples/vhost/ioat.c
@@ -17,6 +17,7 @@ struct packet_tracker {
 	unsigned short next_read;
 	unsigned short next_write;
 	unsigned short last_remain;
+	unsigned short ioat_space;
 };

 struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];
@@ -113,7 +114,7 @@ open_ioat(const char *value)
 			goto out;
 		}
 		rte_rawdev_start(dev_id);
-
+		cb_tracker[dev_id].ioat_space = IOAT_RING_SIZE;
 		dma_info->nr++;
 		i++;
 	}
@@ -140,13 +141,9 @@ ioat_transfer_data_cb(int vid, uint16_t queue_id,
 			src = descs[i_desc].src;
 			dst = descs[i_desc].dst;
 			i_seg = 0;
+			if (cb_tracker[dev_id].ioat_space < src->nr_segs)
+				break;
 			while (i_seg < src->nr_segs) {
-				/*
-				 * TODO: Assuming that the ring space of the
-				 * IOAT device is large enough, so there is no
-				 * error here, and the actual error handling
-				 * will be added later.
-				 */
 				rte_ioat_enqueue_copy(dev_id,
 					(uintptr_t)(src->iov[i_seg].iov_base)
 						+ src->offset,
@@ -158,7 +155,8 @@ ioat_transfer_data_cb(int vid, uint16_t queue_id,
 				i_seg++;
 			}
 			write &= mask;
-			cb_tracker[dev_id].size_track[write] = i_seg;
+			cb_tracker[dev_id].size_track[write] = src->nr_segs;
+			cb_tracker[dev_id].ioat_space -= src->nr_segs;
 			write++;
 		}
 	} else {
@@ -186,6 +184,7 @@ ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
 		int dev_id = dma_bind[vid].dmas[queue_id * 2
 				+ VIRTIO_RXQ].dev_id;
 		n_seg = rte_ioat_completed_ops(dev_id, 255, dump, dump);
+		cb_tracker[dev_id].ioat_space += n_seg;
 		n_seg += cb_tracker[dev_id].last_remain;
 		if (!n_seg)
 			return 0;
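For illustration only (not part of the applied diff): the accounting the patch
adds boils down to a reserve/release pair around the DMA ring. The sketch below
is a minimal standalone version of that idea; the names (avail_space, RING_SIZE,
reserve_slots, release_slots) are hypothetical and do not come from the example
code.

/* Minimal sketch of the ring-space accounting idea; hypothetical names. */
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 4096			/* stand-in for the IOAT ring size */

static uint16_t avail_space = RING_SIZE;

/* Reserve slots for a multi-segment copy before submitting it.
 * Returning false lets the caller stop early instead of hitting
 * an enqueue failure inside the DMA driver. */
static bool
reserve_slots(uint16_t nr_segs)
{
	if (avail_space < nr_segs)
		return false;
	avail_space -= nr_segs;
	return true;
}

/* Give the slots back once the device reports completed operations. */
static void
release_slots(uint16_t nr_completed)
{
	avail_space += nr_completed;
}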
From patchwork Fri Dec 18 11:33:26 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 85447
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Jiayu.Hu@intel.com, YvonneX.Yang@intel.com, Cheng Jiang
Date: Fri, 18 Dec 2020 11:33:26 +0000
Message-Id: <20201218113327.70528-3-Cheng1.jiang@intel.com>
In-Reply-To: <20201218113327.70528-1-Cheng1.jiang@intel.com>
References: <20201218113327.70528-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v1 2/3] examples/vhost: optimize vhost data path for batch

Change the vm2vm data path to batch enqueues for better performance.
Packets destined for another vhost device are now buffered in a
per-device table and flushed as a burst, either when the table is full
or after a drain timeout.

Signed-off-by: Cheng Jiang
---
 examples/vhost/main.c | 84 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 75 insertions(+), 9 deletions(-)
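For illustration only (not part of the diff below): the drain timeout reused by
this patch converts a microsecond budget into TSC cycles, the same arithmetic
the existing MBUF_TABLE_DRAIN_TSC macro performs. A minimal sketch of that
conversion; the constants here are hypothetical stand-ins for
rte_get_tsc_hz() and BURST_TX_DRAIN_US.

/* Sketch of the microseconds-to-TSC-cycles conversion; hypothetical values. */
#include <stdint.h>

#define US_PER_S		1000000ULL
#define BURST_TX_DRAIN_US	100ULL	/* drain at most every ~100 us */

static uint64_t
drain_timeout_cycles(uint64_t tsc_hz)
{
	/* Round the cycles-per-microsecond figure up so the timeout
	 * never collapses to zero on a slow clock. */
	return (tsc_hz + US_PER_S - 1) / US_PER_S * BURST_TX_DRAIN_US;
}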
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 8d8c3038bf..28226a4ff7 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -182,6 +182,11 @@ struct mbuf_table {
 /* TX queue for each data core. */
 struct mbuf_table lcore_tx_queue[RTE_MAX_LCORE];

+static uint64_t vhost_tsc[MAX_VHOST_DEVICE];
+
+/* TX queue for each vhost device. */
+struct mbuf_table vhost_m_table[MAX_VHOST_DEVICE];
+
 #define MBUF_TABLE_DRAIN_TSC	((rte_get_tsc_hz() + US_PER_S - 1) \
 				 / US_PER_S * BURST_TX_DRAIN_US)
 #define VLAN_HLEN       4
@@ -804,6 +809,13 @@ unlink_vmdq(struct vhost_dev *vdev)
 	}
 }

+static inline void
+free_pkts(struct rte_mbuf **pkts, uint16_t n)
+{
+	while (n--)
+		rte_pktmbuf_free(pkts[n]);
+}
+
 static __rte_always_inline void
 virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 	    struct rte_mbuf *m)
@@ -837,6 +849,40 @@ virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 	}
 }

+static __rte_always_inline void
+drain_vhost(struct vhost_dev *dst_vdev, struct rte_mbuf **m, uint16_t nr_xmit)
+{
+	uint16_t ret, nr_cpl;
+	struct rte_mbuf *m_cpl[MAX_PKT_BURST];
+
+	if (builtin_net_driver) {
+		ret = vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, m, nr_xmit);
+	} else if (async_vhost_driver) {
+		ret = rte_vhost_submit_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
+						m, nr_xmit);
+		dst_vdev->nr_async_pkts += ret;
+		free_pkts(&m[ret], nr_xmit - ret);
+
+		while (likely(dst_vdev->nr_async_pkts)) {
+			nr_cpl = rte_vhost_poll_enqueue_completed(dst_vdev->vid,
+					VIRTIO_RXQ, m_cpl, MAX_PKT_BURST);
+			dst_vdev->nr_async_pkts -= nr_cpl;
+			free_pkts(m_cpl, nr_cpl);
+		}
+	} else {
+		ret = rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
+						m, nr_xmit);
+	}
+
+	if (enable_stats) {
+		rte_atomic64_add(&dst_vdev->stats.rx_total_atomic, nr_xmit);
+		rte_atomic64_add(&dst_vdev->stats.rx_atomic, ret);
+	}
+
+	if (!async_vhost_driver)
+		free_pkts(m, nr_xmit);
+}
+
 /*
  * Check if the packet destination MAC address is for a local device. If so then put
  * the packet on that devices RX queue. If not then return.
@@ -846,6 +892,7 @@ virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
 {
 	struct rte_ether_hdr *pkt_hdr;
 	struct vhost_dev *dst_vdev;
+	struct mbuf_table *vhost_txq;

 	pkt_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);

@@ -869,7 +916,19 @@ virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
 		return 0;
 	}

-	virtio_xmit(dst_vdev, vdev, m);
+	vhost_txq = &vhost_m_table[dst_vdev->vid];
+	vhost_txq->m_table[vhost_txq->len++] = m;
+
+	if (enable_stats) {
+		vdev->stats.tx_total++;
+		vdev->stats.tx++;
+	}
+
+	if (unlikely(vhost_txq->len == MAX_PKT_BURST)) {
+		drain_vhost(dst_vdev, vhost_txq->m_table, MAX_PKT_BURST);
+		vhost_txq->len = 0;
+		vhost_tsc[dst_vdev->vid] = rte_rdtsc();
+	}
 	return 0;
 }

@@ -940,13 +999,6 @@ static void virtio_tx_offload(struct rte_mbuf *m)
 	tcp_hdr->cksum = get_psd_sum(l3_hdr, m->ol_flags);
 }

-static inline void
-free_pkts(struct rte_mbuf **pkts, uint16_t n)
-{
-	while (n--)
-		rte_pktmbuf_free(pkts[n]);
-}
-
 static __rte_always_inline void
 do_drain_mbuf_table(struct mbuf_table *tx_q)
 {
@@ -986,7 +1038,6 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)

 	/*check if destination is local VM*/
 	if ((vm2vm_mode == VM2VM_SOFTWARE) && (virtio_tx_local(vdev, m) == 0)) {
-		rte_pktmbuf_free(m);
 		return;
 	}

@@ -1144,8 +1195,10 @@ static __rte_always_inline void
 drain_virtio_tx(struct vhost_dev *vdev)
 {
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
+	struct mbuf_table *vhost_txq;
 	uint16_t count;
 	uint16_t i;
+	uint64_t cur_tsc;

 	if (builtin_net_driver) {
 		count = vs_dequeue_pkts(vdev, VIRTIO_TXQ, mbuf_pool,
@@ -1163,6 +1216,19 @@ drain_virtio_tx(struct vhost_dev *vdev)

 	for (i = 0; i < count; ++i)
 		virtio_tx_route(vdev, pkts[i], vlan_tags[vdev->vid]);
+
+	vhost_txq = &vhost_m_table[vdev->vid];
+	cur_tsc = rte_rdtsc();
+	if (unlikely(cur_tsc - vhost_tsc[vdev->vid] > MBUF_TABLE_DRAIN_TSC)) {
+		if (vhost_txq->len) {
+			RTE_LOG_DP(DEBUG, VHOST_DATA,
+				"Vhost TX queue drained after timeout with burst size %u\n",
+				vhost_txq->len);
+			drain_vhost(vdev, vhost_txq->m_table, vhost_txq->len);
+			vhost_txq->len = 0;
+			vhost_tsc[vdev->vid] = cur_tsc;
+		}
+	}
 }

 /*
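For illustration only (not part of the diff above): the vm2vm change follows a
common "flush when full or stale" pattern. The sketch below is a generic version
of it; the names (burst_queue, queue_packet, flush_fn) are hypothetical, and the
patch applies the same idea per destination vhost device using MAX_PKT_BURST and
MBUF_TABLE_DRAIN_TSC.

/* Generic "flush when full or stale" batching sketch; hypothetical names. */
#include <stdint.h>

#define BURST_SIZE 32

struct burst_queue {
	void     *pkts[BURST_SIZE];
	uint16_t  len;
	uint64_t  last_flush_tsc;
};

typedef void (*flush_fn)(void **pkts, uint16_t n);

static void
queue_packet(struct burst_queue *q, void *pkt, uint64_t now_tsc,
	     uint64_t drain_tsc, flush_fn flush)
{
	q->pkts[q->len++] = pkt;

	/* Flush on a full burst, or when the queued packets have waited
	 * longer than the drain timeout. */
	if (q->len == BURST_SIZE || now_tsc - q->last_flush_tsc > drain_tsc) {
		flush(q->pkts, q->len);
		q->len = 0;
		q->last_flush_tsc = now_tsc;
	}
}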
From patchwork Fri Dec 18 11:33:27 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 85448
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Jiayu.Hu@intel.com, YvonneX.Yang@intel.com, Cheng Jiang
Date: Fri, 18 Dec 2020 11:33:27 +0000
Message-Id: <20201218113327.70528-4-Cheng1.jiang@intel.com>
In-Reply-To: <20201218113327.70528-1-Cheng1.jiang@intel.com>
References: <20201218113327.70528-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v1 3/3] examples/vhost: refactor vhost async data path

Support the latest async vhost API, refactor the vhost async data path,
and clean up some code.
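For illustration only (not part of the diff below): the refactor keeps a bounded
count of in-flight async packets and polls for completions before submitting
more. A minimal sketch of that bookkeeping with C11 atomics; the names
(nr_inflight, INFLIGHT_LIMIT, account_submit, throttle) are hypothetical and
only mirror the shape of the nr_async_pkts handling in the patch.

/* Bounded in-flight accounting sketch; hypothetical names, C11 atomics. */
#include <stdatomic.h>
#include <stdint.h>

#define INFLIGHT_LIMIT 128

static atomic_int_least16_t nr_inflight;

/* Called after a submit that reports how many packets were finished
 * immediately by CPU copy (cpu_done) versus actually offloaded. */
static void
account_submit(uint16_t submitted, uint16_t cpu_done)
{
	atomic_fetch_add(&nr_inflight, (int_least16_t)(submitted - cpu_done));
}

/* Called with the number of completions reported by a poll. */
static void
account_complete(uint16_t completed)
{
	atomic_fetch_sub(&nr_inflight, (int_least16_t)completed);
}

/* Poll until the backlog drops below the limit, as the patch does
 * before submitting a new burst. */
static void
throttle(void (*poll_completions)(void))
{
	poll_completions();
	while (atomic_load(&nr_inflight) >= INFLIGHT_LIMIT)
		poll_completions();
}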
Signed-off-by: Cheng Jiang
---
 examples/vhost/main.c | 88 ++++++++++++++++++++-----------------------
 examples/vhost/main.h |  2 +-
 2 files changed, 42 insertions(+), 48 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 28226a4ff7..0113147876 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -817,26 +817,26 @@ free_pkts(struct rte_mbuf **pkts, uint16_t n)
 }

 static __rte_always_inline void
-virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
+complete_async_pkts(struct vhost_dev *vdev)
+{
+	struct rte_mbuf *p_cpl[MAX_PKT_BURST];
+	uint16_t complete_count;
+
+	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
+					VIRTIO_RXQ, p_cpl, MAX_PKT_BURST);
+	rte_atomic16_sub(&vdev->nr_async_pkts, complete_count);
+	if (complete_count)
+		free_pkts(p_cpl, complete_count);
+}
+
+static __rte_always_inline void
+sync_virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 	    struct rte_mbuf *m)
 {
 	uint16_t ret;
-	struct rte_mbuf *m_cpl[1];

 	if (builtin_net_driver) {
 		ret = vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, &m, 1);
-	} else if (async_vhost_driver) {
-		ret = rte_vhost_submit_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
-						&m, 1);
-
-		if (likely(ret))
-			dst_vdev->nr_async_pkts++;
-
-		while (likely(dst_vdev->nr_async_pkts)) {
-			if (rte_vhost_poll_enqueue_completed(dst_vdev->vid,
-					VIRTIO_RXQ, m_cpl, 1))
-				dst_vdev->nr_async_pkts--;
-		}
 	} else {
 		ret = rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ, &m, 1);
 	}
@@ -850,25 +850,25 @@ virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 }

 static __rte_always_inline void
-drain_vhost(struct vhost_dev *dst_vdev, struct rte_mbuf **m, uint16_t nr_xmit)
+drain_vhost(struct vhost_dev *dst_vdev)
 {
-	uint16_t ret, nr_cpl;
-	struct rte_mbuf *m_cpl[MAX_PKT_BURST];
+	uint16_t ret;
+	uint16_t nr_xmit = vhost_m_table[dst_vdev->vid].len;
+	struct rte_mbuf **m = vhost_m_table[dst_vdev->vid].m_table;

 	if (builtin_net_driver) {
 		ret = vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, m, nr_xmit);
 	} else if (async_vhost_driver) {
+		uint32_t cpu_cpl_nr;
+		struct rte_mbuf *m_cpu_cpl[nr_xmit];
+		complete_async_pkts(dst_vdev);
+		while (rte_atomic16_read(&dst_vdev->nr_async_pkts) >= 128)
+			complete_async_pkts(dst_vdev);
+
 		ret = rte_vhost_submit_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
-						m, nr_xmit);
-		dst_vdev->nr_async_pkts += ret;
+					m, nr_xmit, m_cpu_cpl, &cpu_cpl_nr);
+		rte_atomic16_add(&dst_vdev->nr_async_pkts, ret - cpu_cpl_nr);
 		free_pkts(&m[ret], nr_xmit - ret);
-
-		while (likely(dst_vdev->nr_async_pkts)) {
-			nr_cpl = rte_vhost_poll_enqueue_completed(dst_vdev->vid,
-					VIRTIO_RXQ, m_cpl, MAX_PKT_BURST);
-			dst_vdev->nr_async_pkts -= nr_cpl;
-			free_pkts(m_cpl, nr_cpl);
-		}
 	} else {
 		ret = rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
 				m, nr_xmit);
@@ -925,7 +925,7 @@ virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
 	}

 	if (unlikely(vhost_txq->len == MAX_PKT_BURST)) {
-		drain_vhost(dst_vdev, vhost_txq->m_table, MAX_PKT_BURST);
+		drain_vhost(dst_vdev);
 		vhost_txq->len = 0;
 		vhost_tsc[dst_vdev->vid] = rte_rdtsc();
 	}
@@ -1031,7 +1031,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)

 		TAILQ_FOREACH(vdev2, &vhost_dev_list, global_vdev_entry) {
 			if (vdev2 != vdev)
-				virtio_xmit(vdev2, vdev, m);
+				sync_virtio_xmit(vdev2, vdev, m);
 		}
 		goto queue2nic;
 	}
@@ -1124,31 +1124,17 @@ drain_mbuf_table(struct mbuf_table *tx_q)
 	}
 }

-static __rte_always_inline void
-complete_async_pkts(struct vhost_dev *vdev, uint16_t qid)
-{
-	struct rte_mbuf *p_cpl[MAX_PKT_BURST];
-	uint16_t complete_count;
-
-	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
-					qid, p_cpl, MAX_PKT_BURST);
-	vdev->nr_async_pkts -= complete_count;
-	if (complete_count)
-		free_pkts(p_cpl, complete_count);
-}
-
 static __rte_always_inline void
 drain_eth_rx(struct vhost_dev *vdev)
 {
 	uint16_t rx_count, enqueue_count;
+	uint32_t cpu_cpl_nr;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
+	struct rte_mbuf *m_cpu_cpl[MAX_PKT_BURST];

 	rx_count = rte_eth_rx_burst(ports[0], vdev->vmdq_rx_q,
 				    pkts, MAX_PKT_BURST);

-	while (likely(vdev->nr_async_pkts))
-		complete_async_pkts(vdev, VIRTIO_RXQ);
-
 	if (!rx_count)
 		return;

@@ -1170,13 +1156,21 @@ drain_eth_rx(struct vhost_dev *vdev)
 		}
 	}

+	complete_async_pkts(vdev);
+	while (rte_atomic16_read(&vdev->nr_async_pkts) >= 128)
+		complete_async_pkts(vdev);
+
 	if (builtin_net_driver) {
 		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
 						pkts, rx_count);
 	} else if (async_vhost_driver) {
 		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
-					VIRTIO_RXQ, pkts, rx_count);
-		vdev->nr_async_pkts += enqueue_count;
+					VIRTIO_RXQ, pkts, rx_count,
+					m_cpu_cpl, &cpu_cpl_nr);
+		rte_atomic16_add(&vdev->nr_async_pkts,
+					enqueue_count - cpu_cpl_nr);
+		if (cpu_cpl_nr)
+			free_pkts(m_cpu_cpl, cpu_cpl_nr);
 	} else {
 		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
 					pkts, rx_count);
@@ -1224,7 +1218,7 @@ drain_virtio_tx(struct vhost_dev *vdev)
 			RTE_LOG_DP(DEBUG, VHOST_DATA,
 				"Vhost TX queue drained after timeout with burst size %u\n",
 				vhost_txq->len);
-			drain_vhost(vdev, vhost_txq->m_table, vhost_txq->len);
+			drain_vhost(vdev);
 			vhost_txq->len = 0;
 			vhost_tsc[vdev->vid] = cur_tsc;
 		}
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 4317b6ae81..d33ddb411b 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -51,7 +51,7 @@ struct vhost_dev {
 	uint64_t features;
 	size_t hdr_len;
 	uint16_t nr_vrings;
-	uint16_t nr_async_pkts;
+	rte_atomic16_t nr_async_pkts;
 	struct rte_vhost_memory *mem;
 	struct device_statistics stats;
 	TAILQ_ENTRY(vhost_dev) global_vdev_entry;
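For illustration only (not part of the series): complete_async_pkts() above is a
"poll completions, then free the finished buffers" helper. A generic sketch of
that shape; the callback types poll_fn and free_fn are hypothetical stand-ins
for the vhost completion-poll API and rte_pktmbuf_free().

/* Generic "reap completions and free buffers" sketch; hypothetical names. */
#include <stdint.h>

#define BURST 32

typedef uint16_t (*poll_fn)(void **bufs, uint16_t max);
typedef void (*free_fn)(void *buf);

static uint16_t
reap_completions(poll_fn poll, free_fn buf_free)
{
	void *done[BURST];
	uint16_t n, i;

	/* Ask the backend how many submitted operations finished ... */
	n = poll(done, BURST);
	/* ... and release the buffers they were holding. */
	for (i = 0; i < n; i++)
		buf_free(done[i]);
	return n;
}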