From patchwork Thu Jul 22 04:09:06 2021
X-Patchwork-Submitter: "Jiang, Cheng1" <cheng1.jiang@intel.com>
X-Patchwork-Id: 96187
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang <cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, Chenbo.Xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, yvonnex.yang@intel.com,
 Cheng Jiang <cheng1.jiang@intel.com>
Date: Thu, 22 Jul 2021 04:09:06 +0000
Message-Id: <20210722040907.20468-5-cheng1.jiang@intel.com>
In-Reply-To: <20210722040907.20468-1-cheng1.jiang@intel.com>
References: <20210602042802.31943-1-cheng1.jiang@intel.com>
 <20210722040907.20468-1-cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v7 4/5] examples/vhost: handle memory hotplug for
 async vhost

When guest memory is hotplugged, a vhost application that enables DMA
acceleration must stop all DMA transfers before vhost re-maps the guest
memory. To accomplish that, this patch makes the following changes in
the vhost sample:

1. Add an inflight packets count per device.
2. Add a vring_state_changed() callback.
3. Clear the inflight packets in destroy_device() and
   vring_state_changed(), using the loop sketched below.
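[Editor's note] The clearing loop referenced in item 3 appears verbatim in
both destroy_device() and vring_state_changed() in the diff. As a reading
aid only (not part of the patch), here is the same pattern factored into a
standalone helper; the name drain_inflight_pkts() is hypothetical, while
rte_vhost_clear_queue_thread_unsafe(), free_pkts() and the pkts_inflight
counter are exactly what the diff uses:

	/* Hypothetical helper illustrating the clear process in this
	 * patch: repeatedly reap packets whose DMA copies have finished,
	 * free them, and decrement the per-device inflight counter until
	 * no transfer remains. This is only safe while the data path for
	 * the queue is stopped, hence the thread-unsafe clear API.
	 */
	static void
	drain_inflight_pkts(struct vhost_dev *vdev, int vid, uint16_t queue_id)
	{
		uint16_t n_pkt = 0;
		struct rte_mbuf *m_cpl[vdev->pkts_inflight];

		while (vdev->pkts_inflight) {
			n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
						m_cpl, vdev->pkts_inflight);
			free_pkts(m_cpl, n_pkt);
			__atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
						__ATOMIC_SEQ_CST);
		}
	}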
Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 examples/vhost/main.c | 55 +++++++++++++++++++++++++++++++++++++++++--
 examples/vhost/main.h |  1 +
 2 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 9cd855a696..bc3d71c898 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -851,8 +851,11 @@ complete_async_pkts(struct vhost_dev *vdev)

 	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
 					VIRTIO_RXQ, p_cpl, MAX_PKT_BURST);
-	if (complete_count)
+	if (complete_count) {
 		free_pkts(p_cpl, complete_count);
+		__atomic_sub_fetch(&vdev->pkts_inflight, complete_count, __ATOMIC_SEQ_CST);
+	}
+
 }

 static __rte_always_inline void
@@ -895,6 +898,7 @@ drain_vhost(struct vhost_dev *vdev)
 	complete_async_pkts(vdev);
 	ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ,
 				m, nr_xmit, m_cpu_cpl, &cpu_cpl_nr);
+	__atomic_add_fetch(&vdev->pkts_inflight, ret - cpu_cpl_nr, __ATOMIC_SEQ_CST);

 	if (cpu_cpl_nr)
 		free_pkts(m_cpu_cpl, cpu_cpl_nr);
@@ -1226,6 +1230,9 @@ drain_eth_rx(struct vhost_dev *vdev)
 		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
 					VIRTIO_RXQ, pkts, rx_count,
 					m_cpu_cpl, &cpu_cpl_nr);
+		__atomic_add_fetch(&vdev->pkts_inflight, enqueue_count - cpu_cpl_nr,
+					__ATOMIC_SEQ_CST);
+
 		if (cpu_cpl_nr)
 			free_pkts(m_cpu_cpl, cpu_cpl_nr);

@@ -1397,8 +1404,19 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);

-	if (async_vhost_driver)
+	if (async_vhost_driver) {
+		uint16_t n_pkt = 0;
+		struct rte_mbuf *m_cpl[vdev->pkts_inflight];
+
+		while (vdev->pkts_inflight) {
+			n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
+						m_cpl, vdev->pkts_inflight);
+			free_pkts(m_cpl, n_pkt);
+			__atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
+		}
+
 		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
+	}

 	rte_free(vdev);
 }
@@ -1487,6 +1505,38 @@ new_device(int vid)
 	return 0;
 }

+static int
+vring_state_changed(int vid, uint16_t queue_id, int enable)
+{
+	struct vhost_dev *vdev = NULL;
+
+	TAILQ_FOREACH(vdev, &vhost_dev_list, global_vdev_entry) {
+		if (vdev->vid == vid)
+			break;
+	}
+	if (!vdev)
+		return -1;
+
+	if (queue_id != VIRTIO_RXQ)
+		return 0;
+
+	if (async_vhost_driver) {
+		if (!enable) {
+			uint16_t n_pkt = 0;
+			struct rte_mbuf *m_cpl[vdev->pkts_inflight];
+
+			while (vdev->pkts_inflight) {
+				n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
+							m_cpl, vdev->pkts_inflight);
+				free_pkts(m_cpl, n_pkt);
+				__atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
+			}
+		}
+	}
+
+	return 0;
+}
+
 /*
  * These callback allow devices to be added to the data core when configuration
  * has been fully complete.
@@ -1495,6 +1545,7 @@ static const struct vhost_device_ops virtio_net_device_ops =
 {
 	.new_device = new_device,
 	.destroy_device = destroy_device,
+	.vring_state_changed = vring_state_changed,
 };

 /*
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 0ccdce4b4a..e7b1ac60a6 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -51,6 +51,7 @@ struct vhost_dev {
 	uint64_t features;
 	size_t hdr_len;
 	uint16_t nr_vrings;
+	uint16_t pkts_inflight;
 	struct rte_vhost_memory *mem;
 	struct device_statistics stats;
 	TAILQ_ENTRY(vhost_dev) global_vdev_entry;
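[Editor's note] The ops table extended above only takes effect once the
sample registers it with the vhost library, which examples/vhost/main.c
already does when it creates each vhost socket. A minimal sketch of that
registration, assuming a socket path string `file` as in the sample:

	/* Attach the callbacks (including the new vring_state_changed)
	 * to a vhost socket; without this registration the library
	 * never invokes them.
	 */
	if (rte_vhost_driver_callback_register(file,
			&virtio_net_device_ops) != 0)
		rte_exit(EXIT_FAILURE,
			"failed to register vhost driver callbacks.\n");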