From patchwork Fri Sep 17 08:12:37 2021
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 99081
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 xuan.ding@intel.com, cheng1.jiang@intel.com, wenwux.ma@intel.com,
 yvonnex.yang@intel.com, sunil.pai.g@intel.com
Date: Fri, 17 Sep 2021 08:12:37 +0000
Message-Id: <20210917081238.73990-2-yuanx.wang@intel.com>
In-Reply-To: <20210917081238.73990-1-yuanx.wang@intel.com>
References: <20210909065807.812145-1-yuanx.wang@intel.com>
 <20210917081238.73990-1-yuanx.wang@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/2] vhost: support to clear in-flight packets
 for async dequeue
rte_vhost_clear_queue_thread_unsafe() currently supports clearing in-flight
packets for async enqueue only. Now that async dequeue is supported, this
API should cover async dequeue as well.

Signed-off-by: Yuan Wang
---
 lib/vhost/virtio_net.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 4bc69b9081..cc84a9d21e 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -27,6 +27,11 @@
 
 #define VHOST_ASYNC_BATCH_THRESHOLD 32
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev,
+		struct vhost_virtqueue *vq, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags);
+
 static __rte_always_inline bool
 rxvq_is_mergeable(struct virtio_net *dev)
 {
@@ -2120,7 +2125,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	if (unlikely(queue_id >= dev->nr_vring)) {
 		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
 			dev->vid, __func__, queue_id);
 		return 0;
@@ -2134,7 +2139,17 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, queue_id, pkts,
+				count, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
 
 	return n_pkts_cpl;
 }

From patchwork Fri Sep 17 08:12:38 2021
X-Patchwork-Submitter: "Wang, YuanX"
X-Patchwork-Id: 99083
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 xuan.ding@intel.com, cheng1.jiang@intel.com, wenwux.ma@intel.com,
 yvonnex.yang@intel.com, sunil.pai.g@intel.com
Date: Fri, 17 Sep 2021 08:12:38 +0000
Message-Id: <20210917081238.73990-3-yuanx.wang@intel.com>
In-Reply-To: <20210917081238.73990-1-yuanx.wang@intel.com>
References: <20210909065807.812145-1-yuanx.wang@intel.com>
 <20210917081238.73990-1-yuanx.wang@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/2] vhost: add thread-safe API for clearing
 in-flight packets in async vhost
This patch adds a thread-safe version of the API for clearing in-flight
packets.

Signed-off-by: Yuan Wang
---
 doc/guides/prog_guide/vhost_lib.rst |  8 ++++-
 lib/vhost/rte_vhost_async.h         | 21 +++++++++++++
 lib/vhost/version.map               |  1 +
 lib/vhost/virtio_net.c              | 49 +++++++++++++++++++++++++++++
 4 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 9ed544db7a..bc21c879f3 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -300,7 +300,13 @@ The following is an overview of some key Vhost API functions:
 
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count)``
 
-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to async channel in vhost
+  async data path without performing any locking. Completed packets are
+  returned to applications through ``pkts``.
+
+* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count)``
+
+  Clear in-flight packets which are submitted to async channel in vhost async data
   path. Completed packets are returned to applications through ``pkts``.
 
 * ``rte_vhost_async_try_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count, nr_inflight)``
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 973efa19b1..887fc2fa47 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -256,6 +256,27 @@ int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
 __rte_experimental
 uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count);
+
+/**
+ * This function checks async completion status and clears packets for
+ * a specific vhost device queue. Packets which are in flight will be
+ * returned in an array.
+ *
+ * @param vid
+ *  ID of vhost device to clear data
+ * @param queue_id
+ *  Queue id to clear data
+ * @param pkts
+ *  Blank array to get return packet pointer
+ * @param count
+ *  Size of the packet array
+ * @return
+ *  Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count);
+
 /**
  * This function tries to receive packets from the guest with offloading
  * copies to the async channel. The packets that are transfer completed
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 8eb7e92c32..b87d5906b8 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -88,4 +88,5 @@ EXPERIMENTAL {
 
 	# added in 21.11
 	rte_vhost_async_try_dequeue_burst;
+	rte_vhost_clear_queue;
 };
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index cc84a9d21e..e7292332a8 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2154,6 +2154,55 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	return n_pkts_cpl;
 }
 
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl = 0;
+
+	if (!dev)
+		return 0;
+
+	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
+	if (unlikely(queue_id >= dev->nr_vring)) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (unlikely(!vq->async_registered)) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: async not registered for queue id %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (!rte_spinlock_trylock(&vq->access_lock)) {
+		VHOST_LOG_DATA(ERR,
+			"(%d) %s: failed to clear async queue id %d, virtqueue busy.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, queue_id, pkts,
+				count, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return n_pkts_cpl;
+}
+
 static __rte_always_inline uint32_t
 virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint32_t count)