From patchwork Wed Sep 22 08:55:45 2021
X-Patchwork-Submitter: "Wang, YuanX" <yuanx.wang@intel.com>
X-Patchwork-Id: 99408
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang <yuanx.wang@intel.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 xuan.ding@intel.com, cheng1.jiang@intel.com, wenwux.ma@intel.com,
 yvonnex.yang@intel.com, sunil.pai.g@intel.com
Date: Wed, 22 Sep 2021 08:55:45 +0000
Message-Id: <20210922085546.54758-2-yuanx.wang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210922085546.54758-1-yuanx.wang@intel.com>
References: <20210909065807.812145-1-yuanx.wang@intel.com>
 <20210922085546.54758-1-yuanx.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/2] vhost: support clearing in-flight packets
 for async dequeue

rte_vhost_clear_queue_thread_unsafe() currently clears in-flight packets
for async enqueue only. Now that async dequeue is supported, this API
should clear in-flight dequeue packets as well.
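As a usage sketch, not part of the patch itself: once the application has
quiesced its datapath threads, both rings of a queue pair can be drained
with the thread-unsafe variant. The helper below is a minimal sketch under
assumptions; the burst size, the queue-pair count of one, and freeing the
returned mbufs are illustrative choices, and rte_vhost_async_get_inflight()
only bounds the loop.

    #include <rte_mbuf.h>
    #include <rte_vhost_async.h>

    #define CLEAR_BURST 32	/* illustrative burst size */

    /* Drain both rings of queue pair 0 once no other thread touches
     * them; per this patch, even queue ids are enqueue (RX) rings and
     * odd ids are dequeue (TX) rings. */
    static void
    drain_vhost_async_queues(int vid)
    {
    	struct rte_mbuf *pkts[CLEAR_BURST];
    	uint16_t q, n;

    	for (q = 0; q < 2; q++) {
    		while (rte_vhost_async_get_inflight(vid, q) > 0) {
    			n = rte_vhost_clear_queue_thread_unsafe(vid, q,
    					pkts, CLEAR_BURST);
    			while (n--)
    				rte_pktmbuf_free(pkts[n]);
    		}
    	}
    }
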
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
---
 lib/vhost/virtio_net.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 39399d2d31..21afcd1854 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -27,6 +27,11 @@
 
 #define VHOST_ASYNC_BATCH_THRESHOLD 32
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev,
+		struct vhost_virtqueue *vq, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags);
+
 static __rte_always_inline bool
 rxvq_is_mergeable(struct virtio_net *dev)
 {
@@ -2120,7 +2125,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	if (unlikely(queue_id >= dev->nr_vring)) {
 		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
 			dev->vid, __func__, queue_id);
 		return 0;
@@ -2134,7 +2139,17 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, queue_id, pkts,
+				count, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
 
 	return n_pkts_cpl;
 }

From patchwork Wed Sep 22 08:55:46 2021
X-Patchwork-Submitter: "Wang, YuanX" <yuanx.wang@intel.com>
X-Patchwork-Id: 99409
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Yuan Wang <yuanx.wang@intel.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 xuan.ding@intel.com, cheng1.jiang@intel.com, wenwux.ma@intel.com,
 yvonnex.yang@intel.com, sunil.pai.g@intel.com
Date: Wed, 22 Sep 2021 08:55:46 +0000
Message-Id: <20210922085546.54758-3-yuanx.wang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210922085546.54758-1-yuanx.wang@intel.com>
References: <20210909065807.812145-1-yuanx.wang@intel.com>
 <20210922085546.54758-1-yuanx.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/2] vhost: add thread-safe API for clearing in-flight
 packets in async vhost

This patch adds a thread-safe version of the in-flight packet clearing
function.

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Tested-by: Yvonne Yang <yvonnex.yang@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst |  8 ++++-
 lib/vhost/rte_vhost_async.h         | 21 +++++++++++++
 lib/vhost/version.map               |  1 +
 lib/vhost/virtio_net.c              | 49 +++++++++++++++++++++++++++++
 4 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 9ed544db7a..bc21c879f3 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -300,7 +300,13 @@ The following is an overview of some key Vhost API functions:
 
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count)``
 
-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to async channel in vhost
+  async data path without performing any locking. Completed packets are
+  returned to applications through ``pkts``.
+
+* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count)``
+
+  Clear in-flight packets which are submitted to async channel in vhost async data
   path. Completed packets are returned to applications through ``pkts``.
 
 * ``rte_vhost_async_try_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count, nr_inflight)``

diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 5e2429ab70..a418e0a03d 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -261,6 +261,27 @@ int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
 __rte_experimental
 uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count);
+
+/**
+ * This function checks async completion status and clears packets for
+ * a specific vhost device queue. Packets which are in-flight will be
+ * returned in an array.
+ *
+ * @param vid
+ *  ID of vhost device to clear data
+ * @param queue_id
+ *  Queue id to clear data
+ * @param pkts
+ *  Blank array to get return packet pointer
+ * @param count
+ *  Size of the packet array
+ * @return
+ *  Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count);
+
 /**
  * This function tries to receive packets from the guest with offloading
  * copies to the async channel. The packets that are transfer completed

diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 8eb7e92c32..b87d5906b8 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -88,4 +88,5 @@ EXPERIMENTAL {
 
 	# added in 21.11
 	rte_vhost_async_try_dequeue_burst;
+	rte_vhost_clear_queue;
 };

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 21afcd1854..2bf8a511d5 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2154,6 +2154,55 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 	return n_pkts_cpl;
 }
 
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl = 0;
+
+	if (!dev)
+		return 0;
+
+	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
+	if (unlikely(queue_id >= dev->nr_vring)) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (unlikely(!vq->async_registered)) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: async not registered for queue id %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (!rte_spinlock_trylock(&vq->access_lock)) {
+		VHOST_LOG_DATA(ERR,
+			"(%d) %s: failed to clear async queue id %d, virtqueue busy.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	if (queue_id % 2 == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	else {
+		if (unlikely(vq_is_packed(dev)))
+			VHOST_LOG_DATA(ERR,
+				"(%d) %s: async dequeue does not support packed ring.\n",
+				dev->vid, __func__);
+		else
+			n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, queue_id, pkts,
+				count, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+	}
+
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return n_pkts_cpl;
+}
+
 static __rte_always_inline uint32_t
 virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint32_t count)
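
As a usage sketch of the new thread-safe API, not part of the patch: unlike
the _thread_unsafe variant, rte_vhost_clear_queue() may be called while
datapath threads are still running, because it takes the virtqueue access
lock with a trylock and returns 0 when the queue is busy, so the caller can
simply retry later. The burst size and the choice to free the completed
mbufs are assumptions for illustration.

    #include <rte_mbuf.h>
    #include <rte_vhost_async.h>

    #define CLEAR_BURST 32	/* illustrative burst size */

    /* Clear one async queue from a control thread; returns the number
     * of completed packets reclaimed (0 if none, or if the virtqueue
     * was busy and the trylock inside rte_vhost_clear_queue() failed). */
    static uint16_t
    try_clear_vhost_queue(int vid, uint16_t queue_id)
    {
    	struct rte_mbuf *pkts[CLEAR_BURST];
    	uint16_t i, n;

    	n = rte_vhost_clear_queue(vid, queue_id, pkts, CLEAR_BURST);
    	for (i = 0; i < n; i++)
    		rte_pktmbuf_free(pkts[i]);

    	return n;
    }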