From patchwork Fri Jul 16 19:51:27 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Hu, Jiayu"
X-Patchwork-Id: 95983
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
 by inbox.dpdk.org (Postfix) with ESMTP id 22DEBA0C50;
 Fri, 16 Jul 2021 15:24:12 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
 by mails.dpdk.org (Postfix) with ESMTP id 8465741375;
 Fri, 16 Jul 2021 15:24:10 +0200 (CEST)
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by mails.dpdk.org (Postfix) with ESMTP id 9662E40151;
 Fri, 16 Jul 2021 15:24:06 +0200 (CEST)
X-IronPort-AV: E=McAfee;i="6200,9189,10046"; a="210718529"
X-IronPort-AV: E=Sophos;i="5.84,245,1620716400"; d="scan'208";a="210718529"
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 16 Jul 2021 06:24:06 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.84,245,1620716400"; d="scan'208";a="496728415"
Received: from npg_dpdk_virtio_jiayuhu_07.sh.intel.com ([10.67.119.25])
 by FMSMGA003.fm.intel.com with ESMTP; 16 Jul 2021 06:24:04 -0700
From: Jiayu Hu
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, Jiayu Hu ,
 stable@dpdk.org
Date: Fri, 16 Jul 2021 15:51:27 -0400
Message-Id: <1626465089-17052-2-git-send-email-jiayu.hu@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1626465089-17052-1-git-send-email-jiayu.hu@intel.com>
References: <1626162383-89674-2-git-send-email-jiayu.hu@intel.com>
 <1626465089-17052-1-git-send-email-jiayu.hu@intel.com>
Subject: [dpdk-dev] [PATCH v5 1/3] vhost: fix lock on device readiness notification
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

The vhost library notifies the application of device readiness via
vhost_user_notify_queue_state(), but the call to this function is not
protected by the lock. This patch makes the call lock-protected.
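For illustration only, the locking pattern this fix enforces can be sketched
outside of the vhost-user code: state that readers consume under a lock must
also be published to them under that same lock. In the minimal, self-contained
C sketch below, notify_app() is a hypothetical stand-in for
vhost_user_notify_queue_state() and the spinlock plays the role of the
per-queue access locks; none of these names are part of the patch.

#include <stdbool.h>
#include <rte_spinlock.h>

/* Hypothetical stand-in for vhost_user_notify_queue_state(). */
static void (*notify_app)(int qid, bool enabled);

struct queue_state {
	rte_spinlock_t lock;	/* plays the role of the queue access lock */
	bool enabled;
};

static void
update_and_notify(struct queue_state *q, int qid, bool enabled)
{
	rte_spinlock_lock(&q->lock);
	q->enabled = enabled;
	/* Notify while the lock is still held, mirroring the patch, which
	 * releases the queue locks only after the readiness notification. */
	if (notify_app != NULL)
		notify_app(qid, enabled);
	rte_spinlock_unlock(&q->lock);
}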
Fixes: d0fcc38f5fa4 ("vhost: improve device readiness notifications")
Cc: stable@dpdk.org

Signed-off-by: Jiayu Hu
Reviewed-by: Maxime Coquelin
---
 lib/vhost/vhost_user.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 031c578..31300e1 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -2995,9 +2995,6 @@ vhost_user_msg_handler(int vid, int fd)
 		}
 	}
 
-	if (unlock_required)
-		vhost_user_unlock_all_queue_pairs(dev);
-
 	/* If message was not handled at this stage, treat it as an error */
 	if (!handled) {
 		VHOST_LOG_CONFIG(ERR,
@@ -3032,6 +3029,8 @@
 		}
 	}
 
+	if (unlock_required)
+		vhost_user_unlock_all_queue_pairs(dev);
 	if (!virtio_is_ready(dev))
 		goto out;

From patchwork Fri Jul 16 19:51:28 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Hu, Jiayu"
X-Patchwork-Id: 95984
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
 by inbox.dpdk.org (Postfix) with ESMTP id 0E022A0C50;
 Fri, 16 Jul 2021 15:24:17 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
 by mails.dpdk.org (Postfix) with ESMTP id A49654137A;
 Fri, 16 Jul 2021 15:24:11 +0200 (CEST)
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by mails.dpdk.org (Postfix) with ESMTP id 1926B4136F
 for ; Fri, 16 Jul 2021 15:24:07 +0200 (CEST)
X-IronPort-AV: E=McAfee;i="6200,9189,10046"; a="210718532"
X-IronPort-AV: E=Sophos;i="5.84,245,1620716400"; d="scan'208";a="210718532"
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 16 Jul 2021 06:24:07 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.84,245,1620716400"; d="scan'208";a="496728420"
Received: from npg_dpdk_virtio_jiayuhu_07.sh.intel.com ([10.67.119.25])
 by FMSMGA003.fm.intel.com with ESMTP; 16 Jul 2021 06:24:06 -0700
From: Jiayu Hu
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, Jiayu Hu
Date: Fri, 16 Jul 2021 15:51:28 -0400
Message-Id: <1626465089-17052-3-git-send-email-jiayu.hu@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1626465089-17052-1-git-send-email-jiayu.hu@intel.com>
References: <1626162383-89674-2-git-send-email-jiayu.hu@intel.com>
 <1626465089-17052-1-git-send-email-jiayu.hu@intel.com>
Subject: [dpdk-dev] [PATCH v5 2/3] vhost: rework async configuration structure
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

This patch reworks the async configuration structure to improve code
readability. In addition, it adds reserved padding fields to the
structure for future use.
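As a usage sketch (not part of the patch), a registration call with the
reworked structure could look like the following. register_async_channel() is
a hypothetical helper, and the ops table is assumed to be the application's
DMA callbacks, e.g. the ioat ones used by examples/vhost; the field names
mirror the new struct rte_vhost_async_config introduced below.

#include <stdint.h>
#include <rte_vhost_async.h>

/* Illustrative helper: fill the reworked config and register the channel. */
static int
register_async_channel(int vid, uint16_t queue_id,
		struct rte_vhost_async_channel_ops *ops)
{
	struct rte_vhost_async_config config = {0};

	/* Feature bits replace the old async_inorder bitfield. */
	config.features = RTE_VHOST_ASYNC_INORDER;
	/* Copies shorter than this threshold stay on the CPU. */
	config.async_threshold = 256;

	/* The configuration is now passed as a structure rather than a
	 * packed uint32_t. */
	return rte_vhost_async_channel_register(vid, queue_id, config, ops);
}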
Signed-off-by: Jiayu Hu Reviewed-by: Chenbo Xia --- doc/guides/prog_guide/vhost_lib.rst | 21 +++++++++-------- examples/vhost/main.c | 8 +++---- lib/vhost/rte_vhost_async.h | 45 ++++++++++++++++++------------------- lib/vhost/vhost.c | 19 +++++++--------- lib/vhost/vhost.h | 3 +-- 5 files changed, 47 insertions(+), 49 deletions(-) diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst index d18fb98..2a61b85 100644 --- a/doc/guides/prog_guide/vhost_lib.rst +++ b/doc/guides/prog_guide/vhost_lib.rst @@ -218,26 +218,29 @@ The following is an overview of some key Vhost API functions: Enable or disable zero copy feature of the vhost crypto backend. -* ``rte_vhost_async_channel_register(vid, queue_id, features, ops)`` +* ``rte_vhost_async_channel_register(vid, queue_id, config, ops)`` - Register a vhost queue with async copy device channel after vring - is enabled. Following device ``features`` must be specified together + Register an async copy device channel for a vhost queue after vring + is enabled. Following device ``config`` must be specified together with the registration: - * ``async_inorder`` + * ``features`` - Async copy device can guarantee the ordering of copy completion - sequence. Copies are completed in the same order with that at - the submission time. + This field is used to specify async copy device features. - Currently, only ``async_inorder`` capable device is supported by vhost. + ``RTE_VHOST_ASYNC_INORDER`` represents the async copy device can + guarantee the order of copy completion is the same as the order + of copy submission. + + Currently, only ``RTE_VHOST_ASYNC_INORDER`` capable device is + supported by vhost. * ``async_threshold`` The copy length (in bytes) below which CPU copy will be used even if applications call async vhost APIs to enqueue/dequeue data. - Typical value is 512~1024 depending on the async device capability. + Typical value is 256~1024 depending on the async device capability. 
Applications must provide following ``ops`` callbacks for vhost lib to work with the async copy devices: diff --git a/examples/vhost/main.c b/examples/vhost/main.c index d2179ea..9cd855a 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -1468,7 +1468,7 @@ new_device(int vid) vid, vdev->coreid); if (async_vhost_driver) { - struct rte_vhost_async_features f; + struct rte_vhost_async_config config = {0}; struct rte_vhost_async_channel_ops channel_ops; if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) { @@ -1476,11 +1476,11 @@ new_device(int vid) channel_ops.check_completed_copies = ioat_check_completed_copies_cb; - f.async_inorder = 1; - f.async_threshold = 256; + config.features = RTE_VHOST_ASYNC_INORDER; + config.async_threshold = 256; return rte_vhost_async_channel_register(vid, VIRTIO_RXQ, - f.intval, &channel_ops); + config, &channel_ops); } } diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index 6faa31f..52b1727 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -93,49 +93,48 @@ struct async_inflight_info { }; /** - * dma channel feature bit definition + * async channel features */ -struct rte_vhost_async_features { - union { - uint32_t intval; - struct { - uint32_t async_inorder:1; - uint32_t resvd_0:15; - uint32_t async_threshold:12; - uint32_t resvd_1:4; - }; - }; +enum { + RTE_VHOST_ASYNC_FEATURE_UNKNOWN = 0U, + RTE_VHOST_ASYNC_INORDER = 1U << 0, }; /** - * register an async channel for vhost + * async channel configuration + */ +struct rte_vhost_async_config { + uint32_t async_threshold; + uint32_t features; + uint32_t rsvd[2]; +}; + +/** + * Register an async channel for a vhost queue * * @param vid * vhost device id async channel to be attached to * @param queue_id * vhost queue id async channel to be attached to - * @param features - * DMA channel feature bit - * b0 : DMA supports inorder data transfer - * b1 - b15: reserved - * b16 - b27: Packet length threshold for DMA transfer - * b28 - b31: reserved + * @param config + * Async channel configuration structure * @param ops - * DMA operation callbacks + * Async channel operation callbacks * @return * 0 on success, -1 on failures */ __rte_experimental int rte_vhost_async_channel_register(int vid, uint16_t queue_id, - uint32_t features, struct rte_vhost_async_channel_ops *ops); + struct rte_vhost_async_config config, + struct rte_vhost_async_channel_ops *ops); /** - * unregister a dma channel for vhost + * Unregister an async channel for a vhost queue * * @param vid - * vhost device id DMA channel to be detached + * vhost device id async channel to be detached from * @param queue_id - * vhost queue id DMA channel to be detached + * vhost queue id async channel to be detached from * @return * 0 on success, -1 on failures */ diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 53a470f..908758e 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -1619,19 +1619,17 @@ int rte_vhost_extern_callback_register(int vid, return 0; } -int rte_vhost_async_channel_register(int vid, uint16_t queue_id, - uint32_t features, - struct rte_vhost_async_channel_ops *ops) +int +rte_vhost_async_channel_register(int vid, uint16_t queue_id, + struct rte_vhost_async_config config, + struct rte_vhost_async_channel_ops *ops) { struct vhost_virtqueue *vq; struct virtio_net *dev = get_device(vid); - struct rte_vhost_async_features f; if (dev == NULL || ops == NULL) return -1; - f.intval = features; - if (queue_id >= VHOST_MAX_VRING) return -1; @@ -1640,7 +1638,7 @@ int 
rte_vhost_async_channel_register(int vid, uint16_t queue_id,
 	if (unlikely(vq == NULL || !dev->async_copy))
 		return -1;
 
-	if (unlikely(!f.async_inorder)) {
+	if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
 		VHOST_LOG_CONFIG(ERR,
 			"async copy is not supported on non-inorder mode "
 			"(vid %d, qid: %d)\n", vid, queue_id);
 		return -1;
 	}
@@ -1719,9 +1717,7 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
 	vq->async_ops.check_completed_copies = ops->check_completed_copies;
 	vq->async_ops.transfer_data = ops->transfer_data;
-
-	vq->async_inorder = f.async_inorder;
-	vq->async_threshold = f.async_threshold;
+	vq->async_threshold = config.async_threshold;
 
 	vq->async_registered = true;
 
@@ -1731,7 +1727,8 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
 	return 0;
 }
 
-int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
+int
+rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
 {
 	struct vhost_virtqueue *vq;
 	struct virtio_net *dev = get_device(vid);
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 8ffe387..d98ca8a 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -218,9 +218,8 @@ struct vhost_virtqueue {
 	};
 
 	/* vq async features */
-	bool		async_inorder;
 	bool		async_registered;
-	uint16_t	async_threshold;
+	uint32_t	async_threshold;
 
 	int			notif_enable;
 #define VIRTIO_UNINITIALIZED_NOTIF	(-1)

From patchwork Fri Jul 16 19:51:29 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Hu, Jiayu"
X-Patchwork-Id: 95985
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
 by inbox.dpdk.org (Postfix) with ESMTP id 360DEA0C50;
 Fri, 16 Jul 2021 15:24:22 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
 by mails.dpdk.org (Postfix) with ESMTP id DBD354137E;
 Fri, 16 Jul 2021 15:24:14 +0200 (CEST)
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
 by mails.dpdk.org (Postfix) with ESMTP id 7143C40151
 for ; Fri, 16 Jul 2021 15:24:10 +0200 (CEST)
X-IronPort-AV: E=McAfee;i="6200,9189,10046"; a="210718538"
X-IronPort-AV: E=Sophos;i="5.84,245,1620716400"; d="scan'208";a="210718538"
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 16 Jul 2021 06:24:09 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.84,245,1620716400"; d="scan'208";a="496728429"
Received: from npg_dpdk_virtio_jiayuhu_07.sh.intel.com ([10.67.119.25])
 by FMSMGA003.fm.intel.com with ESMTP; 16 Jul 2021 06:24:07 -0700
From: Jiayu Hu
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, Jiayu Hu
Date: Fri, 16 Jul 2021 15:51:29 -0400
Message-Id: <1626465089-17052-4-git-send-email-jiayu.hu@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1626465089-17052-1-git-send-email-jiayu.hu@intel.com>
References: <1626162383-89674-2-git-send-email-jiayu.hu@intel.com>
 <1626465089-17052-1-git-send-email-jiayu.hu@intel.com>
Subject: [dpdk-dev] [PATCH v5 3/3] vhost: add thread unsafe async registration functions
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

This patch adds thread-unsafe versions of the async register and
unregister functions.
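As a usage sketch (not part of the patch), the new variants are intended to be
called from vhost callback functions, as the documentation update in this
patch notes. app_async_ops, the callback name, the feature flag and the
threshold value below are assumptions for illustration only.

#include <stdint.h>
#include <rte_vhost.h>
#include <rte_vhost_async.h>

/* Assumed to be defined elsewhere by the application. */
extern struct rte_vhost_async_channel_ops app_async_ops;

static int
app_vring_state_changed(int vid, uint16_t queue_id, int enable)
{
	struct rte_vhost_async_config config = {0};

	config.features = RTE_VHOST_ASYNC_INORDER;
	config.async_threshold = 256;

	/* Per the API note, the thread-unsafe variants are only safe inside
	 * vhost callbacks such as this one. */
	if (enable)
		return rte_vhost_async_channel_register_thread_unsafe(vid,
				queue_id, config, &app_async_ops);

	return rte_vhost_async_channel_unregister_thread_unsafe(vid, queue_id);
}

static const struct vhost_device_ops app_vhost_ops = {
	.vring_state_changed = app_vring_state_changed,
	/* .new_device / .destroy_device omitted for brevity */
};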
Signed-off-by: Jiayu Hu Reviewed-by: Chenbo Xia --- doc/guides/prog_guide/vhost_lib.rst | 14 ++++ lib/vhost/rte_vhost_async.h | 41 ++++++++++ lib/vhost/version.map | 4 + lib/vhost/vhost.c | 149 +++++++++++++++++++++++++++--------- 4 files changed, 173 insertions(+), 35 deletions(-) diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst index 2a61b85..c8638db 100644 --- a/doc/guides/prog_guide/vhost_lib.rst +++ b/doc/guides/prog_guide/vhost_lib.rst @@ -256,6 +256,13 @@ The following is an overview of some key Vhost API functions: vhost invokes this function to get the copy data completed by async devices. +* ``rte_vhost_async_channel_register_thread_unsafe(vid, queue_id, config, ops)`` + Register an async copy device channel for a vhost queue without + performing any locking. + + This function is only safe to call in vhost callback functions + (i.e., struct vhost_device_ops). + * ``rte_vhost_async_channel_unregister(vid, queue_id)`` Unregister the async copy device channel from a vhost queue. @@ -268,6 +275,13 @@ The following is an overview of some key Vhost API functions: devices for all vhost queues in destroy_device(), when a virtio device is paused or shut down. +* ``rte_vhost_async_channel_unregister_thread_unsafe(vid, queue_id)`` + Unregister the async copy device channel for a vhost queue without + performing any locking. + + This function is only safe to call in vhost callback functions + (i.e., struct vhost_device_ops). + * ``rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count, comp_pkts, comp_count)`` Submit an enqueue request to transmit ``count`` packets from host to guest diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index 52b1727..8f7fd35 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -142,6 +142,47 @@ __rte_experimental int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id); /** + * Register an async channel for a vhost queue without performing any + * locking + * + * @note This function does not perform any locking, and is only safe to + * call in vhost callback functions. + * + * @param vid + * vhost device id async channel to be attached to + * @param queue_id + * vhost queue id async channel to be attached to + * @param config + * Async channel configuration + * @param ops + * Async channel operation callbacks + * @return + * 0 on success, -1 on failures + */ +__rte_experimental +int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id, + struct rte_vhost_async_config config, + struct rte_vhost_async_channel_ops *ops); + +/** + * Unregister an async channel for a vhost queue without performing any + * locking + * + * @note This function does not perform any locking, and is only safe to + * call in vhost callback functions. + * + * @param vid + * vhost device id async channel to be detached from + * @param queue_id + * vhost queue id async channel to be detached from + * @return + * 0 on success, -1 on failures + */ +__rte_experimental +int rte_vhost_async_channel_unregister_thread_unsafe(int vid, + uint16_t queue_id); + +/** * This function submits enqueue data to async engine. Successfully * enqueued packets can be transfer completed or being occupied by DMA * engines, when this API returns. 
Transfer completed packets are returned diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 9103a23..2363db8 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -79,4 +79,8 @@ EXPERIMENTAL { # added in 21.05 rte_vhost_get_negotiated_protocol_features; + + # added in 21.08 + rte_vhost_async_channel_register_thread_unsafe; + rte_vhost_async_channel_unregister_thread_unsafe; }; diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 908758e..d3d0205 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -1619,43 +1619,19 @@ int rte_vhost_extern_callback_register(int vid, return 0; } -int -rte_vhost_async_channel_register(int vid, uint16_t queue_id, +static __rte_always_inline int +async_channel_register(int vid, uint16_t queue_id, struct rte_vhost_async_config config, struct rte_vhost_async_channel_ops *ops) { - struct vhost_virtqueue *vq; struct virtio_net *dev = get_device(vid); - - if (dev == NULL || ops == NULL) - return -1; - - if (queue_id >= VHOST_MAX_VRING) - return -1; - - vq = dev->virtqueue[queue_id]; - - if (unlikely(vq == NULL || !dev->async_copy)) - return -1; - - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) { - VHOST_LOG_CONFIG(ERR, - "async copy is not supported on non-inorder mode " - "(vid %d, qid: %d)\n", vid, queue_id); - return -1; - } - - if (unlikely(ops->check_completed_copies == NULL || - ops->transfer_data == NULL)) - return -1; - - rte_spinlock_lock(&vq->access_lock); + struct vhost_virtqueue *vq = dev->virtqueue[queue_id]; if (unlikely(vq->async_registered)) { VHOST_LOG_CONFIG(ERR, "async register failed: channel already registered " "(vid %d, qid: %d)\n", vid, queue_id); - goto reg_out; + return -1; } vq->async_pkts_info = rte_malloc_socket(NULL, @@ -1666,7 +1642,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id, VHOST_LOG_CONFIG(ERR, "async register failed: cannot allocate memory for async_pkts_info " "(vid %d, qid: %d)\n", vid, queue_id); - goto reg_out; + return -1; } vq->it_pool = rte_malloc_socket(NULL, @@ -1677,7 +1653,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id, VHOST_LOG_CONFIG(ERR, "async register failed: cannot allocate memory for it_pool " "(vid %d, qid: %d)\n", vid, queue_id); - goto reg_out; + return -1; } vq->vec_pool = rte_malloc_socket(NULL, @@ -1688,7 +1664,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id, VHOST_LOG_CONFIG(ERR, "async register failed: cannot allocate memory for vec_pool " "(vid %d, qid: %d)\n", vid, queue_id); - goto reg_out; + return -1; } if (vq_is_packed(dev)) { @@ -1700,7 +1676,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id, VHOST_LOG_CONFIG(ERR, "async register failed: cannot allocate memory for async buffers " "(vid %d, qid: %d)\n", vid, queue_id); - goto reg_out; + return -1; } } else { vq->async_descs_split = rte_malloc_socket(NULL, @@ -1711,7 +1687,7 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id, VHOST_LOG_CONFIG(ERR, "async register failed: cannot allocate memory for async descs " "(vid %d, qid: %d)\n", vid, queue_id); - goto reg_out; + return -1; } } @@ -1721,10 +1697,78 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id, vq->async_registered = true; -reg_out: + return 0; +} + +int +rte_vhost_async_channel_register(int vid, uint16_t queue_id, + struct rte_vhost_async_config config, + struct rte_vhost_async_channel_ops *ops) +{ + struct vhost_virtqueue *vq; + struct virtio_net *dev = get_device(vid); + int ret; + + if (dev == NULL || ops == NULL) + return -1; + + if (queue_id >= 
VHOST_MAX_VRING) + return -1; + + vq = dev->virtqueue[queue_id]; + + if (unlikely(vq == NULL || !dev->async_copy)) + return -1; + + if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) { + VHOST_LOG_CONFIG(ERR, + "async copy is not supported on non-inorder mode " + "(vid %d, qid: %d)\n", vid, queue_id); + return -1; + } + + if (unlikely(ops->check_completed_copies == NULL || + ops->transfer_data == NULL)) + return -1; + + rte_spinlock_lock(&vq->access_lock); + ret = async_channel_register(vid, queue_id, config, ops); rte_spinlock_unlock(&vq->access_lock); - return 0; + return ret; +} + +int +rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id, + struct rte_vhost_async_config config, + struct rte_vhost_async_channel_ops *ops) +{ + struct vhost_virtqueue *vq; + struct virtio_net *dev = get_device(vid); + + if (dev == NULL || ops == NULL) + return -1; + + if (queue_id >= VHOST_MAX_VRING) + return -1; + + vq = dev->virtqueue[queue_id]; + + if (unlikely(vq == NULL || !dev->async_copy)) + return -1; + + if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) { + VHOST_LOG_CONFIG(ERR, + "async copy is not supported on non-inorder mode " + "(vid %d, qid: %d)\n", vid, queue_id); + return -1; + } + + if (unlikely(ops->check_completed_copies == NULL || + ops->transfer_data == NULL)) + return -1; + + return async_channel_register(vid, queue_id, config, ops); } int @@ -1775,5 +1819,40 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) return ret; } +int +rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) +{ + struct vhost_virtqueue *vq; + struct virtio_net *dev = get_device(vid); + + if (dev == NULL) + return -1; + + if (queue_id >= VHOST_MAX_VRING) + return -1; + + vq = dev->virtqueue[queue_id]; + + if (vq == NULL) + return -1; + + if (!vq->async_registered) + return 0; + + if (vq->async_pkts_inflight_n) { + VHOST_LOG_CONFIG(ERR, "Failed to unregister async channel. " + "async inflight packets must be completed before unregistration.\n"); + return -1; + } + + vhost_free_async_mem(vq); + + vq->async_ops.transfer_data = NULL; + vq->async_ops.check_completed_copies = NULL; + vq->async_registered = false; + + return 0; +} + RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO); RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
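For completeness, a hedged sketch of the application-side tear-down that both
unregister variants expect: in-flight async packets must be completed before
unregistration, so completions are drained first. APP_ASYNC_BURST and the
logging are placeholders, and the loop assumes the DMA engine eventually
completes every copy that was submitted.

#include <stdint.h>
#include <rte_log.h>
#include <rte_mbuf.h>
#include <rte_vhost_async.h>

#define APP_ASYNC_BURST 32	/* hypothetical application burst size */

static void
app_drain_and_unregister(int vid, uint16_t queue_id)
{
	struct rte_mbuf *comp_pkts[APP_ASYNC_BURST];
	uint16_t n;

	/* Reap completed async copies until nothing is returned; the
	 * unregister call below refuses to proceed with in-flight packets. */
	do {
		n = rte_vhost_poll_enqueue_completed(vid, queue_id,
				comp_pkts, APP_ASYNC_BURST);
		rte_pktmbuf_free_bulk(comp_pkts, n);
	} while (n > 0);

	if (rte_vhost_async_channel_unregister(vid, queue_id) != 0)
		RTE_LOG(ERR, USER1,
			"failed to unregister async channel (vid %d, qid %u)\n",
			vid, queue_id);
}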