From patchwork Wed Oct 12 06:40:07 2022
X-Patchwork-Submitter: "Liu, Changpeng"
X-Patchwork-Id: 118027
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Changpeng Liu
To: dev@dpdk.org
Cc: Changpeng Liu, Maxime Coquelin, Chenbo Xia, David Marchand
Subject: [PATCH v2] vhost: add new `rte_vhost_vring_call_nonblock` API
Date: Wed, 12 Oct 2022 14:40:07 +0800
Message-Id: <20221012064007.56040-1-changpeng.liu@intel.com>
In-Reply-To: <20220906022225.17215-1-changpeng.liu@intel.com>
References: <20220906022225.17215-1-changpeng.liu@intel.com>
List-Id: DPDK patches and discussions

The vhost-user library takes all virtqueues' access locks when processing
vring-based messages such as SET_VRING_KICK and SET_VRING_CALL, but the data
processing thread may already be running at that point. For example, SPDK
vhost-blk and vhost-scsi start the data processing thread as soon as one
vring is ready, so a deadlock may happen when SPDK posts interrupts to the
VM. Here, we add a new API which allows the caller to try again later in
this case.
Bugzilla ID: 1015
Fixes: c5736998305d ("vhost: fix missing virtqueue lock protection")

Signed-off-by: Changpeng Liu
Reviewed-by: Maxime Coquelin
---
 lib/vhost/rte_vhost.h | 15 +++++++++++++++
 lib/vhost/version.map |  3 +++
 lib/vhost/vhost.c     | 30 ++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index bb7d86a432..d22b25cd4e 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -909,6 +909,21 @@ rte_vhost_clr_inflight_desc_packed(int vid, uint16_t vring_idx,
  */
 int rte_vhost_vring_call(int vid, uint16_t vring_idx);
 
+/**
+ * Notify the guest that used descriptors have been added to the vring. This
+ * function acts as a memory barrier. This function will return -EAGAIN when
+ * vq's access lock is held by other thread, user should try again later.
+ *
+ * @param vid
+ *  vhost device ID
+ * @param vring_idx
+ *  vring index
+ * @return
+ *  0 on success, -1 on failure, -EAGAIN for another retry
+ */
+__rte_experimental
+int rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx);
+
 /**
  * Get vhost RX queue avail count.
  *
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 7a00b65740..c8c44b8326 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_vring_call_nonblock;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 8740aa2788..ed6efb003f 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1317,6 +1317,36 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
 	return 0;
 }
 
+int
+rte_vhost_vring_call_nonblock(int vid, uint16_t vring_idx)
+{
+	struct virtio_net *dev;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(vid);
+	if (!dev)
+		return -1;
+
+	if (vring_idx >= VHOST_MAX_VRING)
+		return -1;
+
+	vq = dev->virtqueue[vring_idx];
+	if (!vq)
+		return -1;
+
+	if (!rte_spinlock_trylock(&vq->access_lock))
+		return -EAGAIN;
+
+	if (vq_is_packed(dev))
+		vhost_vring_call_packed(dev, vq);
+	else
+		vhost_vring_call_split(dev, vq);
+
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return 0;
+}
+
 uint16_t
 rte_vhost_avail_entries(int vid, uint16_t queue_id)
 {
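
Below is a minimal sketch of how a backend poller might consume the new API.
The vq_poller structure and poller_post_interrupt() helper are hypothetical
names used only for illustration; they are not part of DPDK or SPDK. On
-EAGAIN the interrupt is deferred to the next poll iteration instead of
blocking on the vq access lock.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#include <rte_vhost.h>

/* Hypothetical per-queue poller state (illustrative only). */
struct vq_poller {
	int vid;
	uint16_t vring_idx;
	bool irq_pending;	/* interrupt deferred because the lock was busy */
};

static void
poller_post_interrupt(struct vq_poller *p)
{
	int ret = rte_vhost_vring_call_nonblock(p->vid, p->vring_idx);

	if (ret == -EAGAIN) {
		/* Another thread (e.g. the vhost-user message handler) holds
		 * the vq access lock; retry from the next poll iteration
		 * instead of blocking here, which avoids the deadlock
		 * described above. */
		p->irq_pending = true;
		return;
	}

	/* 0 on success, -1 if the device or vring is no longer valid. */
	p->irq_pending = false;
}

The poll loop would then check irq_pending on each iteration and call
poller_post_interrupt() again until the notification succeeds.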