From patchwork Mon Aug 19 11:34:57 2019
X-Patchwork-Submitter: Tiwei Bie
X-Patchwork-Id: 57740
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Tiwei Bie
To: dev@dpdk.org, maxime.coquelin@redhat.com, zhihong.wang@intel.com
Cc: lvyilong.lyl@alibaba-inc.com, yinan.wang@intel.com, xnhp0320@icloud.com,
 stable@dpdk.org
Date: Mon, 19 Aug 2019 19:34:57 +0800
Message-Id: <20190819113457.15569-4-tiwei.bie@intel.com>
In-Reply-To: <20190819113457.15569-1-tiwei.bie@intel.com>
References: <20190819113457.15569-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 3/3] vhost: protect vring access done by application

Besides the enqueue/dequeue API, the other APIs of the builtin net
backend should also be protected.
Fixes: a3688046995f ("vhost: protect active rings from async ring changes")
Cc: stable@dpdk.org

Reported-by: Peng He
Signed-off-by: Tiwei Bie
Reviewed-by: Maxime Coquelin
---
 lib/librte_vhost/vhost.c | 50 +++++++++++++++++++++++++++++++---------
 1 file changed, 39 insertions(+), 11 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 77be16069..cea44df8c 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -785,22 +785,33 @@ rte_vhost_avail_entries(int vid, uint16_t queue_id)
 {
 	struct virtio_net *dev;
 	struct vhost_virtqueue *vq;
+	uint16_t ret = 0;
 
 	dev = get_device(vid);
 	if (!dev)
 		return 0;
 
 	vq = dev->virtqueue[queue_id];
-	if (!vq->enabled)
-		return 0;
 
-	return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx;
+	rte_spinlock_lock(&vq->access_lock);
+
+	if (unlikely(!vq->enabled || vq->avail == NULL))
+		goto out;
+
+	ret = *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx;
+
+out:
+	rte_spinlock_unlock(&vq->access_lock);
+	return ret;
 }
 
-static inline void
+static inline int
 vhost_enable_notify_split(struct virtio_net *dev,
 		struct vhost_virtqueue *vq, int enable)
 {
+	if (vq->used == NULL)
+		return -1;
+
 	if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
 		if (enable)
 			vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
@@ -810,17 +821,21 @@ vhost_enable_notify_split(struct virtio_net *dev,
 		if (enable)
 			vhost_avail_event(vq) = vq->last_avail_idx;
 	}
+	return 0;
 }
 
-static inline void
+static inline int
 vhost_enable_notify_packed(struct virtio_net *dev,
 		struct vhost_virtqueue *vq, int enable)
 {
 	uint16_t flags;
 
+	if (vq->device_event == NULL)
+		return -1;
+
 	if (!enable) {
 		vq->device_event->flags = VRING_EVENT_F_DISABLE;
-		return;
+		return 0;
 	}
 
 	flags = VRING_EVENT_F_ENABLE;
@@ -833,6 +848,7 @@ vhost_enable_notify_packed(struct virtio_net *dev,
 
 	rte_smp_wmb();
 	vq->device_event->flags = flags;
+	return 0;
 }
 
 int
@@ -840,18 +856,23 @@ rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 {
 	struct virtio_net *dev = get_device(vid);
 	struct vhost_virtqueue *vq;
+	int ret;
 
 	if (!dev)
 		return -1;
 
 	vq = dev->virtqueue[queue_id];
 
+	rte_spinlock_lock(&vq->access_lock);
+
 	if (vq_is_packed(dev))
-		vhost_enable_notify_packed(dev, vq, enable);
+		ret = vhost_enable_notify_packed(dev, vq, enable);
 	else
-		vhost_enable_notify_split(dev, vq, enable);
+		ret = vhost_enable_notify_split(dev, vq, enable);
 
-	return 0;
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return ret;
 }
 
 void
@@ -890,6 +911,7 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid)
 {
 	struct virtio_net *dev;
 	struct vhost_virtqueue *vq;
+	uint32_t ret = 0;
 
 	dev = get_device(vid);
 	if (dev == NULL)
@@ -905,10 +927,16 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid)
 	if (vq == NULL)
 		return 0;
 
+	rte_spinlock_lock(&vq->access_lock);
+
 	if (unlikely(vq->enabled == 0 || vq->avail == NULL))
-		return 0;
+		goto out;
 
-	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
+	ret = *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
+
+out:
+	rte_spinlock_unlock(&vq->access_lock);
+	return ret;
 }
 
 int
 rte_vhost_get_vdpa_device_id(int vid)