From patchwork Thu Aug 31 09:50:15 2017
X-Patchwork-Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
X-Patchwork-Id: 28133
X-Patchwork-Delegate: yuanhan.liu@linux.intel.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, yliu@fridaylinux.org, jfreiman@redhat.com,
	tiwei.bie@intel.com
Cc: mst@redhat.com, vkaplans@redhat.com, jasowang@redhat.com,
	Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 31 Aug 2017 11:50:15 +0200
Message-Id: <20170831095023.21037-14-maxime.coquelin@redhat.com>
In-Reply-To: <20170831095023.21037-1-maxime.coquelin@redhat.com>
References: <20170831095023.21037-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH 13/21] vhost: use the guest IOVA to host VA helper

Replace the rte_vhost_gpa_to_vva() calls with vhost_iova_to_vva(), which
also requires passing the length of the mapping being accessed and the
access permissions needed on it.
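The conversion follows the same pattern at every call site. As a minimal
sketch of that pattern (the names are the ones already used in
virtio_net.c; VHOST_ACCESS_RO is passed instead of VHOST_ACCESS_RW
wherever the path only reads the guest buffer):

	/* Before: desc->addr is assumed to be a guest physical address. */
	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);

	/* After: desc->addr is an IOVA; the helper also takes the length
	 * of the mapping being accessed and the permission required on it
	 * (RW on paths that write into guest buffers, RO on paths that
	 * only read them).
	 */
	desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
					desc->len, VHOST_ACCESS_RW);
	if (unlikely(!desc_addr))
		return -1;	/* no valid translation, abort */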
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 71 +++++++++++++++++++++++++++++--------------
 1 file changed, 49 insertions(+), 22 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 04255dc85..18531c97d 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -168,8 +168,9 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 }
 
 static __rte_always_inline int
-copy_mbuf_to_desc(struct virtio_net *dev, struct vring_desc *descs,
-		  struct rte_mbuf *m, uint16_t desc_idx, uint32_t size)
+copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		  struct vring_desc *descs, struct rte_mbuf *m,
+		  uint16_t desc_idx, uint32_t size)
 {
 	uint32_t desc_avail, desc_offset;
 	uint32_t mbuf_avail, mbuf_offset;
@@ -180,7 +181,8 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vring_desc *descs,
 	uint16_t nr_desc = 1;
 
 	desc = &descs[desc_idx];
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
+					desc->len, VHOST_ACCESS_RW);
 	/*
 	 * Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid
 	 * performance issue with some versions of gcc (4.8.4 and 5.3.0) which
@@ -219,7 +221,9 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vring_desc *descs,
 			return -1;
 
 		desc = &descs[desc->next];
-		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		desc_addr = vhost_iova_to_vva(dev, vq, desc->addr,
+						desc->len,
+						VHOST_ACCESS_RW);
 		if (unlikely(!desc_addr))
 			return -1;
 
@@ -304,8 +308,10 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 		if (vq->desc[desc_idx].flags & VRING_DESC_F_INDIRECT) {
 			descs = (struct vring_desc *)(uintptr_t)
-				rte_vhost_gpa_to_vva(dev->mem,
-					vq->desc[desc_idx].addr);
+				vhost_iova_to_vva(dev,
+						vq, vq->desc[desc_idx].addr,
+						vq->desc[desc_idx].len,
+						VHOST_ACCESS_RO);
 			if (unlikely(!descs)) {
 				count = i;
 				break;
@@ -318,7 +324,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 			sz = vq->size;
 		}
 
-		err = copy_mbuf_to_desc(dev, descs, pkts[i], desc_idx, sz);
+		err = copy_mbuf_to_desc(dev, vq, descs, pkts[i], desc_idx, sz);
 		if (unlikely(err)) {
 			count = i;
 			break;
@@ -361,7 +367,9 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 		if (vq->desc[idx].flags & VRING_DESC_F_INDIRECT) {
 			descs = (struct vring_desc *)(uintptr_t)
-				rte_vhost_gpa_to_vva(dev->mem, vq->desc[idx].addr);
+				vhost_iova_to_vva(dev, vq, vq->desc[idx].addr,
+						vq->desc[idx].len,
+						VHOST_ACCESS_RO);
 			if (unlikely(!descs))
 				return -1;
 
@@ -436,8 +444,9 @@ reserve_avail_buf_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline int
-copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct rte_mbuf *m,
-			    struct buf_vector *buf_vec, uint16_t num_buffers)
+copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
+			    struct rte_mbuf *m, struct buf_vector *buf_vec,
+			    uint16_t num_buffers)
 {
 	uint32_t vec_idx = 0;
 	uint64_t desc_addr;
@@ -450,7 +459,9 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct rte_mbuf *m,
 	if (unlikely(m == NULL))
 		return -1;
 
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, buf_vec[vec_idx].buf_addr);
+	desc_addr = vhost_iova_to_vva(dev, vq, buf_vec[vec_idx].buf_addr,
+					buf_vec[vec_idx].buf_len,
+					VHOST_ACCESS_RW);
 	if (buf_vec[vec_idx].buf_len < dev->vhost_hlen || !desc_addr)
 		return -1;
 
@@ -471,8 +482,11 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct rte_mbuf *m,
 		/* done with current desc buf, get the next one */
 		if (desc_avail == 0) {
 			vec_idx++;
-			desc_addr = rte_vhost_gpa_to_vva(dev->mem,
-					buf_vec[vec_idx].buf_addr);
+			desc_addr =
+				vhost_iova_to_vva(dev, vq,
+					buf_vec[vec_idx].buf_addr,
+					buf_vec[vec_idx].buf_len,
+					VHOST_ACCESS_RW);
 			if (unlikely(!desc_addr))
 				return -1;
 
@@ -569,7 +583,7 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 			dev->vid, vq->last_avail_idx,
 			vq->last_avail_idx + num_buffers);
 
-		if (copy_mbuf_to_desc_mergeable(dev, pkts[pkt_idx],
+		if (copy_mbuf_to_desc_mergeable(dev, vq, pkts[pkt_idx],
 						buf_vec, num_buffers) < 0) {
 			vq->shadow_used_idx -= num_buffers;
 			break;
@@ -768,8 +782,9 @@ put_zmbuf(struct zcopy_mbuf *zmbuf)
 }
 
 static __rte_always_inline int
-copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
-		  uint16_t max_desc, struct rte_mbuf *m, uint16_t desc_idx,
+copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		  struct vring_desc *descs, uint16_t max_desc,
+		  struct rte_mbuf *m, uint16_t desc_idx,
 		  struct rte_mempool *mbuf_pool)
 {
 	struct vring_desc *desc;
@@ -787,7 +802,10 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
 			(desc->flags & VRING_DESC_F_INDIRECT))
 		return -1;
 
-	desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+	desc_addr = vhost_iova_to_vva(dev,
+					vq, desc->addr,
+					desc->len,
+					VHOST_ACCESS_RO);
 	if (unlikely(!desc_addr))
 		return -1;
 
@@ -807,7 +825,10 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
 		if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
 			return -1;
 
-		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		desc_addr = vhost_iova_to_vva(dev,
+						vq, desc->addr,
+						desc->len,
+						VHOST_ACCESS_RO);
 		if (unlikely(!desc_addr))
 			return -1;
 
@@ -871,7 +892,10 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vring_desc *descs,
 		if (unlikely(desc->flags & VRING_DESC_F_INDIRECT))
 			return -1;
 
-		desc_addr = rte_vhost_gpa_to_vva(dev->mem, desc->addr);
+		desc_addr = vhost_iova_to_vva(dev,
+						vq, desc->addr,
+						desc->len,
+						VHOST_ACCESS_RO);
 		if (unlikely(!desc_addr))
 			return -1;
 
@@ -1117,8 +1141,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 		if (vq->desc[desc_indexes[i]].flags & VRING_DESC_F_INDIRECT) {
 			desc = (struct vring_desc *)(uintptr_t)
-				rte_vhost_gpa_to_vva(dev->mem,
-					vq->desc[desc_indexes[i]].addr);
+				vhost_iova_to_vva(dev, vq,
+						vq->desc[desc_indexes[i]].addr,
+						sizeof(*desc),
+						VHOST_ACCESS_RO);
 			if (unlikely(!desc))
 				break;
 
@@ -1138,7 +1164,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			break;
 		}
 
-		err = copy_desc_to_mbuf(dev, desc, sz, pkts[i], idx, mbuf_pool);
+		err = copy_desc_to_mbuf(dev, vq, desc, sz,
+					pkts[i], idx, mbuf_pool);
 		if (unlikely(err)) {
 			rte_pktmbuf_free(pkts[i]);
 			break;
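For reference, vhost_iova_to_vva() is the helper introduced earlier in
this series. A sketch of its expected contract, inferred from the call
sites above (the exact body in the earlier patch may differ):

	/* When the guest has no vIOMMU, IOVAs are guest physical
	 * addresses, so the helper can simply fall back to the legacy
	 * GPA-to-VVA translation. Otherwise it looks the IOVA up in the
	 * IOTLB cache, checking that 'size' bytes are mapped with
	 * permission 'perm', and returns 0 on a miss or permission
	 * failure, which the callers above treat as an error.
	 */
	static __rte_always_inline uint64_t
	vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
			  uint64_t iova, uint64_t size, uint8_t perm)
	{
		if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
			return rte_vhost_gpa_to_vva(dev->mem, iova);

		return __vhost_iova_to_vva(dev, vq, iova, size, perm);
	}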