From patchwork Thu Jun 11 10:02:04 2020
X-Patchwork-Submitter: Patrick Fu
X-Patchwork-Id: 71266
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: patrick.fu@intel.com
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com,
	zhihong.wang@intel.com, xiaolong.ye@intel.com
Cc: patrick.fu@intel.com, cheng1.jiang@intel.com, cunming.liang@intel.com
Date: Thu, 11 Jun 2020 18:02:04 +0800
Message-Id: <1591869725-13331-2-git-send-email-patrick.fu@intel.com>
In-Reply-To: <1591869725-13331-1-git-send-email-patrick.fu@intel.com>
References: <1591869725-13331-1-git-send-email-patrick.fu@intel.com>
Subject: [dpdk-dev] [PATCH v1 1/2] vhost: introduce async data path
	registration API

From: Patrick Fu <patrick.fu@intel.com>

This patch introduces the registration/unregistration APIs for the async
data path, together with all required data structures and the DMA callback
function prototypes.
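For reference, here is an illustrative sketch of how an application could
implement the callbacks and attach a channel (not part of this patch; the
"my_"-prefixed names are placeholders, and the callbacks below emulate a
DMA engine with plain CPU copies, relying on the fact that this series
fills src/dst iterators with pairwise segments of equal length):

	#include <string.h>
	#include <rte_common.h>
	#include <rte_vhost_async.h>

	static unsigned int my_done_segs; /* hypothetical bookkeeping */

	static int
	my_dma_transfer(int vid __rte_unused, uint16_t queue_id __rte_unused,
			struct dma_trans_desc *descs,
			struct dma_trans_status *opaque_data __rte_unused,
			uint16_t count)
	{
		uint16_t i;
		unsigned long s;

		for (i = 0; i < count; i++) {
			struct iov_it *src = descs[i].src;
			struct iov_it *dst = descs[i].dst;

			for (s = 0; s < src->nr_segs; s++)
				memcpy(dst->iov[s].iov_base,
					src->iov[s].iov_base,
					src->iov[s].iov_len);
			my_done_segs += src->nr_segs;
		}
		return count;
	}

	static int
	my_dma_check(int vid __rte_unused, uint16_t queue_id __rte_unused,
			struct dma_trans_status *opaque_data __rte_unused,
			uint16_t max_packets __rte_unused)
	{
		/* CPU copies complete synchronously: report all segments */
		int n = my_done_segs;

		my_done_segs = 0;
		return n;
	}

	static struct rte_vhost_async_channel_ops my_ops = {
		.transfer_data = my_dma_transfer,
		.check_completed_copies = my_dma_check,
	};

	/* e.g. in the new_device() callback */
	static int
	my_new_device(int vid)
	{
		struct dma_channel_features f;

		f.intval = 0;
		f.inorder = 1;     /* b0: channel completes copies in order */
		f.threshold = 256; /* b16-b27: copies >= 256B go to DMA */

		/* attach the channel to the guest RX ring (queue 0) */
		if (rte_vhost_async_channel_register(vid, 0, f.intval,
				&my_ops) < 0)
			return -1; /* stay on the synchronous copy path */
		return 0;
	}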
Signed-off-by: Patrick Fu <patrick.fu@intel.com>
---
 lib/librte_vhost/Makefile | 3 +-
 lib/librte_vhost/rte_vhost.h | 1 +
 lib/librte_vhost/rte_vhost_async.h | 134 +++++++++++++++++++++++++++++++++++++
 lib/librte_vhost/socket.c | 20 ++++++
 lib/librte_vhost/vhost.c | 74 +++++++++++++++++++-
 lib/librte_vhost/vhost.h | 30 ++++++++-
 lib/librte_vhost/vhost_user.c | 28 ++++++--
 7 files changed, 283 insertions(+), 7 deletions(-)
 create mode 100644 lib/librte_vhost/rte_vhost_async.h

diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
index e592795..3aed094 100644
--- a/lib/librte_vhost/Makefile
+++ b/lib/librte_vhost/Makefile
@@ -41,7 +41,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := fd_man.c iotlb.c socket.c vhost.c \
 					vhost_user.c virtio_net.c vdpa.c

 # install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_vhost.h rte_vdpa.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_vhost.h rte_vdpa.h \
+						rte_vhost_async.h

 # only compile vhost crypto when cryptodev is enabled
 ifeq ($(CONFIG_RTE_LIBRTE_CRYPTODEV),y)
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index d43669f..cec4d07 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -35,6 +35,7 @@
 #define RTE_VHOST_USER_EXTBUF_SUPPORT	(1ULL << 5)
 /* support only linear buffers (no chained mbufs) */
 #define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
+#define RTE_VHOST_USER_ASYNC_COPY	(1ULL << 7)

 /** Protocol features. */
 #ifndef VHOST_USER_PROTOCOL_F_MQ
diff --git a/lib/librte_vhost/rte_vhost_async.h b/lib/librte_vhost/rte_vhost_async.h
new file mode 100644
index 0000000..82f2ebe
--- /dev/null
+++ b/lib/librte_vhost/rte_vhost_async.h
@@ -0,0 +1,134 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_VHOST_ASYNC_H_
+#define _RTE_VHOST_ASYNC_H_
+
+#include "rte_vhost.h"
+
+/**
+ * iovec iterator
+ */
+struct iov_it {
+	/** offset to the first byte of interesting data */
+	size_t offset;
+	/** total bytes of data in this iterator */
+	size_t count;
+	/** pointer to the iovec array */
+	struct iovec *iov;
+	/** number of iovec in this iterator */
+	unsigned long nr_segs;
+};
+
+/**
+ * dma transfer descriptor pair
+ */
+struct dma_trans_desc {
+	/** source memory iov_it */
+	struct iov_it *src;
+	/** destination memory iov_it */
+	struct iov_it *dst;
+};
+
+/**
+ * dma transfer status
+ */
+struct dma_trans_status {
+	/** An array of application specific data for source memory */
+	uintptr_t *src_opaque_data;
+	/** An array of application specific data for destination memory */
+	uintptr_t *dst_opaque_data;
+};
+
+/**
+ * dma operation callbacks to be implemented by applications
+ */
+struct rte_vhost_async_channel_ops {
+	/**
+	 * instruct a DMA channel to perform copies for a batch of packets
+	 *
+	 * @param vid
+	 *  id of vhost device to perform data copies
+	 * @param queue_id
+	 *  queue id to perform data copies
+	 * @param descs
+	 *  an array of DMA transfer memory descriptors
+	 * @param opaque_data
+	 *  opaque data pair to be sent to the DMA engine
+	 * @param count
+	 *  number of elements in the "descs" array
+	 * @return
+	 *  -1 on failure, number of descs processed on success
+	 */
+	int (*transfer_data)(int vid, uint16_t queue_id,
+		struct dma_trans_desc *descs,
+		struct dma_trans_status *opaque_data,
+		uint16_t count);
+	/**
+	 * check copy-completed packets from a DMA channel
+	 *
+	 * @param vid
+	 *  id of vhost device to check copy completion
+	 * @param queue_id
+	 *  queue id to check copy completion
+	 * @param opaque_data
+	 *  buffer to receive the opaque data pair from the DMA engine
+	 * @param max_packets
+	 *  max number of packets that could be completed
+	 * @return
+	 *  -1 on failure, number of iov segments completed on success
+	 */
+	int (*check_completed_copies)(int vid, uint16_t queue_id,
+		struct dma_trans_status *opaque_data,
+		uint16_t max_packets);
+};
+
+/**
+ * dma channel feature bit definition
+ */
+struct dma_channel_features {
+	union {
+		uint32_t intval;
+		struct {
+			uint32_t inorder:1;
+			uint32_t resvd0115:15;
+			uint32_t threshold:12;
+			uint32_t resvd2831:4;
+		};
+	};
+};
+
+/**
+ * register a dma channel for vhost
+ *
+ * @param vid
+ *  id of the vhost device the DMA channel is to be attached to
+ * @param queue_id
+ *  id of the vhost queue the DMA channel is to be attached to
+ * @param features
+ *  DMA channel feature bits
+ *  b0       : DMA supports in-order data transfer
+ *  b1 - b15 : reserved
+ *  b16 - b27: packet length threshold for DMA transfer
+ *  b28 - b31: reserved
+ * @param ops
+ *  DMA operation callbacks
+ * @return
+ *  0 on success, -1 on failure
+ */
+int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
+	uint32_t features, struct rte_vhost_async_channel_ops *ops);
+
+/**
+ * unregister a dma channel for vhost
+ *
+ * @param vid
+ *  id of the vhost device the DMA channel is to be detached from
+ * @param queue_id
+ *  id of the vhost queue the DMA channel is to be detached from
+ * @return
+ *  0 on success, -1 on failure
+ */
+int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id);
+
+#endif /* _RTE_VHOST_ASYNC_H_ */
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index 0a66ef9..f817783 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -42,6 +42,7 @@ struct vhost_user_socket {
 	bool use_builtin_virtio_net;
 	bool extbuf;
 	bool linearbuf;
+	bool async_copy;

 	/*
 	 * The "supported_features" indicates the feature bits the
@@ -210,6 +211,7 @@ struct vhost_user {
 	size_t size;
 	struct vhost_user_connection *conn;
 	int ret;
+	struct virtio_net *dev;

 	if (vsocket == NULL)
 		return;
@@ -241,6 +243,13 @@ struct vhost_user {
 	if (vsocket->linearbuf)
 		vhost_enable_linearbuf(vid);

+	if (vsocket->async_copy) {
+		dev = get_device(vid);
+
+		if (dev)
+			dev->async_copy = 1;
+	}
+
 	VHOST_LOG_CONFIG(INFO, "new device, handle is %d\n", vid);

 	if (vsocket->notify_ops->new_connection) {
@@ -891,6 +900,17 @@ struct vhost_user_reconnect_list {
 		goto out_mutex;
 	}

+	vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
+
+	if (vsocket->async_copy &&
+		(flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
+		RTE_VHOST_USER_POSTCOPY_SUPPORT))) {
+		VHOST_LOG_CONFIG(ERR, "error: enabling async copy and IOMMU "
+			"or post-copy feature simultaneously is not "
+			"supported\n");
+		goto out_mutex;
+	}
+
 	/*
 	 * Set the supported features correctly for the builtin vhost-user
 	 * net driver.
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 0266318..e6b688a 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -332,8 +332,13 @@
 {
 	if (vq_is_packed(dev))
 		rte_free(vq->shadow_used_packed);
-	else
+	else {
 		rte_free(vq->shadow_used_split);
+		if (vq->async_pkts_pending)
+			rte_free(vq->async_pkts_pending);
+		if (vq->async_pending_info)
+			rte_free(vq->async_pending_info);
+	}
 	rte_free(vq->batch_copy_elems);
 	rte_mempool_free(vq->iotlb_pool);
 	rte_free(vq);
@@ -1527,3 +1532,70 @@ int rte_vhost_extern_callback_register(int vid,
 	if (vhost_data_log_level >= 0)
 		rte_log_set_level(vhost_data_log_level, RTE_LOG_WARNING);
 }
+
+int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
+					uint32_t features,
+					struct rte_vhost_async_channel_ops *ops)
+{
+	struct vhost_virtqueue *vq;
+	struct virtio_net *dev = get_device(vid);
+	struct dma_channel_features f;
+
+	if (dev == NULL || ops == NULL)
+		return -1;
+
+	f.intval = features;
+
+	vq = dev->virtqueue[queue_id];
+
+	if (vq == NULL)
+		return -1;
+
+	/** packed queue is not supported */
+	if (vq_is_packed(dev) || !f.inorder)
+		return -1;
+
+	if (ops->check_completed_copies == NULL ||
+		ops->transfer_data == NULL)
+		return -1;
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	vq->async_ops.check_completed_copies = ops->check_completed_copies;
+	vq->async_ops.transfer_data = ops->transfer_data;
+
+	vq->async_inorder = f.inorder;
+	vq->async_threshold = f.threshold;
+
+	vq->async_registered = true;
+
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return 0;
+}
+
+int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id)
+{
+	struct vhost_virtqueue *vq;
+	struct virtio_net *dev = get_device(vid);
+
+	if (dev == NULL)
+		return -1;
+
+	vq = dev->virtqueue[queue_id];
+
+	if (vq == NULL)
+		return -1;
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	vq->async_ops.transfer_data = NULL;
+	vq->async_ops.check_completed_copies = NULL;
+
+	vq->async_registered = false;
+
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return 0;
+}
+
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index df98d15..a7fbe23 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -23,6 +23,8 @@
 #include "rte_vhost.h"
 #include "rte_vdpa.h"

+#include "rte_vhost_async.h"
+
 /* Used to indicate that the device is running on a data core */
 #define VIRTIO_DEV_RUNNING 1
 /* Used to indicate that the device is ready to operate */
@@ -39,6 +41,11 @@

 #define VHOST_LOG_CACHE_NR 32

+#define MAX_PKT_BURST 32
+
+#define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
+#define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
+
 #define PACKED_DESC_ENQUEUE_USED_FLAG(w)	\
 	((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED | VRING_DESC_F_WRITE) : \
 		VRING_DESC_F_WRITE)
@@ -200,6 +207,25 @@ struct vhost_virtqueue {
 	TAILQ_HEAD(, vhost_iotlb_entry) iotlb_list;
 	int				iotlb_cache_nr;
 	TAILQ_HEAD(, vhost_iotlb_entry) iotlb_pending_list;
+
+	/* operation callbacks for async dma */
+	struct rte_vhost_async_channel_ops	async_ops;
+
+	struct iov_it it_pool[VHOST_MAX_ASYNC_IT];
+	struct iovec vec_pool[VHOST_MAX_ASYNC_VEC];
+
+	/* async data transfer status */
+	uintptr_t	**async_pkts_pending;
+	#define		ASYNC_PENDING_INFO_N_MSK 0xFFFF
+	#define		ASYNC_PENDING_INFO_N_SFT 16
+	uint64_t	*async_pending_info;
+	uint16_t	async_pkts_idx;
+	uint16_t	async_pkts_inflight_n;
+
+	/* vq async features */
+	bool		async_inorder;
+	bool		async_registered;
+	uint16_t	async_threshold;
 } __rte_cache_aligned;

 /* Old kernels have no such macros defined */
@@ -353,6 +379,7 @@ struct virtio_net {
 	int16_t			broadcast_rarp;
 	uint32_t		nr_vring;
 	int			dequeue_zero_copy;
+	int			async_copy;
 	int			extbuf;
 	int			linearbuf;
 	struct vhost_virtqueue	*virtqueue[VHOST_MAX_QUEUE_PAIRS * 2];
@@ -702,7 +729,8 @@ uint64_t translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	/* Don't kick guest if we don't reach index specified by guest. */
 	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
 		uint16_t old = vq->signalled_used;
-		uint16_t new = vq->last_used_idx;
+		uint16_t new = vq->async_pkts_inflight_n ?
+			vq->used->idx : vq->last_used_idx;
 		bool signalled_used_valid = vq->signalled_used_valid;

 		vq->signalled_used = new;
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 84bebad..d7600bf 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -464,12 +464,25 @@
 	} else {
 		if (vq->shadow_used_split)
 			rte_free(vq->shadow_used_split);
+		if (vq->async_pkts_pending)
+			rte_free(vq->async_pkts_pending);
+		if (vq->async_pending_info)
+			rte_free(vq->async_pending_info);
+
 		vq->shadow_used_split = rte_malloc(NULL,
 				vq->size * sizeof(struct vring_used_elem),
 				RTE_CACHE_LINE_SIZE);
-		if (!vq->shadow_used_split) {
+		vq->async_pkts_pending = rte_malloc(NULL,
+				vq->size * sizeof(uintptr_t),
+				RTE_CACHE_LINE_SIZE);
+		vq->async_pending_info = rte_malloc(NULL,
+				vq->size * sizeof(uint64_t),
+				RTE_CACHE_LINE_SIZE);
+		if (!vq->shadow_used_split ||
+			!vq->async_pkts_pending ||
+			!vq->async_pending_info) {
 			VHOST_LOG_CONFIG(ERR,
-					"failed to allocate memory for shadow used ring.\n");
+				"failed to allocate memory for vq internal data.\n");
 			return RTE_VHOST_MSG_RESULT_ERR;
 		}
 	}
@@ -1147,7 +1160,8 @@
 		goto err_mmap;
 	}

-	populate = (dev->dequeue_zero_copy) ? MAP_POPULATE : 0;
+	populate = (dev->dequeue_zero_copy || dev->async_copy) ?
+		MAP_POPULATE : 0;
 	mmap_addr = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
 			 MAP_SHARED | populate, fd, 0);
@@ -1162,7 +1176,7 @@
 	reg->host_user_addr = (uint64_t)(uintptr_t)mmap_addr +
 			      mmap_offset;

-	if (dev->dequeue_zero_copy)
+	if (dev->dequeue_zero_copy || dev->async_copy)
 		if (add_guest_pages(dev, reg, alignment) < 0) {
 			VHOST_LOG_CONFIG(ERR,
 				"adding guest pages to region %u failed.\n",
@@ -1945,6 +1959,12 @@ static int vhost_user_set_vring_err(struct virtio_net **pdev __rte_unused,
 	} else {
 		rte_free(vq->shadow_used_split);
 		vq->shadow_used_split = NULL;
+		if (vq->async_pkts_pending)
+			rte_free(vq->async_pkts_pending);
+		if (vq->async_pending_info)
+			rte_free(vq->async_pending_info);
+		vq->async_pkts_pending = NULL;
+		vq->async_pending_info = NULL;
 	}

 	rte_free(vq->batch_copy_elems);

From patchwork Thu Jun 11 10:02:05 2020
X-Patchwork-Submitter: Patrick Fu
X-Patchwork-Id: 71267
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: patrick.fu@intel.com
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com,
	zhihong.wang@intel.com, xiaolong.ye@intel.com
Cc: patrick.fu@intel.com, cheng1.jiang@intel.com, cunming.liang@intel.com
Date: Thu, 11 Jun 2020 18:02:05 +0800
Message-Id: <1591869725-13331-3-git-send-email-patrick.fu@intel.com>
In-Reply-To: <1591869725-13331-1-git-send-email-patrick.fu@intel.com>
References: <1591869725-13331-1-git-send-email-patrick.fu@intel.com>
Subject: [dpdk-dev] [PATCH v1 2/2] vhost: introduce async enqueue for
	split ring

From: Patrick Fu <patrick.fu@intel.com>

This patch implements the async enqueue data path for split rings.
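An illustrative forwarding-loop sketch on top of the two new APIs follows
(not part of this patch; port_id and vid are placeholders, queue 0 is the
guest RX ring, and error handling is omitted). Note that mbufs handed to
rte_vhost_submit_enqueue_burst() stay with vhost until they are returned
by the completion poll, so the application frees them only at that point:

	uint16_t nb_rx, nb_enq, nb_done;
	struct rte_mbuf *pkts[MAX_PKT_BURST];
	struct rte_mbuf *done[MAX_PKT_BURST];

	/* receive from a NIC port and submit to the vhost DMA data path */
	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, MAX_PKT_BURST);
	nb_enq = rte_vhost_submit_enqueue_burst(vid, 0, pkts, nb_rx);

	/* packets the backend did not accept remain owned by the caller */
	if (nb_enq < nb_rx)
		rte_pktmbuf_free_bulk(&pkts[nb_enq], nb_rx - nb_enq);

	/* reclaim and free packets whose copies have completed */
	nb_done = rte_vhost_poll_enqueue_completed(vid, 0, done,
			MAX_PKT_BURST);
	rte_pktmbuf_free_bulk(done, nb_done);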
Signed-off-by: Patrick Fu <patrick.fu@intel.com>
---
 lib/librte_vhost/rte_vhost_async.h | 38 +++
 lib/librte_vhost/virtio_net.c | 538 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 574 insertions(+), 2 deletions(-)

diff --git a/lib/librte_vhost/rte_vhost_async.h b/lib/librte_vhost/rte_vhost_async.h
index 82f2ebe..efcba0a 100644
--- a/lib/librte_vhost/rte_vhost_async.h
+++ b/lib/librte_vhost/rte_vhost_async.h
@@ -131,4 +131,42 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
  */
 int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id);

+/**
+ * This function submits enqueue data to the DMA engine. Transfer
+ * completion is not guaranteed upon return; applications should poll
+ * for completion status with rte_vhost_poll_enqueue_completed().
+ *
+ * @param vid
+ *  id of vhost device to enqueue data
+ * @param queue_id
+ *  queue id to enqueue data
+ * @param pkts
+ *  array of packets to be enqueued
+ * @param count
+ *  number of packets to be enqueued
+ * @return
+ *  number of packets enqueued
+ */
+uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count);
+
+/**
+ * This function checks the DMA completion status for a specific vhost
+ * device queue. Packets which have finished the copy (enqueue)
+ * operation are returned in an array.
+ *
+ * @param vid
+ *  id of vhost device to poll for completions
+ * @param queue_id
+ *  queue id to poll for completions
+ * @param pkts
+ *  blank array to receive the returned packet pointers
+ * @param count
+ *  size of the packet array
+ * @return
+ *  number of packets returned
+ */
+uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count);
+
 #endif /* _RTE_VHOST_ASYNC_H_ */
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 751c1f3..cf9f884 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -17,14 +17,15 @@
 #include 
 #include 
 #include 
+#include 

 #include "iotlb.h"
 #include "vhost.h"

-#define MAX_PKT_BURST 32
-
 #define MAX_BATCH_LEN 256

+#define VHOST_ASYNC_BATCH_THRESHOLD 8
+
 static __rte_always_inline bool
 rxvq_is_mergeable(struct virtio_net *dev)
 {
@@ -117,6 +118,35 @@
 }

 static __rte_always_inline void
+async_flush_shadow_used_ring_split(struct virtio_net *dev,
+	struct vhost_virtqueue *vq)
+{
+	uint16_t used_idx = vq->last_used_idx & (vq->size - 1);
+
+	if (used_idx + vq->shadow_used_idx <= vq->size) {
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0,
+				vq->shadow_used_idx);
+	} else {
+		uint16_t size;
+
+		/* update used ring interval [used_idx, vq->size] */
+		size = vq->size - used_idx;
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0, size);
+
+		/* update the left half used ring interval [0, left_size] */
+		do_flush_shadow_used_ring_split(dev, vq, 0, size,
+				vq->shadow_used_idx - size);
+	}
+	vq->last_used_idx += vq->shadow_used_idx;
+
+	rte_smp_wmb();
+
+	vhost_log_cache_sync(dev, vq);
+
+	vq->shadow_used_idx = 0;
+}
+
+static __rte_always_inline void
 update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 			uint16_t desc_idx, uint32_t len)
 {
@@ -905,6 +935,199 @@
 	return error;
 }

+static __rte_always_inline void
+async_fill_vec(struct iovec *v, void *base, size_t len)
+{
+	v->iov_base = base;
+	v->iov_len = len;
+}
+
+static __rte_always_inline void
+async_fill_it(struct iov_it *it, size_t count,
+	struct iovec *vec, unsigned long nr_seg)
+{
+	it->offset = 0;
+	it->count = count;
+
+	if (count) {
+		it->iov = vec;
+		it->nr_segs = nr_seg;
+	} else {
+		it->iov = 0;
+		it->nr_segs = 0;
+	}
+}
+
+static __rte_always_inline void
+async_fill_des(struct dma_trans_desc *desc,
+	struct iov_it *src, struct iov_it *dst)
+{
+	desc->src = src;
+	desc->dst = dst;
+}
+
+static __rte_always_inline int
+async_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
+			struct rte_mbuf *m, struct buf_vector *buf_vec,
+			uint16_t nr_vec, uint16_t num_buffers,
+			struct iovec *src_iovec, struct iovec *dst_iovec,
+			struct iov_it *src_it, struct iov_it *dst_it)
+{
+	uint32_t vec_idx = 0;
+	uint32_t mbuf_offset, mbuf_avail;
+	uint32_t buf_offset, buf_avail;
+	uint64_t buf_addr, buf_iova, buf_len;
+	uint32_t cpy_len, cpy_threshold;
+	uint64_t hdr_addr;
+	struct rte_mbuf *hdr_mbuf;
+	struct batch_copy_elem *batch_copy = vq->batch_copy_elems;
+	struct virtio_net_hdr_mrg_rxbuf tmp_hdr, *hdr = NULL;
+	int error = 0;
+
+	uint32_t tlen = 0;
+	int tvec_idx = 0;
+	void *hpa;
+
+	if (unlikely(m == NULL)) {
+		error = -1;
+		goto out;
+	}
+
+	cpy_threshold = vq->async_threshold;
+
+	buf_addr = buf_vec[vec_idx].buf_addr;
+	buf_iova = buf_vec[vec_idx].buf_iova;
+	buf_len = buf_vec[vec_idx].buf_len;
+
+	if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) {
+		error = -1;
+		goto out;
+	}
+
+	hdr_mbuf = m;
+	hdr_addr = buf_addr;
+	if (unlikely(buf_len < dev->vhost_hlen))
+		hdr = &tmp_hdr;
+	else
+		hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)hdr_addr;
+
+	VHOST_LOG_DATA(DEBUG, "(%d) RX: num merge buffers %d\n",
+		dev->vid, num_buffers);
+
+	if (unlikely(buf_len < dev->vhost_hlen)) {
+		buf_offset = dev->vhost_hlen - buf_len;
+		vec_idx++;
+		buf_addr = buf_vec[vec_idx].buf_addr;
+		buf_iova = buf_vec[vec_idx].buf_iova;
+		buf_len = buf_vec[vec_idx].buf_len;
+		buf_avail = buf_len - buf_offset;
+	} else {
+		buf_offset = dev->vhost_hlen;
+		buf_avail = buf_len - dev->vhost_hlen;
+	}
+
+	mbuf_avail = rte_pktmbuf_data_len(m);
+	mbuf_offset = 0;
+
+	while (mbuf_avail != 0 || m->next != NULL) {
+		/* done with current buf, get the next one */
+		if (buf_avail == 0) {
+			vec_idx++;
+			if (unlikely(vec_idx >= nr_vec)) {
+				error = -1;
+				goto out;
+			}
+
+			buf_addr = buf_vec[vec_idx].buf_addr;
+			buf_iova = buf_vec[vec_idx].buf_iova;
+			buf_len = buf_vec[vec_idx].buf_len;
+
+			buf_offset = 0;
+			buf_avail = buf_len;
+		}
+
+		/* done with current mbuf, get the next one */
+		if (mbuf_avail == 0) {
+			m = m->next;
+
+			mbuf_offset = 0;
+			mbuf_avail = rte_pktmbuf_data_len(m);
+		}
+
+		if (hdr_addr) {
+			virtio_enqueue_offload(hdr_mbuf, &hdr->hdr);
+			if (rxvq_is_mergeable(dev))
+				ASSIGN_UNLESS_EQUAL(hdr->num_buffers,
+						num_buffers);
+
+			if (unlikely(hdr == &tmp_hdr)) {
+				copy_vnet_hdr_to_desc(dev, vq, buf_vec, hdr);
+			} else {
+				PRINT_PACKET(dev, (uintptr_t)hdr_addr,
+						dev->vhost_hlen, 0);
+				vhost_log_cache_write_iova(dev, vq,
+						buf_vec[0].buf_iova,
+						dev->vhost_hlen);
+			}
+
+			hdr_addr = 0;
+		}
+
+		cpy_len = RTE_MIN(buf_avail, mbuf_avail);
+
+		if (unlikely(cpy_len >= cpy_threshold)) {
+			hpa = (void *)(uintptr_t)gpa_to_hpa(dev,
+					buf_iova + buf_offset, cpy_len);
+
+			if (unlikely(!hpa)) {
+				error = -1;
+				goto out;
+			}
+
+			async_fill_vec(src_iovec + tvec_idx,
+				(void *)(uintptr_t)rte_pktmbuf_iova_offset(m,
+						mbuf_offset), cpy_len);
+
+			async_fill_vec(dst_iovec + tvec_idx, hpa, cpy_len);
+
+			tlen += cpy_len;
+			tvec_idx++;
+		} else {
+			if (unlikely(vq->batch_copy_nb_elems >= vq->size)) {
+				rte_memcpy(
+				(void *)((uintptr_t)(buf_addr + buf_offset)),
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
+				cpy_len);
+
+				PRINT_PACKET(dev,
+					(uintptr_t)(buf_addr + buf_offset),
+					cpy_len, 0);
+			} else {
+				batch_copy[vq->batch_copy_nb_elems].dst =
+				(void *)((uintptr_t)(buf_addr + buf_offset));
+				batch_copy[vq->batch_copy_nb_elems].src =
+				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+				batch_copy[vq->batch_copy_nb_elems].log_addr =
+					buf_iova + buf_offset;
+				batch_copy[vq->batch_copy_nb_elems].len =
+					cpy_len;
+				vq->batch_copy_nb_elems++;
+			}
+		}
+
+		mbuf_avail -= cpy_len;
+		mbuf_offset += cpy_len;
+		buf_avail -= cpy_len;
+		buf_offset += cpy_len;
+	}
+
+out:
+	async_fill_it(src_it, tlen, src_iovec, tvec_idx);
+	async_fill_it(dst_it, tlen, dst_iovec, tvec_idx);
+
+	return error;
+}
+
 static __rte_always_inline int
 vhost_enqueue_single_packed(struct virtio_net *dev,
 			struct vhost_virtqueue *vq,
@@ -1236,6 +1459,317 @@
 	return virtio_dev_rx(dev, queue_id, pkts, count);
 }

+static __rte_always_inline void
+virtio_dev_rx_async_submit_split_err(struct virtio_net *dev,
+	struct vhost_virtqueue *vq, uint16_t queue_id,
+	uint16_t last_idx, uint16_t shadow_idx)
+{
+	while (vq->async_pkts_inflight_n) {
+		int er = vq->async_ops.check_completed_copies(dev->vid,
+			queue_id, 0, MAX_PKT_BURST);
+
+		if (er < 0) {
+			vq->async_pkts_inflight_n = 0;
+			break;
+		}
+
+		vq->async_pkts_inflight_n -= er;
+	}
+
+	vq->shadow_used_idx = shadow_idx;
+	vq->last_avail_idx = last_idx;
+}
+
+static __rte_noinline uint32_t
+virtio_dev_rx_async_submit_split(struct virtio_net *dev,
+	struct vhost_virtqueue *vq, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint32_t count)
+{
+	uint32_t pkt_idx = 0, pkt_burst_idx = 0;
+	uint16_t num_buffers;
+	struct buf_vector buf_vec[BUF_VECTOR_MAX];
+	uint16_t avail_head, last_idx, shadow_idx;
+
+	struct iov_it *it_pool = vq->it_pool;
+	struct iovec *vec_pool = vq->vec_pool;
+	struct dma_trans_desc tdes[MAX_PKT_BURST];
+	struct iovec *src_iovec = vec_pool;
+	struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
+	struct iov_it *src_it = it_pool;
+	struct iov_it *dst_it = it_pool + 1;
+	uint16_t n_free_slot, slot_idx;
+	int n_pkts = 0;
+
+	avail_head = *((volatile uint16_t *)&vq->avail->idx);
+	last_idx = vq->last_avail_idx;
+	shadow_idx = vq->shadow_used_idx;
+
+	/*
+	 * The ordering between avail index and
+	 * desc reads needs to be enforced.
+	 */
+	rte_smp_rmb();
+
+	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
+
+	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
+		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
+		uint16_t nr_vec = 0;
+
+		if (unlikely(reserve_avail_buf_split(dev, vq,
+						pkt_len, buf_vec, &num_buffers,
+						avail_head, &nr_vec) < 0)) {
+			VHOST_LOG_DATA(DEBUG,
+				"(%d) failed to get enough desc from vring\n",
+				dev->vid);
+			vq->shadow_used_idx -= num_buffers;
+			break;
+		}
+
+		VHOST_LOG_DATA(DEBUG, "(%d) current index %d | end index %d\n",
+			dev->vid, vq->last_avail_idx,
+			vq->last_avail_idx + num_buffers);
+
+		if (async_mbuf_to_desc(dev, vq, pkts[pkt_idx],
+				buf_vec, nr_vec, num_buffers,
+				src_iovec, dst_iovec, src_it, dst_it) < 0) {
+			vq->shadow_used_idx -= num_buffers;
+			break;
+		}
+
+		slot_idx = (vq->async_pkts_idx + pkt_idx) & (vq->size - 1);
+		if (src_it->count) {
+			async_fill_des(&tdes[pkt_burst_idx], src_it, dst_it);
+			pkt_burst_idx++;
+			vq->async_pending_info[slot_idx] =
+				num_buffers | (src_it->nr_segs << 16);
+			src_iovec += src_it->nr_segs;
+			dst_iovec += dst_it->nr_segs;
+			src_it += 2;
+			dst_it += 2;
+		} else {
+			vq->async_pending_info[slot_idx] = num_buffers;
+			vq->async_pkts_inflight_n++;
+		}
+
+		vq->last_avail_idx += num_buffers;
+
+		if (pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
+				(pkt_idx == count - 1 && pkt_burst_idx)) {
+			n_pkts = vq->async_ops.transfer_data(dev->vid,
+					queue_id, tdes, 0, pkt_burst_idx);
+			src_iovec = vec_pool;
+			dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
+			src_it = it_pool;
+			dst_it = it_pool + 1;
+
+			if (unlikely(n_pkts < (int)pkt_burst_idx)) {
+				vq->async_pkts_inflight_n +=
+					n_pkts > 0 ? n_pkts : 0;
+				virtio_dev_rx_async_submit_split_err(dev,
+					vq, queue_id, last_idx, shadow_idx);
+				return 0;
+			}
+
+			pkt_burst_idx = 0;
+			vq->async_pkts_inflight_n += n_pkts;
+		}
+	}
+
+	if (pkt_burst_idx) {
+		n_pkts = vq->async_ops.transfer_data(dev->vid,
+				queue_id, tdes, 0, pkt_burst_idx);
+		if (unlikely(n_pkts < (int)pkt_burst_idx)) {
+			vq->async_pkts_inflight_n += n_pkts > 0 ? n_pkts : 0;
+			virtio_dev_rx_async_submit_split_err(dev, vq, queue_id,
+				last_idx, shadow_idx);
+			return 0;
+		}
+
+		vq->async_pkts_inflight_n += n_pkts;
+	}
+
+	do_data_copy_enqueue(dev, vq);
+
+	n_free_slot = vq->size - vq->async_pkts_idx;
+	if (n_free_slot > pkt_idx) {
+		rte_memcpy(&vq->async_pkts_pending[vq->async_pkts_idx],
+			pkts, pkt_idx * sizeof(uintptr_t));
+		vq->async_pkts_idx += pkt_idx;
+	} else {
+		rte_memcpy(&vq->async_pkts_pending[vq->async_pkts_idx],
+			pkts, n_free_slot * sizeof(uintptr_t));
+		rte_memcpy(&vq->async_pkts_pending[0],
+			&pkts[n_free_slot],
+			(pkt_idx - n_free_slot) * sizeof(uintptr_t));
+		vq->async_pkts_idx = pkt_idx - n_free_slot;
+	}
+
+	if (likely(vq->shadow_used_idx))
+		async_flush_shadow_used_ring_split(dev, vq);
+
+	return pkt_idx;
+}
+
+uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	uint16_t n_pkts_cpl, n_pkts_put = 0, n_descs = 0;
+	uint16_t start_idx, pkts_idx, vq_size;
+	uint64_t *async_pending_info;
+
+	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	pkts_idx = vq->async_pkts_idx;
+	async_pending_info = vq->async_pending_info;
+	vq_size = vq->size;
+	start_idx = pkts_idx > vq->async_pkts_inflight_n ?
+		pkts_idx - vq->async_pkts_inflight_n :
+		(vq_size - vq->async_pkts_inflight_n + pkts_idx) &
+		(vq_size - 1);
+
+	n_pkts_cpl =
+		vq->async_ops.check_completed_copies(vid, queue_id, 0, count);
+
+	rte_smp_wmb();
+
+	while (likely(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx)) {
+		uint64_t info = async_pending_info[
+			(start_idx + n_pkts_put) & (vq_size - 1)];
+		uint64_t n_segs;
+		n_pkts_put++;
+		n_descs += info & ASYNC_PENDING_INFO_N_MSK;
+		n_segs = info >> ASYNC_PENDING_INFO_N_SFT;
+
+		if (n_segs) {
+			if (!n_pkts_cpl || n_pkts_cpl < n_segs) {
+				n_pkts_put--;
+				n_descs -= info & ASYNC_PENDING_INFO_N_MSK;
+				if (n_pkts_cpl) {
+					async_pending_info[
+						(start_idx + n_pkts_put) &
+						(vq_size - 1)] =
+					((n_segs - n_pkts_cpl) <<
+					 ASYNC_PENDING_INFO_N_SFT) |
+					(info & ASYNC_PENDING_INFO_N_MSK);
+					n_pkts_cpl = 0;
+				}
+				break;
+			}
+			n_pkts_cpl -= n_segs;
+		}
+	}
+
+	if (n_pkts_put) {
+		vq->async_pkts_inflight_n -= n_pkts_put;
+		*(volatile uint16_t *)&vq->used->idx += n_descs;
+
+		vhost_vring_call_split(dev, vq);
+	}
+
+	if (start_idx + n_pkts_put <= vq_size) {
+		rte_memcpy(pkts, &vq->async_pkts_pending[start_idx],
+			n_pkts_put * sizeof(uintptr_t));
+	} else {
+		rte_memcpy(pkts, &vq->async_pkts_pending[start_idx],
+			(vq_size - start_idx) * sizeof(uintptr_t));
+		rte_memcpy(&pkts[vq_size - start_idx], vq->async_pkts_pending,
+			(n_pkts_put - vq_size + start_idx) * sizeof(uintptr_t));
+	}
+
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return n_pkts_put;
+}
+
+static __rte_always_inline uint32_t
+virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint32_t count)
+{
+	struct vhost_virtqueue *vq;
+	uint32_t nb_tx = 0;
+	bool drawback = false;
+
+	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	if (unlikely(vq->enabled == 0))
+		goto out_access_unlock;
+
+	if (unlikely(!vq->async_registered)) {
+		drawback = true;
+		goto out_access_unlock;
+	}
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
+	if (unlikely(vq->access_ok == 0))
+		if (unlikely(vring_translate(dev, vq) < 0))
+			goto out;
+
+	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
+	if (count == 0)
+		goto out;
+
+	/* TODO: packed queue not implemented */
+	if (vq_is_packed(dev))
+		nb_tx = 0;
+	else
+		nb_tx = virtio_dev_rx_async_submit_split(dev,
+				vq, queue_id, pkts, count);
+
+out:
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_unlock(vq);
+
+out_access_unlock:
+	rte_spinlock_unlock(&vq->access_lock);
+
+	if (drawback)
+		return rte_vhost_enqueue_burst(dev->vid, queue_id, pkts, count);
+
+	return nb_tx;
+}
+
+uint16_t
+rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtio_net *dev = get_device(vid);
+
+	if (!dev)
+		return 0;
+
+	if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
+		VHOST_LOG_DATA(ERR,
+			"(%d) %s: built-in vhost net backend is disabled.\n",
+			dev->vid, __func__);
+		return 0;
+	}
+
+	return virtio_dev_rx_async_submit(dev, queue_id, pkts, count);
+}
+
 static inline bool
 virtio_net_with_host_offload(struct virtio_net *dev)
 {