List patch comments

GET /api/patches/73361/comments/?format=api&order=id
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Link: 
<https://patches.dpdk.org/api/patches/73361/comments/?format=api&order=id&page=1>; rel="first",
<https://patches.dpdk.org/api/patches/73361/comments/?format=api&order=id&page=1>; rel="last"
Vary: Accept
[ { "id": 115326, "web_url": "https://patches.dpdk.org/comment/115326/", "msgid": "<MN2PR11MB4063D0F6D9CB94F805AB1A4B9C660@MN2PR11MB4063.namprd11.prod.outlook.com>", "list_archive_url": "https://inbox.dpdk.org/dev/MN2PR11MB4063D0F6D9CB94F805AB1A4B9C660@MN2PR11MB4063.namprd11.prod.outlook.com", "date": "2020-07-07T08:22:19", "subject": "Re: [dpdk-dev] [PATCH v6 2/2] vhost: introduce async enqueue for\n\tsplit ring", "submitter": { "id": 1276, "url": "https://patches.dpdk.org/api/people/1276/?format=api", "name": "Chenbo Xia", "email": "chenbo.xia@intel.com" }, "content": "> -----Original Message-----\n> From: Fu, Patrick <patrick.fu@intel.com>\n> Sent: Tuesday, July 7, 2020 1:07 PM\n> To: dev@dpdk.org; maxime.coquelin@redhat.com; Xia, Chenbo\n> <chenbo.xia@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>\n> Cc: Fu, Patrick <patrick.fu@intel.com>; Wang, Yinan <yinan.wang@intel.com>;\n> Jiang, Cheng1 <cheng1.jiang@intel.com>; Liang, Cunming\n> <cunming.liang@intel.com>\n> Subject: [PATCH v6 2/2] vhost: introduce async enqueue for split ring\n> \n> From: Patrick Fu <patrick.fu@intel.com>\n> \n> This patch implements async enqueue data path for split ring. 2 new async data\n> path APIs are defined, by which applications can submit and poll packets to/from\n> async engines. The async engine is either a physical DMA device or it could also\n> be a software emulated backend.\n> The async enqueue data path leverages callback functions registered by\n> applications to work with the async engine.\n> \n> Signed-off-by: Patrick Fu <patrick.fu@intel.com>\n> ---\n> lib/librte_vhost/rte_vhost_async.h | 40 +++\n> lib/librte_vhost/virtio_net.c | 551 ++++++++++++++++++++++++++++-\n> 2 files changed, 589 insertions(+), 2 deletions(-)\n> \n> diff --git a/lib/librte_vhost/rte_vhost_async.h\n> b/lib/librte_vhost/rte_vhost_async.h\n> index d5a59279a..c8ad8dbc7 100644\n> --- a/lib/librte_vhost/rte_vhost_async.h\n> +++ b/lib/librte_vhost/rte_vhost_async.h\n> @@ -133,4 +133,44 @@ int rte_vhost_async_channel_register(int vid, uint16_t\n> queue_id, __rte_experimental int rte_vhost_async_channel_unregister(int vid,\n> uint16_t queue_id);\n> \n> +/**\n> + * This function submit enqueue data to async engine. This function has\n> + * no guranttee to the transfer completion upon return. Applications\n> + * should poll transfer status by rte_vhost_poll_enqueue_completed()\n> + *\n> + * @param vid\n> + * id of vhost device to enqueue data\n> + * @param queue_id\n> + * queue id to enqueue data\n> + * @param pkts\n> + * array of packets to be enqueued\n> + * @param count\n> + * packets num to be enqueued\n> + * @return\n> + * num of packets enqueued\n> + */\n> +__rte_experimental\n> +uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,\n> +\t\tstruct rte_mbuf **pkts, uint16_t count);\n> +\n> +/**\n> + * This function check async completion status for a specific vhost\n> + * device queue. Packets which finish copying (enqueue) operation\n> + * will be returned in an array.\n> + *\n> + * @param vid\n> + * id of vhost device to enqueue data\n> + * @param queue_id\n> + * queue id to enqueue data\n> + * @param pkts\n> + * blank array to get return packet pointer\n> + * @param count\n> + * size of the packet array\n> + * @return\n> + * num of packets returned\n> + */\n> +__rte_experimental\n> +uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,\n> +\t\tstruct rte_mbuf **pkts, uint16_t count);\n> +\n> #endif /* _RTE_VHOST_ASYNC_H_ */\n> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index\n> 751c1f373..236498f71 100644\n> --- a/lib/librte_vhost/virtio_net.c\n> +++ b/lib/librte_vhost/virtio_net.c\n> @@ -17,14 +17,15 @@\n> #include <rte_arp.h>\n> #include <rte_spinlock.h>\n> #include <rte_malloc.h>\n> +#include <rte_vhost_async.h>\n> \n> #include \"iotlb.h\"\n> #include \"vhost.h\"\n> \n> -#define MAX_PKT_BURST 32\n> -\n> #define MAX_BATCH_LEN 256\n> \n> +#define VHOST_ASYNC_BATCH_THRESHOLD 32\n> +\n> static __rte_always_inline bool\n> rxvq_is_mergeable(struct virtio_net *dev) { @@ -116,6 +117,31 @@\n> flush_shadow_used_ring_split(struct virtio_net *dev, struct vhost_virtqueue *vq)\n> \t\tsizeof(vq->used->idx));\n> }\n> \n> +static __rte_always_inline void\n> +async_flush_shadow_used_ring_split(struct virtio_net *dev,\n> +\tstruct vhost_virtqueue *vq)\n> +{\n> +\tuint16_t used_idx = vq->last_used_idx & (vq->size - 1);\n> +\n> +\tif (used_idx + vq->shadow_used_idx <= vq->size) {\n> +\t\tdo_flush_shadow_used_ring_split(dev, vq, used_idx, 0,\n> +\t\t\t\t\t vq->shadow_used_idx);\n> +\t} else {\n> +\t\tuint16_t size;\n> +\n> +\t\t/* update used ring interval [used_idx, vq->size] */\n> +\t\tsize = vq->size - used_idx;\n> +\t\tdo_flush_shadow_used_ring_split(dev, vq, used_idx, 0, size);\n> +\n> +\t\t/* update the left half used ring interval [0, left_size] */\n> +\t\tdo_flush_shadow_used_ring_split(dev, vq, 0, size,\n> +\t\t\t\t\t vq->shadow_used_idx - size);\n> +\t}\n> +\n> +\tvq->last_used_idx += vq->shadow_used_idx;\n> +\tvq->shadow_used_idx = 0;\n> +}\n> +\n> static __rte_always_inline void\n> update_shadow_used_ring_split(struct vhost_virtqueue *vq,\n> \t\t\t uint16_t desc_idx, uint32_t len)\n> @@ -905,6 +931,200 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct\n> vhost_virtqueue *vq,\n> \treturn error;\n> }\n> \n> +static __rte_always_inline void\n> +async_fill_vec(struct iovec *v, void *base, size_t len) {\n> +\tv->iov_base = base;\n> +\tv->iov_len = len;\n> +}\n> +\n> +static __rte_always_inline void\n> +async_fill_iter(struct rte_vhost_iov_iter *it, size_t count,\n> +\tstruct iovec *vec, unsigned long nr_seg) {\n> +\tit->offset = 0;\n> +\tit->count = count;\n> +\n> +\tif (count) {\n> +\t\tit->iov = vec;\n> +\t\tit->nr_segs = nr_seg;\n> +\t} else {\n> +\t\tit->iov = 0;\n> +\t\tit->nr_segs = 0;\n> +\t}\n> +}\n> +\n> +static __rte_always_inline void\n> +async_fill_desc(struct rte_vhost_async_desc *desc,\n> +\tstruct rte_vhost_iov_iter *src, struct rte_vhost_iov_iter *dst) {\n> +\tdesc->src = src;\n> +\tdesc->dst = dst;\n> +}\n> +\n> +static __rte_always_inline int\n> +async_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,\n> +\t\t\tstruct rte_mbuf *m, struct buf_vector *buf_vec,\n> +\t\t\tuint16_t nr_vec, uint16_t num_buffers,\n> +\t\t\tstruct iovec *src_iovec, struct iovec *dst_iovec,\n> +\t\t\tstruct rte_vhost_iov_iter *src_it,\n> +\t\t\tstruct rte_vhost_iov_iter *dst_it)\n> +{\n> +\tuint32_t vec_idx = 0;\n> +\tuint32_t mbuf_offset, mbuf_avail;\n> +\tuint32_t buf_offset, buf_avail;\n> +\tuint64_t buf_addr, buf_iova, buf_len;\n> +\tuint32_t cpy_len, cpy_threshold;\n> +\tuint64_t hdr_addr;\n> +\tstruct rte_mbuf *hdr_mbuf;\n> +\tstruct batch_copy_elem *batch_copy = vq->batch_copy_elems;\n> +\tstruct virtio_net_hdr_mrg_rxbuf tmp_hdr, *hdr = NULL;\n> +\tint error = 0;\n> +\n> +\tuint32_t tlen = 0;\n> +\tint tvec_idx = 0;\n> +\tvoid *hpa;\n> +\n> +\tif (unlikely(m == NULL)) {\n> +\t\terror = -1;\n> +\t\tgoto out;\n> +\t}\n> +\n> +\tcpy_threshold = vq->async_threshold;\n> +\n> +\tbuf_addr = buf_vec[vec_idx].buf_addr;\n> +\tbuf_iova = buf_vec[vec_idx].buf_iova;\n> +\tbuf_len = buf_vec[vec_idx].buf_len;\n> +\n> +\tif (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1)) {\n> +\t\terror = -1;\n> +\t\tgoto out;\n> +\t}\n> +\n> +\thdr_mbuf = m;\n> +\thdr_addr = buf_addr;\n> +\tif (unlikely(buf_len < dev->vhost_hlen))\n> +\t\thdr = &tmp_hdr;\n> +\telse\n> +\t\thdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)hdr_addr;\n> +\n> +\tVHOST_LOG_DATA(DEBUG, \"(%d) RX: num merge buffers %d\\n\",\n> +\t\tdev->vid, num_buffers);\n> +\n> +\tif (unlikely(buf_len < dev->vhost_hlen)) {\n> +\t\tbuf_offset = dev->vhost_hlen - buf_len;\n> +\t\tvec_idx++;\n> +\t\tbuf_addr = buf_vec[vec_idx].buf_addr;\n> +\t\tbuf_iova = buf_vec[vec_idx].buf_iova;\n> +\t\tbuf_len = buf_vec[vec_idx].buf_len;\n> +\t\tbuf_avail = buf_len - buf_offset;\n> +\t} else {\n> +\t\tbuf_offset = dev->vhost_hlen;\n> +\t\tbuf_avail = buf_len - dev->vhost_hlen;\n> +\t}\n> +\n> +\tmbuf_avail = rte_pktmbuf_data_len(m);\n> +\tmbuf_offset = 0;\n> +\n> +\twhile (mbuf_avail != 0 || m->next != NULL) {\n> +\t\t/* done with current buf, get the next one */\n> +\t\tif (buf_avail == 0) {\n> +\t\t\tvec_idx++;\n> +\t\t\tif (unlikely(vec_idx >= nr_vec)) {\n> +\t\t\t\terror = -1;\n> +\t\t\t\tgoto out;\n> +\t\t\t}\n> +\n> +\t\t\tbuf_addr = buf_vec[vec_idx].buf_addr;\n> +\t\t\tbuf_iova = buf_vec[vec_idx].buf_iova;\n> +\t\t\tbuf_len = buf_vec[vec_idx].buf_len;\n> +\n> +\t\t\tbuf_offset = 0;\n> +\t\t\tbuf_avail = buf_len;\n> +\t\t}\n> +\n> +\t\t/* done with current mbuf, get the next one */\n> +\t\tif (mbuf_avail == 0) {\n> +\t\t\tm = m->next;\n> +\n> +\t\t\tmbuf_offset = 0;\n> +\t\t\tmbuf_avail = rte_pktmbuf_data_len(m);\n> +\t\t}\n> +\n> +\t\tif (hdr_addr) {\n> +\t\t\tvirtio_enqueue_offload(hdr_mbuf, &hdr->hdr);\n> +\t\t\tif (rxvq_is_mergeable(dev))\n> +\t\t\t\tASSIGN_UNLESS_EQUAL(hdr->num_buffers,\n> +\t\t\t\t\t\tnum_buffers);\n> +\n> +\t\t\tif (unlikely(hdr == &tmp_hdr)) {\n> +\t\t\t\tcopy_vnet_hdr_to_desc(dev, vq, buf_vec, hdr);\n> +\t\t\t} else {\n> +\t\t\t\tPRINT_PACKET(dev, (uintptr_t)hdr_addr,\n> +\t\t\t\t\t\tdev->vhost_hlen, 0);\n> +\t\t\t\tvhost_log_cache_write_iova(dev, vq,\n> +\t\t\t\t\t\tbuf_vec[0].buf_iova,\n> +\t\t\t\t\t\tdev->vhost_hlen);\n> +\t\t\t}\n> +\n> +\t\t\thdr_addr = 0;\n> +\t\t}\n> +\n> +\t\tcpy_len = RTE_MIN(buf_avail, mbuf_avail);\n> +\n> +\t\tif (unlikely(cpy_len >= cpy_threshold)) {\n> +\t\t\thpa = (void *)(uintptr_t)gpa_to_hpa(dev,\n> +\t\t\t\t\tbuf_iova + buf_offset, cpy_len);\n> +\n> +\t\t\tif (unlikely(!hpa)) {\n> +\t\t\t\terror = -1;\n> +\t\t\t\tgoto out;\n> +\t\t\t}\n> +\n> +\t\t\tasync_fill_vec(src_iovec + tvec_idx,\n> +\t\t\t\t(void *)(uintptr_t)rte_pktmbuf_iova_offset(m,\n> +\t\t\t\t\t\tmbuf_offset), cpy_len);\n> +\n> +\t\t\tasync_fill_vec(dst_iovec + tvec_idx, hpa, cpy_len);\n> +\n> +\t\t\ttlen += cpy_len;\n> +\t\t\ttvec_idx++;\n> +\t\t} else {\n> +\t\t\tif (unlikely(vq->batch_copy_nb_elems >= vq->size)) {\n> +\t\t\t\trte_memcpy(\n> +\t\t\t\t(void *)((uintptr_t)(buf_addr + buf_offset)),\n> +\t\t\t\trte_pktmbuf_mtod_offset(m, void *,\n> mbuf_offset),\n> +\t\t\t\tcpy_len);\n> +\n> +\t\t\t\tPRINT_PACKET(dev,\n> +\t\t\t\t\t(uintptr_t)(buf_addr + buf_offset),\n> +\t\t\t\t\tcpy_len, 0);\n> +\t\t\t} else {\n> +\t\t\t\tbatch_copy[vq->batch_copy_nb_elems].dst =\n> +\t\t\t\t(void *)((uintptr_t)(buf_addr + buf_offset));\n> +\t\t\t\tbatch_copy[vq->batch_copy_nb_elems].src =\n> +\t\t\t\trte_pktmbuf_mtod_offset(m, void *,\n> mbuf_offset);\n> +\t\t\t\tbatch_copy[vq-\n> >batch_copy_nb_elems].log_addr =\n> +\t\t\t\t\tbuf_iova + buf_offset;\n> +\t\t\t\tbatch_copy[vq->batch_copy_nb_elems].len =\n> +\t\t\t\t\tcpy_len;\n> +\t\t\t\tvq->batch_copy_nb_elems++;\n> +\t\t\t}\n> +\t\t}\n> +\n> +\t\tmbuf_avail -= cpy_len;\n> +\t\tmbuf_offset += cpy_len;\n> +\t\tbuf_avail -= cpy_len;\n> +\t\tbuf_offset += cpy_len;\n> +\t}\n> +\n> +out:\n> +\tasync_fill_iter(src_it, tlen, src_iovec, tvec_idx);\n> +\tasync_fill_iter(dst_it, tlen, dst_iovec, tvec_idx);\n> +\n> +\treturn error;\n> +}\n> +\n> static __rte_always_inline int\n> vhost_enqueue_single_packed(struct virtio_net *dev,\n> \t\t\t struct vhost_virtqueue *vq,\n> @@ -1236,6 +1456,333 @@ rte_vhost_enqueue_burst(int vid, uint16_t\n> queue_id,\n> \treturn virtio_dev_rx(dev, queue_id, pkts, count); }\n> \n> +static __rte_always_inline uint16_t\n> +virtio_dev_rx_async_get_info_idx(uint16_t pkts_idx,\n> +\tuint16_t vq_size, uint16_t n_inflight) {\n> +\treturn pkts_idx > n_inflight ? (pkts_idx - n_inflight) :\n> +\t\t(vq_size - n_inflight + pkts_idx) & (vq_size - 1); }\n> +\n> +static __rte_always_inline void\n> +virtio_dev_rx_async_submit_split_err(struct virtio_net *dev,\n> +\tstruct vhost_virtqueue *vq, uint16_t queue_id,\n> +\tuint16_t last_idx, uint16_t shadow_idx) {\n> +\tuint16_t start_idx, pkts_idx, vq_size;\n> +\tuint64_t *async_pending_info;\n> +\n> +\tpkts_idx = vq->async_pkts_idx;\n> +\tasync_pending_info = vq->async_pending_info;\n> +\tvq_size = vq->size;\n> +\tstart_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,\n> +\t\tvq_size, vq->async_pkts_inflight_n);\n> +\n> +\twhile (likely((start_idx & (vq_size - 1)) != pkts_idx)) {\n> +\t\tuint64_t n_seg =\n> +\t\t\tasync_pending_info[(start_idx) & (vq_size - 1)] >>\n> +\t\t\tASYNC_PENDING_INFO_N_SFT;\n> +\n> +\t\twhile (n_seg)\n> +\t\t\tn_seg -= vq->async_ops.check_completed_copies(dev-\n> >vid,\n> +\t\t\t\tqueue_id, 0, 1);\n> +\t}\n> +\n> +\tvq->async_pkts_inflight_n = 0;\n> +\tvq->batch_copy_nb_elems = 0;\n> +\n> +\tvq->shadow_used_idx = shadow_idx;\n> +\tvq->last_avail_idx = last_idx;\n> +}\n> +\n> +static __rte_noinline uint32_t\n> +virtio_dev_rx_async_submit_split(struct virtio_net *dev,\n> +\tstruct vhost_virtqueue *vq, uint16_t queue_id,\n> +\tstruct rte_mbuf **pkts, uint32_t count) {\n> +\tuint32_t pkt_idx = 0, pkt_burst_idx = 0;\n> +\tuint16_t num_buffers;\n> +\tstruct buf_vector buf_vec[BUF_VECTOR_MAX];\n> +\tuint16_t avail_head, last_idx, shadow_idx;\n> +\n> +\tstruct rte_vhost_iov_iter *it_pool = vq->it_pool;\n> +\tstruct iovec *vec_pool = vq->vec_pool;\n> +\tstruct rte_vhost_async_desc tdes[MAX_PKT_BURST];\n> +\tstruct iovec *src_iovec = vec_pool;\n> +\tstruct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);\n> +\tstruct rte_vhost_iov_iter *src_it = it_pool;\n> +\tstruct rte_vhost_iov_iter *dst_it = it_pool + 1;\n> +\tuint16_t n_free_slot, slot_idx;\n> +\tint n_pkts = 0;\n> +\n> +\tavail_head = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE);\n> +\tlast_idx = vq->last_avail_idx;\n> +\tshadow_idx = vq->shadow_used_idx;\n> +\n> +\t/*\n> +\t * The ordering between avail index and\n> +\t * desc reads needs to be enforced.\n> +\t */\n> +\trte_smp_rmb();\n> +\n> +\trte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);\n> +\n> +\tfor (pkt_idx = 0; pkt_idx < count; pkt_idx++) {\n> +\t\tuint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;\n> +\t\tuint16_t nr_vec = 0;\n> +\n> +\t\tif (unlikely(reserve_avail_buf_split(dev, vq,\n> +\t\t\t\t\t\tpkt_len, buf_vec,\n> &num_buffers,\n> +\t\t\t\t\t\tavail_head, &nr_vec) < 0)) {\n> +\t\t\tVHOST_LOG_DATA(DEBUG,\n> +\t\t\t\t\"(%d) failed to get enough desc from vring\\n\",\n> +\t\t\t\tdev->vid);\n> +\t\t\tvq->shadow_used_idx -= num_buffers;\n> +\t\t\tbreak;\n> +\t\t}\n> +\n> +\t\tVHOST_LOG_DATA(DEBUG, \"(%d) current index %d | end\n> index %d\\n\",\n> +\t\t\tdev->vid, vq->last_avail_idx,\n> +\t\t\tvq->last_avail_idx + num_buffers);\n> +\n> +\t\tif (async_mbuf_to_desc(dev, vq, pkts[pkt_idx],\n> +\t\t\t\tbuf_vec, nr_vec, num_buffers,\n> +\t\t\t\tsrc_iovec, dst_iovec, src_it, dst_it) < 0) {\n> +\t\t\tvq->shadow_used_idx -= num_buffers;\n> +\t\t\tbreak;\n> +\t\t}\n> +\n> +\t\tslot_idx = (vq->async_pkts_idx + pkt_idx) & (vq->size - 1);\n> +\t\tif (src_it->count) {\n> +\t\t\tasync_fill_desc(&tdes[pkt_burst_idx], src_it, dst_it);\n> +\t\t\tpkt_burst_idx++;\n> +\t\t\tvq->async_pending_info[slot_idx] =\n> +\t\t\t\tnum_buffers | (src_it->nr_segs << 16);\n> +\t\t\tsrc_iovec += src_it->nr_segs;\n> +\t\t\tdst_iovec += dst_it->nr_segs;\n> +\t\t\tsrc_it += 2;\n> +\t\t\tdst_it += 2;\n> +\t\t} else {\n> +\t\t\tvq->async_pending_info[slot_idx] = num_buffers;\n> +\t\t\tvq->async_pkts_inflight_n++;\n> +\t\t}\n> +\n> +\t\tvq->last_avail_idx += num_buffers;\n> +\n> +\t\tif (pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||\n> +\t\t\t\t(pkt_idx == count - 1 && pkt_burst_idx)) {\n> +\t\t\tn_pkts = vq->async_ops.transfer_data(dev->vid,\n> +\t\t\t\t\tqueue_id, tdes, 0, pkt_burst_idx);\n> +\t\t\tsrc_iovec = vec_pool;\n> +\t\t\tdst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);\n> +\t\t\tsrc_it = it_pool;\n> +\t\t\tdst_it = it_pool + 1;\n> +\n> +\t\t\tif (unlikely(n_pkts < (int)pkt_burst_idx)) {\n> +\t\t\t\tvq->async_pkts_inflight_n +=\n> +\t\t\t\t\tn_pkts > 0 ? n_pkts : 0;\n> +\t\t\t\tvirtio_dev_rx_async_submit_split_err(dev,\n> +\t\t\t\t\tvq, queue_id, last_idx, shadow_idx);\n> +\t\t\t\treturn 0;\n> +\t\t\t}\n> +\n> +\t\t\tpkt_burst_idx = 0;\n> +\t\t\tvq->async_pkts_inflight_n += n_pkts;\n> +\t\t}\n> +\t}\n> +\n> +\tif (pkt_burst_idx) {\n> +\t\tn_pkts = vq->async_ops.transfer_data(dev->vid,\n> +\t\t\t\tqueue_id, tdes, 0, pkt_burst_idx);\n> +\t\tif (unlikely(n_pkts < (int)pkt_burst_idx)) {\n> +\t\t\tvq->async_pkts_inflight_n += n_pkts > 0 ? n_pkts : 0;\n> +\t\t\tvirtio_dev_rx_async_submit_split_err(dev, vq, queue_id,\n> +\t\t\t\tlast_idx, shadow_idx);\n> +\t\t\treturn 0;\n> +\t\t}\n> +\n> +\t\tvq->async_pkts_inflight_n += n_pkts;\n> +\t}\n> +\n> +\tdo_data_copy_enqueue(dev, vq);\n> +\n> +\tn_free_slot = vq->size - vq->async_pkts_idx;\n> +\tif (n_free_slot > pkt_idx) {\n> +\t\trte_memcpy(&vq->async_pkts_pending[vq->async_pkts_idx],\n> +\t\t\tpkts, pkt_idx * sizeof(uintptr_t));\n> +\t\tvq->async_pkts_idx += pkt_idx;\n> +\t} else {\n> +\t\trte_memcpy(&vq->async_pkts_pending[vq->async_pkts_idx],\n> +\t\t\tpkts, n_free_slot * sizeof(uintptr_t));\n> +\t\trte_memcpy(&vq->async_pkts_pending[0],\n> +\t\t\t&pkts[n_free_slot],\n> +\t\t\t(pkt_idx - n_free_slot) * sizeof(uintptr_t));\n> +\t\tvq->async_pkts_idx = pkt_idx - n_free_slot;\n> +\t}\n> +\n> +\tif (likely(vq->shadow_used_idx))\n> +\t\tasync_flush_shadow_used_ring_split(dev, vq);\n> +\n> +\treturn pkt_idx;\n> +}\n> +\n> +uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,\n> +\t\tstruct rte_mbuf **pkts, uint16_t count) {\n> +\tstruct virtio_net *dev = get_device(vid);\n> +\tstruct vhost_virtqueue *vq;\n> +\tuint16_t n_pkts_cpl, n_pkts_put = 0, n_descs = 0;\n> +\tuint16_t start_idx, pkts_idx, vq_size;\n> +\tuint64_t *async_pending_info;\n> +\n> +\tVHOST_LOG_DATA(DEBUG, \"(%d) %s\\n\", dev->vid, __func__);\n> +\tif (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {\n> +\t\tVHOST_LOG_DATA(ERR, \"(%d) %s: invalid virtqueue idx %d.\\n\",\n> +\t\t\tdev->vid, __func__, queue_id);\n> +\t\treturn 0;\n> +\t}\n> +\n> +\tvq = dev->virtqueue[queue_id];\n> +\n> +\trte_spinlock_lock(&vq->access_lock);\n> +\n> +\tpkts_idx = vq->async_pkts_idx;\n> +\tasync_pending_info = vq->async_pending_info;\n> +\tvq_size = vq->size;\n> +\tstart_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,\n> +\t\tvq_size, vq->async_pkts_inflight_n);\n> +\n> +\tn_pkts_cpl =\n> +\t\tvq->async_ops.check_completed_copies(vid, queue_id, 0,\n> count);\n> +\n> +\trte_smp_wmb();\n> +\n> +\twhile (likely(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx)) {\n> +\t\tuint64_t info = async_pending_info[\n> +\t\t\t(start_idx + n_pkts_put) & (vq_size - 1)];\n> +\t\tuint64_t n_segs;\n> +\t\tn_pkts_put++;\n> +\t\tn_descs += info & ASYNC_PENDING_INFO_N_MSK;\n> +\t\tn_segs = info >> ASYNC_PENDING_INFO_N_SFT;\n> +\n> +\t\tif (n_segs) {\n> +\t\t\tif (!n_pkts_cpl || n_pkts_cpl < n_segs) {\n> +\t\t\t\tn_pkts_put--;\n> +\t\t\t\tn_descs -= info &\n> ASYNC_PENDING_INFO_N_MSK;\n> +\t\t\t\tif (n_pkts_cpl) {\n> +\t\t\t\t\tasync_pending_info[\n> +\t\t\t\t\t\t(start_idx + n_pkts_put) &\n> +\t\t\t\t\t\t(vq_size - 1)] =\n> +\t\t\t\t\t((n_segs - n_pkts_cpl) <<\n> +\t\t\t\t\t ASYNC_PENDING_INFO_N_SFT) |\n> +\t\t\t\t\t(info &\n> ASYNC_PENDING_INFO_N_MSK);\n> +\t\t\t\t\tn_pkts_cpl = 0;\n> +\t\t\t\t}\n> +\t\t\t\tbreak;\n> +\t\t\t}\n> +\t\t\tn_pkts_cpl -= n_segs;\n> +\t\t}\n> +\t}\n> +\n> +\tif (n_pkts_put) {\n> +\t\tvq->async_pkts_inflight_n -= n_pkts_put;\n> +\t\t__atomic_add_fetch(&vq->used->idx, n_descs,\n> __ATOMIC_RELEASE);\n> +\n> +\t\tvhost_vring_call_split(dev, vq);\n> +\t}\n> +\n> +\tif (start_idx + n_pkts_put <= vq_size) {\n> +\t\trte_memcpy(pkts, &vq->async_pkts_pending[start_idx],\n> +\t\t\tn_pkts_put * sizeof(uintptr_t));\n> +\t} else {\n> +\t\trte_memcpy(pkts, &vq->async_pkts_pending[start_idx],\n> +\t\t\t(vq_size - start_idx) * sizeof(uintptr_t));\n> +\t\trte_memcpy(&pkts[vq_size - start_idx], vq-\n> >async_pkts_pending,\n> +\t\t\t(n_pkts_put - vq_size + start_idx) * sizeof(uintptr_t));\n> +\t}\n> +\n> +\trte_spinlock_unlock(&vq->access_lock);\n> +\n> +\treturn n_pkts_put;\n> +}\n> +\n> +static __rte_always_inline uint32_t\n> +virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,\n> +\tstruct rte_mbuf **pkts, uint32_t count) {\n> +\tstruct vhost_virtqueue *vq;\n> +\tuint32_t nb_tx = 0;\n> +\tbool drawback = false;\n> +\n> +\tVHOST_LOG_DATA(DEBUG, \"(%d) %s\\n\", dev->vid, __func__);\n> +\tif (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {\n> +\t\tVHOST_LOG_DATA(ERR, \"(%d) %s: invalid virtqueue idx %d.\\n\",\n> +\t\t\tdev->vid, __func__, queue_id);\n> +\t\treturn 0;\n> +\t}\n> +\n> +\tvq = dev->virtqueue[queue_id];\n> +\n> +\trte_spinlock_lock(&vq->access_lock);\n> +\n> +\tif (unlikely(vq->enabled == 0))\n> +\t\tgoto out_access_unlock;\n> +\n> +\tif (unlikely(!vq->async_registered)) {\n> +\t\tdrawback = true;\n> +\t\tgoto out_access_unlock;\n> +\t}\n> +\n> +\tif (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))\n> +\t\tvhost_user_iotlb_rd_lock(vq);\n> +\n> +\tif (unlikely(vq->access_ok == 0))\n> +\t\tif (unlikely(vring_translate(dev, vq) < 0))\n> +\t\t\tgoto out;\n> +\n> +\tcount = RTE_MIN((uint32_t)MAX_PKT_BURST, count);\n> +\tif (count == 0)\n> +\t\tgoto out;\n> +\n> +\t/* TODO: packed queue not implemented */\n> +\tif (vq_is_packed(dev))\n> +\t\tnb_tx = 0;\n> +\telse\n> +\t\tnb_tx = virtio_dev_rx_async_submit_split(dev,\n> +\t\t\t\tvq, queue_id, pkts, count);\n> +\n> +out:\n> +\tif (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))\n> +\t\tvhost_user_iotlb_rd_unlock(vq);\n> +\n> +out_access_unlock:\n> +\trte_spinlock_unlock(&vq->access_lock);\n> +\n> +\tif (drawback)\n> +\t\treturn rte_vhost_enqueue_burst(dev->vid, queue_id, pkts,\n> count);\n> +\n> +\treturn nb_tx;\n> +}\n> +\n> +uint16_t\n> +rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,\n> +\t\tstruct rte_mbuf **pkts, uint16_t count) {\n> +\tstruct virtio_net *dev = get_device(vid);\n> +\n> +\tif (!dev)\n> +\t\treturn 0;\n> +\n> +\tif (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {\n> +\t\tVHOST_LOG_DATA(ERR,\n> +\t\t\t\"(%d) %s: built-in vhost net backend is disabled.\\n\",\n> +\t\t\tdev->vid, __func__);\n> +\t\treturn 0;\n> +\t}\n> +\n> +\treturn virtio_dev_rx_async_submit(dev, queue_id, pkts, count); }\n> +\n> static inline bool\n> virtio_net_with_host_offload(struct virtio_net *dev) {\n> --\n> 2.18.4\n\nReviewed-by: Chenbo Xia <chenbo.xia@intel.com>", "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 4A058A00BE;\n\tTue, 7 Jul 2020 10:22:28 +0200 (CEST)", "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id C0FEC1DAC4;\n\tTue, 7 Jul 2020 10:22:26 +0200 (CEST)", "from mga11.intel.com (mga11.intel.com [192.55.52.93])\n by dpdk.org (Postfix) with ESMTP id B50991DAC4\n for <dev@dpdk.org>; Tue, 7 Jul 2020 10:22:24 +0200 (CEST)", "from orsmga007.jf.intel.com ([10.7.209.58])\n by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 07 Jul 2020 01:22:23 -0700", "from fmsmsx103.amr.corp.intel.com ([10.18.124.201])\n by orsmga007.jf.intel.com with ESMTP; 07 Jul 2020 01:22:23 -0700", "from fmsmsx607.amr.corp.intel.com (10.18.126.87) by\n FMSMSX103.amr.corp.intel.com (10.18.124.201) with Microsoft SMTP Server (TLS)\n id 14.3.439.0; Tue, 7 Jul 2020 01:22:22 -0700", "from fmsmsx611.amr.corp.intel.com (10.18.126.91) by\n fmsmsx607.amr.corp.intel.com (10.18.126.87) with Microsoft SMTP Server\n (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id\n 15.1.1713.5; Tue, 7 Jul 2020 01:22:22 -0700", "from FMSEDG001.ED.cps.intel.com (10.1.192.133) by\n fmsmsx611.amr.corp.intel.com (10.18.126.91) with Microsoft SMTP Server\n (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.1713.5\n via Frontend Transport; Tue, 7 Jul 2020 01:22:22 -0700", "from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.105)\n by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server (TLS) id\n 14.3.439.0; Tue, 7 Jul 2020 01:22:22 -0700", "from MN2PR11MB4063.namprd11.prod.outlook.com (2603:10b6:208:13f::22)\n by MN2PR11MB3712.namprd11.prod.outlook.com (2603:10b6:208:f6::29)\n with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.20; Tue, 7 Jul\n 2020 08:22:20 +0000", "from MN2PR11MB4063.namprd11.prod.outlook.com\n ([fe80::7cde:8326:5010:c47e]) by MN2PR11MB4063.namprd11.prod.outlook.com\n ([fe80::7cde:8326:5010:c47e%7]) with mapi id 15.20.3153.029; Tue, 7 Jul 2020\n 08:22:20 +0000" ], "IronPort-SDR": [ "\n KTTGAbKDUIwkOKXDMa//aRDUlBcoTm6TszqeGiz59qzzLkRryyAzIGvTV2G3fDQIZdILOAUXU+\n WgjGJl13V7AQ==", "\n dFuvU/EYTXYFOZ49KzNLPkPDBFTtIRGfaDruB3YZZEuwu/eQWjpr3z34CGiifIQq/FXxPcHuic\n 94tyJKokqiWA==" ], "X-IronPort-AV": [ "E=McAfee;i=\"6000,8403,9674\"; a=\"145656633\"", "E=Sophos;i=\"5.75,323,1589266800\"; d=\"scan'208\";a=\"145656633\"", "E=Sophos;i=\"5.75,323,1589266800\"; d=\"scan'208\";a=\"323468696\"" ], "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "ARC-Seal": "i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;\n b=fClPnyFpYwlrFN06eR2/QjYPWnlR/C3Bmr4q69TowcvaDnXqmzJhjh76biRMCVtAA6ykZm+mNmIAi6BCxdqipPhfAwNXAiGPlT/J0DI3+zgu89WLWKnjlTyLzZE3Q1WDyJLomtl7gJ3hmR+gOsEZyvQWtwcTU5B8TVaZwXS1A+sx+WqgYIqBScXswM1Upa0j+W9SC54OczzuzfTlAEB3KNB0YJArGCQOOGNPLE3Gw+CAR0BtyvjGLVryM3vqHW7knsPem5zh6Smnql0AQx31LpTm2oOKR3i+8qw2rOSH8k04SvAD8NcqcxoANX8cM2k8IamZIkLmfHoZ4Z7dZBwwkw==", "ARC-Message-Signature": "i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector9901;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=j1RvUohZBFpQatVMSYMgmNwvFcJ+3rNBQMqVe8w8KxY=;\n b=Nwn93bhznwdy8gENXDK5Kxm7WKHP4jV6QNgzfSfdNhBYYwJjwMcuzxkZRqHDXYU0hEb0GdANYvB6uHrnHBlhuAatTppCZhcGRL1D+UMXDXTQBF+mM+QkcAlGHhIchDsNBWdPdbGZ4/P3rOYxGc/DIIkUUobyxrl5d3LMZO9CMr9W3mJUkj9xb9v0+agLhqjIW4VoaPv6+X9paWKfcSBXgdJ44ANbUYHQ3UFvJdxhxQ8LE1yeo6S2YrDi0ktycGbfNRhpi1IAdtB3MbSUWCxc45moMyWjGuh/LGBMTh2jX/VDyUue+95y/XNnQw2++itsrcFuAna14Sc02CL2ZuBqJg==", "ARC-Authentication-Results": "i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;\n dkim=pass header.d=intel.com; arc=none", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;\n s=selector2-intel-onmicrosoft-com;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=j1RvUohZBFpQatVMSYMgmNwvFcJ+3rNBQMqVe8w8KxY=;\n b=CKxrHSAQAlSJsyzFb1aXyEtlPf+7sk9JQQs9cegapzgKrElhutkwL9OycrI21x+wC9F1ZaB620y95tgsu3EU8W/MAtH1+e/sA2aVn4NHn78YxxbXS31zcEn1Pv494Oyfv3/qDXWU4EMUJ0UNv2eaZFuzdkBaE3jrKV1ch3IblMI=", "From": "\"Xia, Chenbo\" <chenbo.xia@intel.com>", "To": "\"Fu, Patrick\" <patrick.fu@intel.com>, \"dev@dpdk.org\" <dev@dpdk.org>,\n \"maxime.coquelin@redhat.com\" <maxime.coquelin@redhat.com>, \"Wang, Zhihong\"\n <zhihong.wang@intel.com>", "CC": "\"Wang, Yinan\" <yinan.wang@intel.com>, \"Jiang, Cheng1\"\n <cheng1.jiang@intel.com>, \"Liang, Cunming\" <cunming.liang@intel.com>", "Thread-Topic": "[PATCH v6 2/2] vhost: introduce async enqueue for split ring", "Thread-Index": "AQHWVByCXb5WPe7jLE603+/BKReqyKj7xzCw", "Date": "Tue, 7 Jul 2020 08:22:19 +0000", "Message-ID": "\n <MN2PR11MB4063D0F6D9CB94F805AB1A4B9C660@MN2PR11MB4063.namprd11.prod.outlook.com>", "References": "<1591869725-13331-1-git-send-email-patrick.fu@intel.com>\n <20200707050709.205480-1-patrick.fu@intel.com>\n <20200707050709.205480-3-patrick.fu@intel.com>", "In-Reply-To": "<20200707050709.205480-3-patrick.fu@intel.com>", "Accept-Language": "zh-CN, en-US", "Content-Language": "en-US", "X-MS-Has-Attach": "", "X-MS-TNEF-Correlator": "", "authentication-results": "intel.com; dkim=none (message not signed)\n header.d=none;intel.com; dmarc=none action=none header.from=intel.com;", "x-originating-ip": "[192.198.147.218]", "x-ms-publictraffictype": "Email", "x-ms-office365-filtering-correlation-id": "e64fef8c-19d9-42c1-8158-08d8224ede00", "x-ms-traffictypediagnostic": "MN2PR11MB3712:", "x-ms-exchange-transport-forked": "True", "x-microsoft-antispam-prvs": "\n <MN2PR11MB371280C2F18ACA1E514095C19C660@MN2PR11MB3712.namprd11.prod.outlook.com>", "x-ms-oob-tlc-oobclassifiers": "OLM:4303;", "x-forefront-prvs": "0457F11EAF", "x-ms-exchange-senderadcheck": "1", "x-microsoft-antispam": "BCL:0;", "x-microsoft-antispam-message-info": "\n SIv/hZT3RuikK9D6ji4bERHdzLyFVQkw/7pkn1VMd8//sH3sxgg/AJfr5RkmXWywVmyCO9fXSbosDSbXPaEnmBgqZUvXmVjhFOBG8TCluZJ+xqyL+PAR3TiIAn3Fy9cU4IbndrK++NgX2UcEYfcjEREkdh1JWSWuogBiGFl+/1amu+4x+s2n7zvnZE2u0Hc1wMzsFw0uE85nXgh0mRpb/R5YrGq/YrBMp/7RPbVaf7VUZFBYKcSLNGNjzgKp0kn3jl3vB9TA08uwszpTIzqXGvWN7MlywFuo0Uy8tm5UFM0AHCmASK1iy/vgpbdjn1K6", "x-forefront-antispam-report": "CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;\n IPV:NLI; SFV:NSPM; H:MN2PR11MB4063.namprd11.prod.outlook.com; PTR:; CAT:NONE;\n SFTY:;\n SFS:(4636009)(39860400002)(136003)(366004)(346002)(396003)(376002)(26005)(2906002)(110136005)(316002)(6636002)(478600001)(66946007)(64756008)(66446008)(66556008)(66476007)(7696005)(4326008)(6506007)(54906003)(76116006)(107886003)(30864003)(53546011)(33656002)(52536014)(8936002)(186003)(8676002)(86362001)(55016002)(5660300002)(9686003)(83380400001)(71200400001);\n DIR:OUT; SFP:1102;", "x-ms-exchange-antispam-messagedata": "\n +6iUwchVJoCLyeRIKLdAAHn1LrMgFC2vx5Ls84eIKMtzLFcFMuEsq8U1TGic3Dn+P4jgoJbY6X6Q99MIbZNt9soR+MxdceNrPMqPKZf+1/8ir8NAWkoNmhxJD/Tv+Dwc5HRUZFKnEhwV4tu6aZpXON23S9kZNyLDfTDzl9FPo50cwRbv6GVeHMj6Haf1FbxxF8i0F+Ql9quQtPGkz35ovDf7HZImZBkTec17HBzfo0llY+a+kSwu0nZWzsxtHOdKthRtH4j88KhfbrNxxaqDEYjMeWFZC2Jem7sFwFlT6CPzqKZ93pjqay7YZ1I9loyNw5b41OAYyAtFWMQiLx6+HHc/AnD1CZMHIfC8keUOVrqtLXtObaUgwNobjirg4fzqLuYWdS/IztA3lB6vbm8Utnaya9VMV8Sfm9A3XzreAveCyEFfINdgN7DGUfDkCKZote47mXtaQCST7qNm6EnQZcf2xgOfdJMf3Zj5R5EInar9kGKQpdo65SlzLnKgVava", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "quoted-printable", "MIME-Version": "1.0", "X-MS-Exchange-CrossTenant-AuthAs": "Internal", "X-MS-Exchange-CrossTenant-AuthSource": "MN2PR11MB4063.namprd11.prod.outlook.com", "X-MS-Exchange-CrossTenant-Network-Message-Id": "\n e64fef8c-19d9-42c1-8158-08d8224ede00", "X-MS-Exchange-CrossTenant-originalarrivaltime": "07 Jul 2020 08:22:19.9244 (UTC)", "X-MS-Exchange-CrossTenant-fromentityheader": "Hosted", "X-MS-Exchange-CrossTenant-id": "46c98d88-e344-4ed4-8496-4ed7712e255d", "X-MS-Exchange-CrossTenant-mailboxtype": "HOSTED", "X-MS-Exchange-CrossTenant-userprincipalname": "\n qkN/EFBH3IEAwE/iKlnxcLZfuNtGo33xf6GHsHKxte6IjxBj9XNyY/GP7BGdV2oP7p/vdoz+NMPMKhrqEj4PLA==", "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "MN2PR11MB3712", "X-OriginatorOrg": "intel.com", "Subject": "Re: [dpdk-dev] [PATCH v6 2/2] vhost: introduce async enqueue for\n\tsplit ring", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "addressed": null } ]