From patchwork Fri Jan 13 02:56:51 2023
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 121991
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com, yuanx.wang@intel.com, xingguang.he@intel.com, Cheng Jiang
Subject: [PATCH v2 1/3] vhost: remove redundant copy for packed shadow used ring
Date: Fri, 13 Jan 2023 02:56:51 +0000
Message-Id: <20230113025653.16583-2-cheng1.jiang@intel.com>
In-Reply-To: <20230113025653.16583-1-cheng1.jiang@intel.com>
References: <20221220004415.29576-1-cheng1.jiang@intel.com> <20230113025653.16583-1-cheng1.jiang@intel.com>

In the packed ring enqueue data path of the current asynchronous Vhost
design, used descriptor information is first written to the sync shadow
used ring and then copied to the async shadow used ring, for historical
reasons. The intermediate copy is unnecessary. This patch removes the
redundant copy: the async shadow used ring is now updated directly.
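For readers less familiar with the shadow used ring bookkeeping, the sketch
below illustrates the idea of the change in a self-contained way: entries are
written straight into the (wrapping) async shadow array instead of being
staged in a sync shadow array and copied over afterwards. It is only an
illustration; the struct, field, and function names (used_elem, async_shadow,
shadow_enqueue_direct, RING_SIZE) are invented for the example and are not
the vhost ones.

/*
 * Minimal, self-contained sketch of "update the async shadow ring directly".
 * Names and sizes are illustrative, not the vhost definitions.
 */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8 /* illustrative; the real size is vq->size */

struct used_elem {
	uint16_t id;
	uint32_t len;
	uint16_t count;
};

struct async_shadow {
	struct used_elem buffers[RING_SIZE];
	uint16_t buffer_idx; /* next free slot, wraps at RING_SIZE */
};

static void
shadow_enqueue_direct(struct async_shadow *s, const uint32_t *len,
		      const uint16_t *id, const uint16_t *count,
		      uint16_t num_buffers)
{
	/* Write each entry directly into the async shadow array, wrapping
	 * the index, instead of staging in a sync array and memcpy()ing. */
	for (uint16_t i = 0; i < num_buffers; i++) {
		s->buffers[s->buffer_idx].id = id[i];
		s->buffers[s->buffer_idx].len = len[i];
		s->buffers[s->buffer_idx].count = count[i];
		if (++s->buffer_idx >= RING_SIZE)
			s->buffer_idx -= RING_SIZE;
	}
}

int
main(void)
{
	struct async_shadow s = { .buffer_idx = 6 }; /* start near the wrap point */
	uint32_t len[3] = { 64, 128, 256 };
	uint16_t id[3] = { 10, 11, 12 };
	uint16_t count[3] = { 1, 1, 1 };

	shadow_enqueue_direct(&s, len, id, count, 3);
	printf("next slot after wrap: %u\n", s.buffer_idx); /* prints 1 */
	return 0;
}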
Signed-off-by: Cheng Jiang Reviewed-by: Maxime Coquelin --- lib/vhost/virtio_net.c | 66 ++++++++++++++++++++---------------------- 1 file changed, 31 insertions(+), 35 deletions(-) diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 9abf752f30..7c3ec128a0 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -572,6 +572,26 @@ vhost_shadow_enqueue_packed(struct vhost_virtqueue *vq, } } +static __rte_always_inline void +vhost_async_shadow_enqueue_packed(struct vhost_virtqueue *vq, + uint32_t *len, + uint16_t *id, + uint16_t *count, + uint16_t num_buffers) +{ + uint16_t i; + struct vhost_async *async = vq->async; + + for (i = 0; i < num_buffers; i++) { + async->buffers_packed[async->buffer_idx_packed].id = id[i]; + async->buffers_packed[async->buffer_idx_packed].len = len[i]; + async->buffers_packed[async->buffer_idx_packed].count = count[i]; + async->buffer_idx_packed++; + if (async->buffer_idx_packed >= vq->size) + async->buffer_idx_packed -= vq->size; + } +} + static __rte_always_inline void vhost_shadow_enqueue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, @@ -1647,23 +1667,6 @@ store_dma_desc_info_split(struct vring_used_elem *s_ring, struct vring_used_elem } } -static __rte_always_inline void -store_dma_desc_info_packed(struct vring_used_elem_packed *s_ring, - struct vring_used_elem_packed *d_ring, - uint16_t ring_size, uint16_t s_idx, uint16_t d_idx, uint16_t count) -{ - size_t elem_size = sizeof(struct vring_used_elem_packed); - - if (d_idx + count <= ring_size) { - rte_memcpy(d_ring + d_idx, s_ring + s_idx, count * elem_size); - } else { - uint16_t size = ring_size - d_idx; - - rte_memcpy(d_ring + d_idx, s_ring + s_idx, size * elem_size); - rte_memcpy(d_ring, s_ring + s_idx + size, (count - size) * elem_size); - } -} - static __rte_noinline uint32_t virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) @@ -1822,7 +1825,8 @@ vhost_enqueue_async_packed(struct virtio_net *dev, if (unlikely(mbuf_to_desc(dev, vq, pkt, buf_vec, nr_vec, *nr_buffers, true) < 0)) return -1; - vhost_shadow_enqueue_packed(vq, buffer_len, buffer_buf_id, buffer_desc_count, *nr_buffers); + vhost_async_shadow_enqueue_packed(vq, buffer_len, buffer_buf_id, + buffer_desc_count, *nr_buffers); return 0; } @@ -1852,6 +1856,7 @@ dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx, { uint16_t descs_err = 0; uint16_t buffers_err = 0; + struct vhost_async *async = vq->async; struct async_inflight_info *pkts_info = vq->async->pkts_info; *pkt_idx -= nr_err; @@ -1869,7 +1874,10 @@ dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx, vq->avail_wrap_counter ^= 1; } - vq->shadow_used_idx -= buffers_err; + if (async->buffer_idx_packed >= buffers_err) + async->buffer_idx_packed -= buffers_err; + else + async->buffer_idx_packed = async->buffer_idx_packed + vq->size - buffers_err; } static __rte_noinline uint32_t @@ -1921,23 +1929,11 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue dma_error_handler_packed(vq, slot_idx, pkt_err, &pkt_idx); } - if (likely(vq->shadow_used_idx)) { - /* keep used descriptors. 
 */
-		store_dma_desc_info_packed(vq->shadow_used_packed, async->buffers_packed,
-					vq->size, 0, async->buffer_idx_packed,
-					vq->shadow_used_idx);
-
-		async->buffer_idx_packed += vq->shadow_used_idx;
-		if (async->buffer_idx_packed >= vq->size)
-			async->buffer_idx_packed -= vq->size;
-
-		async->pkts_idx += pkt_idx;
-		if (async->pkts_idx >= vq->size)
-			async->pkts_idx -= vq->size;
+	async->pkts_idx += pkt_idx;
+	if (async->pkts_idx >= vq->size)
+		async->pkts_idx -= vq->size;

-		vq->shadow_used_idx = 0;
-		async->pkts_inflight_n += pkt_idx;
-	}
+	async->pkts_inflight_n += pkt_idx;

 	return pkt_idx;
 }

From patchwork Fri Jan 13 02:56:52 2023
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 121992
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com, yuanx.wang@intel.com, xingguang.he@intel.com, Cheng Jiang
Subject: [PATCH v2 2/3] vhost: add batch enqueue in async vhost packed ring
Date: Fri, 13 Jan 2023 02:56:52 +0000
Message-Id: <20230113025653.16583-3-cheng1.jiang@intel.com>
In-Reply-To: <20230113025653.16583-1-cheng1.jiang@intel.com>
References: <20221220004415.29576-1-cheng1.jiang@intel.com> <20230113025653.16583-1-cheng1.jiang@intel.com>

Add a batch enqueue function to the asynchronous vhost packed ring to
improve performance.
Chained mbufs are not supported, it will be handled in single enqueue function. Signed-off-by: Cheng Jiang Reviewed-by: Maxime Coquelin --- lib/vhost/virtio_net.c | 163 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 163 insertions(+) diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 7c3ec128a0..aea33ef127 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -432,6 +432,24 @@ vhost_flush_enqueue_batch_packed(struct virtio_net *dev, vq_inc_last_used_packed(vq, PACKED_BATCH_SIZE); } +static __rte_always_inline void +vhost_async_shadow_enqueue_packed_batch(struct vhost_virtqueue *vq, + uint64_t *lens, + uint16_t *ids) +{ + uint16_t i; + struct vhost_async *async = vq->async; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + async->buffers_packed[async->buffer_idx_packed].id = ids[i]; + async->buffers_packed[async->buffer_idx_packed].len = lens[i]; + async->buffers_packed[async->buffer_idx_packed].count = 1; + async->buffer_idx_packed++; + if (async->buffer_idx_packed >= vq->size) + async->buffer_idx_packed -= vq->size; + } +} + static __rte_always_inline void vhost_shadow_dequeue_batch_packed_inorder(struct vhost_virtqueue *vq, uint16_t id) @@ -1451,6 +1469,58 @@ virtio_dev_rx_sync_batch_check(struct virtio_net *dev, return 0; } +static __rte_always_inline int +virtio_dev_rx_async_batch_check(struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, + uint64_t *desc_addrs, + uint64_t *lens, + int16_t dma_id, + uint16_t vchan_id) +{ + bool wrap_counter = vq->avail_wrap_counter; + struct vring_packed_desc *descs = vq->desc_packed; + uint16_t avail_idx = vq->last_avail_idx; + uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); + uint16_t i; + + if (unlikely(avail_idx & PACKED_BATCH_MASK)) + return -1; + + if (unlikely((avail_idx + PACKED_BATCH_SIZE) > vq->size)) + return -1; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + if (unlikely(pkts[i]->next != NULL)) + return -1; + if (unlikely(!desc_is_avail(&descs[avail_idx + i], + wrap_counter))) + return -1; + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + lens[i] = descs[avail_idx + i].len; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + if (unlikely(pkts[i]->pkt_len > (lens[i] - buf_offset))) + return -1; + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + desc_addrs[i] = descs[avail_idx + i].addr; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + if (unlikely(!desc_addrs[i])) + return -1; + if (unlikely(lens[i] != descs[avail_idx + i].len)) + return -1; + } + + if (rte_dma_burst_capacity(dma_id, vchan_id) < PACKED_BATCH_SIZE) + return -1; + + return 0; +} + static __rte_always_inline void virtio_dev_rx_batch_packed_copy(struct virtio_net *dev, struct vhost_virtqueue *vq, @@ -1850,6 +1920,84 @@ virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, return 0; } +static __rte_always_inline void +virtio_dev_rx_async_packed_batch_enqueue(struct virtio_net *dev, + struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, + uint64_t *desc_addrs, + uint64_t *lens) +{ + uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); + struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_BATCH_SIZE]; + struct vring_packed_desc *descs = vq->desc_packed; + struct vhost_async *async = vq->async; + uint16_t avail_idx = vq->last_avail_idx; + uint32_t mbuf_offset = 0; + uint16_t ids[PACKED_BATCH_SIZE]; + uint64_t mapped_len[PACKED_BATCH_SIZE]; + void *host_iova[PACKED_BATCH_SIZE]; + uintptr_t desc; + uint16_t i; + + vhost_for_each_try_unroll(i, 
0, PACKED_BATCH_SIZE) { + rte_prefetch0((void *)(uintptr_t)desc_addrs[i]); + desc = vhost_iova_to_vva(dev, vq, desc_addrs[i], &lens[i], VHOST_ACCESS_RW); + hdrs[i] = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)desc; + lens[i] = pkts[i]->pkt_len + + sizeof(struct virtio_net_hdr_mrg_rxbuf); + } + + if (rxvq_is_mergeable(dev)) { + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + ASSIGN_UNLESS_EQUAL(hdrs[i]->num_buffers, 1); + } + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + virtio_enqueue_offload(pkts[i], &hdrs[i]->hdr); + + vq_inc_last_avail_packed(vq, PACKED_BATCH_SIZE); + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + host_iova[i] = (void *)(uintptr_t)gpa_to_first_hpa(dev, + desc_addrs[i] + buf_offset, lens[i], &mapped_len[i]); + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + async_iter_initialize(dev, async); + async_iter_add_iovec(dev, async, + (void *)(uintptr_t)rte_pktmbuf_iova_offset(pkts[i], mbuf_offset), + host_iova[i], + mapped_len[i]); + async->iter_idx++; + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + vhost_log_cache_write_iova(dev, vq, descs[avail_idx + i].addr, lens[i]); + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + ids[i] = descs[avail_idx + i].id; + + vhost_async_shadow_enqueue_packed_batch(vq, lens, ids); +} + +static __rte_always_inline int +virtio_dev_rx_async_packed_batch(struct virtio_net *dev, + struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, + int16_t dma_id, uint16_t vchan_id) +{ + uint64_t desc_addrs[PACKED_BATCH_SIZE]; + uint64_t lens[PACKED_BATCH_SIZE]; + + if (virtio_dev_rx_async_batch_check(vq, pkts, desc_addrs, lens, dma_id, vchan_id) == -1) + return -1; + + virtio_dev_rx_async_packed_batch_enqueue(dev, vq, pkts, desc_addrs, lens); + + return 0; +} + static __rte_always_inline void dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx, uint32_t nr_err, uint32_t *pkt_idx) @@ -1893,10 +2041,25 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue struct async_inflight_info *pkts_info = async->pkts_info; uint32_t pkt_err = 0; uint16_t slot_idx = 0; + uint16_t i; do { rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]); + if (count - pkt_idx >= PACKED_BATCH_SIZE) { + if (!virtio_dev_rx_async_packed_batch(dev, vq, &pkts[pkt_idx], + dma_id, vchan_id)) { + for (i = 0; i < PACKED_BATCH_SIZE; i++) { + slot_idx = (async->pkts_idx + pkt_idx) % vq->size; + pkts_info[slot_idx].descs = 1; + pkts_info[slot_idx].nr_buffers = 1; + pkts_info[slot_idx].mbuf = pkts[pkt_idx]; + pkt_idx++; + } + continue; + } + } + num_buffers = 0; num_descs = 0; if (unlikely(virtio_dev_rx_async_packed(dev, vq, pkts[pkt_idx], From patchwork Fri Jan 13 02:56:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jiang, Cheng1" X-Patchwork-Id: 121993 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3C1F5423BC; Fri, 13 Jan 2023 04:46:41 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B2B8B42D75; Fri, 13 Jan 2023 04:46:34 +0100 (CET) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 7C08442D73 for ; Fri, 13 Jan 2023 04:46:32 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; 
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com, yuanx.wang@intel.com, xingguang.he@intel.com, Cheng Jiang
Subject: [PATCH v2 3/3] vhost: add batch dequeue in async vhost packed ring
Date: Fri, 13 Jan 2023 02:56:53 +0000
Message-Id: <20230113025653.16583-4-cheng1.jiang@intel.com>
In-Reply-To: <20230113025653.16583-1-cheng1.jiang@intel.com>
References: <20221220004415.29576-1-cheng1.jiang@intel.com> <20230113025653.16583-1-cheng1.jiang@intel.com>

Add a batch dequeue function to the asynchronous vhost packed ring to
improve performance. Chained mbufs are not supported; they are handled
by the single dequeue function.
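As background for both batch patches, the following self-contained sketch
shows the precondition the batch paths check before taking the fast path: the
avail index must be batch-aligned, the batch must not cross the end of the
ring, and every descriptor's AVAIL/USED flags must match the current wrap
counter. The flag bit positions follow the virtio packed-ring convention;
BATCH_SIZE, the struct, and the helper names are illustrative stand-ins, not
the actual vhost definitions.

/*
 * Sketch of the packed-ring batch availability check: a descriptor is
 * available when its AVAIL flag matches the driver's wrap counter and its
 * USED flag does not. Everything except the flag values is illustrative.
 */
#include <stdbool.h>
#include <stdint.h>

#define BATCH_SIZE	4		/* stand-in for PACKED_BATCH_SIZE */
#define BATCH_MASK	(BATCH_SIZE - 1)
#define DESC_F_AVAIL	(1u << 7)	/* VRING_DESC_F_AVAIL */
#define DESC_F_USED	(1u << 15)	/* VRING_DESC_F_USED */

struct packed_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t id;
	uint16_t flags;
};

static bool
desc_is_avail(const struct packed_desc *d, bool wrap)
{
	bool avail = !!(d->flags & DESC_F_AVAIL);
	bool used = !!(d->flags & DESC_F_USED);

	return avail == wrap && used != wrap;
}

/* Return true when a full batch starting at avail_idx can be processed. */
static bool
batch_is_ready(const struct packed_desc *ring, uint16_t ring_size,
	       uint16_t avail_idx, bool wrap)
{
	if (avail_idx & BATCH_MASK)		/* batch must be aligned  */
		return false;
	if (avail_idx + BATCH_SIZE > ring_size)	/* must not cross the end */
		return false;

	for (uint16_t i = 0; i < BATCH_SIZE; i++)
		if (!desc_is_avail(&ring[avail_idx + i], wrap))
			return false;

	return true;
}

int
main(void)
{
	struct packed_desc ring[8] = { 0 };

	/* Mark the first four descriptors available for wrap counter = 1. */
	for (int i = 0; i < 4; i++)
		ring[i].flags = DESC_F_AVAIL;

	return batch_is_ready(ring, 8, 0, true) ? 0 : 1;
}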
Signed-off-by: Cheng Jiang Signed-off-by: Yuan Wang Reviewed-by: Maxime Coquelin --- lib/vhost/virtio_net.c | 170 ++++++++++++++++++++++++++++++++++++++++- 1 file changed, 167 insertions(+), 3 deletions(-) diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index aea33ef127..8caf05319e 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -450,6 +450,23 @@ vhost_async_shadow_enqueue_packed_batch(struct vhost_virtqueue *vq, } } +static __rte_always_inline void +vhost_async_shadow_dequeue_packed_batch(struct vhost_virtqueue *vq, uint16_t *ids) +{ + uint16_t i; + struct vhost_async *async = vq->async; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + async->buffers_packed[async->buffer_idx_packed].id = ids[i]; + async->buffers_packed[async->buffer_idx_packed].len = 0; + async->buffers_packed[async->buffer_idx_packed].count = 1; + + async->buffer_idx_packed++; + if (async->buffer_idx_packed >= vq->size) + async->buffer_idx_packed -= vq->size; + } +} + static __rte_always_inline void vhost_shadow_dequeue_batch_packed_inorder(struct vhost_virtqueue *vq, uint16_t id) @@ -3199,6 +3216,80 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev, return -1; } +static __rte_always_inline int +vhost_async_tx_batch_packed_check(struct virtio_net *dev, + struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, + uint16_t avail_idx, + uintptr_t *desc_addrs, + uint64_t *lens, + uint16_t *ids, + int16_t dma_id, + uint16_t vchan_id) +{ + bool wrap = vq->avail_wrap_counter; + struct vring_packed_desc *descs = vq->desc_packed; + uint64_t buf_lens[PACKED_BATCH_SIZE]; + uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); + uint16_t flags, i; + + if (unlikely(avail_idx & PACKED_BATCH_MASK)) + return -1; + if (unlikely((avail_idx + PACKED_BATCH_SIZE) > vq->size)) + return -1; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + flags = descs[avail_idx + i].flags; + if (unlikely((wrap != !!(flags & VRING_DESC_F_AVAIL)) || + (wrap == !!(flags & VRING_DESC_F_USED)) || + (flags & PACKED_DESC_SINGLE_DEQUEUE_FLAG))) + return -1; + } + + rte_atomic_thread_fence(__ATOMIC_ACQUIRE); + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + lens[i] = descs[avail_idx + i].len; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + desc_addrs[i] = descs[avail_idx + i].addr; + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + if (unlikely(!desc_addrs[i])) + return -1; + if (unlikely((lens[i] != descs[avail_idx + i].len))) + return -1; + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + if (virtio_dev_pktmbuf_prep(dev, pkts[i], lens[i])) + goto err; + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + buf_lens[i] = pkts[i]->buf_len - pkts[i]->data_off; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + if (unlikely(buf_lens[i] < (lens[i] - buf_offset))) + goto err; + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + pkts[i]->pkt_len = lens[i] - buf_offset; + pkts[i]->data_len = pkts[i]->pkt_len; + ids[i] = descs[avail_idx + i].id; + } + + if (rte_dma_burst_capacity(dma_id, vchan_id) < PACKED_BATCH_SIZE) + return -1; + + return 0; + +err: + return -1; +} + static __rte_always_inline int virtio_dev_tx_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, @@ -3775,16 +3866,74 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, return err; } +static __rte_always_inline int +virtio_dev_tx_async_packed_batch(struct virtio_net *dev, + struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t slot_idx, + uint16_t 
dma_id, uint16_t vchan_id) +{ + uint16_t avail_idx = vq->last_avail_idx; + uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf); + struct vhost_async *async = vq->async; + struct async_inflight_info *pkts_info = async->pkts_info; + struct virtio_net_hdr *hdr; + uint32_t mbuf_offset = 0; + uintptr_t desc_addrs[PACKED_BATCH_SIZE]; + uint64_t desc_vva; + uint64_t lens[PACKED_BATCH_SIZE]; + void *host_iova[PACKED_BATCH_SIZE]; + uint64_t mapped_len[PACKED_BATCH_SIZE]; + uint16_t ids[PACKED_BATCH_SIZE]; + uint16_t i; + + if (vhost_async_tx_batch_packed_check(dev, vq, pkts, avail_idx, + desc_addrs, lens, ids, dma_id, vchan_id)) + return -1; + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) + rte_prefetch0((void *)(uintptr_t)desc_addrs[i]); + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + host_iova[i] = (void *)(uintptr_t)gpa_to_first_hpa(dev, + desc_addrs[i] + buf_offset, pkts[i]->pkt_len, &mapped_len[i]); + } + + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + async_iter_initialize(dev, async); + async_iter_add_iovec(dev, async, + host_iova[i], + (void *)(uintptr_t)rte_pktmbuf_iova_offset(pkts[i], mbuf_offset), + mapped_len[i]); + async->iter_idx++; + } + + if (virtio_net_with_host_offload(dev)) { + vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + desc_vva = vhost_iova_to_vva(dev, vq, desc_addrs[i], + &lens[i], VHOST_ACCESS_RO); + hdr = (struct virtio_net_hdr *)(uintptr_t)desc_vva; + pkts_info[slot_idx + i].nethdr = *hdr; + } + } + + vq_inc_last_avail_packed(vq, PACKED_BATCH_SIZE); + + vhost_async_shadow_dequeue_packed_batch(vq, ids); + + return 0; +} + static __rte_always_inline uint16_t virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags) { - uint16_t pkt_idx; + uint32_t pkt_idx = 0; uint16_t slot_idx = 0; uint16_t nr_done_pkts = 0; uint16_t pkt_err = 0; uint32_t n_xfer; + uint16_t i; struct vhost_async *async = vq->async; struct async_inflight_info *pkts_info = async->pkts_info; struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST]; @@ -3796,12 +3945,26 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) goto out; - for (pkt_idx = 0; pkt_idx < count; pkt_idx++) { + do { struct rte_mbuf *pkt = pkts_prealloc[pkt_idx]; rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]); slot_idx = (async->pkts_idx + pkt_idx) % vq->size; + if (count - pkt_idx >= PACKED_BATCH_SIZE) { + if (!virtio_dev_tx_async_packed_batch(dev, vq, &pkts_prealloc[pkt_idx], + slot_idx, dma_id, vchan_id)) { + for (i = 0; i < PACKED_BATCH_SIZE; i++) { + slot_idx = (async->pkts_idx + pkt_idx) % vq->size; + pkts_info[slot_idx].descs = 1; + pkts_info[slot_idx].nr_buffers = 1; + pkts_info[slot_idx].mbuf = pkts_prealloc[pkt_idx]; + pkt_idx++; + } + continue; + } + } + if (unlikely(virtio_dev_tx_async_single_packed(dev, vq, mbuf_pool, pkt, slot_idx, legacy_ol_flags))) { rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx); @@ -3815,7 +3978,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, } pkts_info[slot_idx].mbuf = pkt; - } + pkt_idx++; + } while (pkt_idx < count); n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx, async->iov_iter, pkt_idx);
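To make the control flow of the new submit loops easier to follow, here is a
generic, self-contained sketch of the dispatch pattern they use: try the
batch path whenever at least a full batch of packets remains, and fall back
to the single-packet path otherwise. All names and the toy success criterion
in try_batch() are illustrative; this is not the vhost code.

/*
 * Generic sketch of the "batch first, single-packet fallback" dispatch
 * pattern used by the new do/while submit loops.
 */
#include <stdint.h>
#include <stdio.h>

#define BATCH_SIZE 4

/* Stand-ins for the batch and single handlers. */
static int try_batch(uint32_t first_pkt) { return (first_pkt % 8) ? -1 : 0; }
static int do_single(uint32_t pkt) { (void)pkt; return 0; }

static uint32_t
submit(uint32_t count)
{
	uint32_t pkt_idx = 0;
	uint32_t batches = 0, singles = 0;

	do {
		if (count - pkt_idx >= BATCH_SIZE &&
		    try_batch(pkt_idx) == 0) {
			/* The whole batch was accepted at once. */
			pkt_idx += BATCH_SIZE;
			batches++;
			continue;
		}
		if (do_single(pkt_idx) < 0)
			break;		/* e.g. ring full or DMA busy */
		pkt_idx++;
		singles++;
	} while (pkt_idx < count);

	printf("batches=%u singles=%u\n", batches, singles);
	return pkt_idx;
}

int
main(void)
{
	return submit(10) == 10 ? 0 : 1;
}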