From patchwork Wed Apr 14 06:13:42 2021
X-Patchwork-Submitter: "Jiang, Cheng1" <Cheng1.jiang@intel.com>
X-Patchwork-Id: 91404
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang <Cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, yvonnex.yang@intel.com,
	yinan.wang@intel.com, yong.liu@intel.com,
	Cheng Jiang <Cheng1.jiang@intel.com>
Date: Wed, 14 Apr 2021 06:13:42 +0000
Message-Id: <20210414061343.54919-4-Cheng1.jiang@intel.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210414061343.54919-1-Cheng1.jiang@intel.com>
References: <20210317085426.10119-1-Cheng1.jiang@intel.com>
	<20210414061343.54919-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v7 3/4] vhost: add batch datapath for async
	vhost packed ring

Add batch datapath for async vhost packed ring to improve the
performance of small packet processing.
Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
---
 lib/librte_vhost/virtio_net.c | 41 +++++++++++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 54e11e3a5..7ba186585 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1725,6 +1725,29 @@ vhost_update_used_packed(struct vhost_virtqueue *vq,
 	vq->desc_packed[head_idx].flags = head_flags;
 }
 
+static __rte_always_inline int
+virtio_dev_rx_async_batch_packed(struct virtio_net *dev,
+			   struct vhost_virtqueue *vq,
+			   struct rte_mbuf **pkts,
+			   struct rte_mbuf **comp_pkts, uint32_t *pkt_done)
+{
+	uint16_t i;
+	uint32_t cpy_threshold = vq->async_threshold;
+
+	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
+		if (unlikely(pkts[i]->pkt_len >= cpy_threshold))
+			return -1;
+	}
+	if (!virtio_dev_rx_batch_packed(dev, vq, pkts)) {
+		vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE)
+			comp_pkts[(*pkt_done)++] = pkts[i];
+
+		return 0;
+	}
+
+	return -1;
+}
+
 static __rte_always_inline int
 vhost_enqueue_async_single_packed(struct virtio_net *dev,
 			    struct vhost_virtqueue *vq,
@@ -1875,6 +1898,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 	struct rte_mbuf **comp_pkts, uint32_t *comp_count)
 {
 	uint32_t pkt_idx = 0, pkt_burst_idx = 0;
+	uint32_t remained = count;
 	uint16_t async_descs_idx = 0;
 	uint16_t num_buffers;
 	uint16_t num_desc;
@@ -1892,9 +1916,17 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 	uint32_t num_async_pkts = 0, num_done_pkts = 0;
 	struct vring_packed_desc async_descs[vq->size];
 
-	rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
+	do {
+		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
+		if (remained >= PACKED_BATCH_SIZE) {
+			if (!virtio_dev_rx_async_batch_packed(dev, vq,
+				&pkts[pkt_idx], comp_pkts, &num_done_pkts)) {
+				pkt_idx += PACKED_BATCH_SIZE;
+				remained -= PACKED_BATCH_SIZE;
+				continue;
+			}
+		}
 
-	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		if (unlikely(virtio_dev_rx_async_single_packed(dev, vq,
 				pkts[pkt_idx], &num_desc, &num_buffers,
 				&async_descs[async_descs_idx],
@@ -1937,6 +1969,8 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 			comp_pkts[num_done_pkts++] = pkts[pkt_idx];
 		}
 
+		pkt_idx++;
+		remained--;
 		vq_inc_last_avail_packed(vq, num_desc);
 
 		/*
@@ -1961,13 +1995,12 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 			 */
			pkt_err = pkt_burst_idx - n_pkts;
 			pkt_burst_idx = 0;
-			pkt_idx++;
 			break;
 		}
 
 		pkt_burst_idx = 0;
-	}
+	} while (pkt_idx < count);
 
 	if (pkt_burst_idx) {
 		n_pkts = vq->async_ops.transfer_data(dev->vid,
				queue_id, tdes, 0, pkt_burst_idx);
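
For readers skimming the diff, the control flow added here reduces to:
try a full PACKED_BATCH_SIZE window first, taking it only when every
packet in the window is below the async copy threshold; otherwise fall
back to the existing single-packet path and retry batching from the next
position. Below is a minimal standalone sketch of that dispatch pattern,
not the vhost code itself: pkt, try_batch, do_single, BATCH_SIZE and
CPY_THRESHOLD are all hypothetical stand-ins.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BATCH_SIZE 4		/* stand-in for PACKED_BATCH_SIZE */
#define CPY_THRESHOLD 256	/* stand-in for vq->async_threshold */

struct pkt {
	uint32_t len;
};

/* Batch path: taken only when all BATCH_SIZE packets are small. */
static bool
try_batch(const struct pkt *pkts)
{
	uint32_t i;

	for (i = 0; i < BATCH_SIZE; i++) {
		if (pkts[i].len >= CPY_THRESHOLD)
			return false;	/* large packet: leave it to the single path */
	}
	/* ... enqueue all BATCH_SIZE descriptors in one shot ... */
	return true;
}

/* Single path: one packet at a time; large ones would be DMA-offloaded. */
static void
do_single(const struct pkt *p)
{
	(void)p;
	/* ... enqueue one packet, offloading if its length crosses the threshold ... */
}

static void
submit(const struct pkt *pkts, uint32_t count)
{
	uint32_t idx = 0, remained = count;

	do {
		/* Try the batch path while a full window remains. */
		if (remained >= BATCH_SIZE && try_batch(&pkts[idx])) {
			idx += BATCH_SIZE;
			remained -= BATCH_SIZE;
			continue;
		}
		/* Fall back to the per-packet path. */
		do_single(&pkts[idx]);
		idx++;
		remained--;
	} while (idx < count);
}

int
main(void)
{
	struct pkt pkts[6] = {
		{64}, {64}, {64}, {64},	/* small run: taken as one batch */
		{1500}, {64},		/* mixed tail: handled one by one */
	};

	submit(pkts, 6);
	printf("submitted 6 packets\n");
	return 0;
}

The do/while mirrors the patch: a large packet costs only one
single-packet step, after which batching can resume on the remaining
small packets.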