From patchwork Thu Sep 19 16:36:28 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59381
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:28 +0800
Message-Id: <20190919163643.24130-2-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 01/16] vhost: add single packet enqueue function

Add a vhost enqueue function for single packets, and meanwhile leave
space for the flush used ring function.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 5b85b832d..2b5c47145 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -774,6 +774,70 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return error;
 }
 
+/*
+ * Returns -1 on fail, 0 on success
+ */
+static __rte_always_inline int
+vhost_enqueue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    struct rte_mbuf *pkt, struct buf_vector *buf_vec, uint16_t *nr_descs)
+{
+    uint16_t nr_vec = 0;
+
+    uint16_t avail_idx;
+    uint16_t max_tries, tries = 0;
+
+    uint16_t buf_id = 0;
+    uint32_t len = 0;
+    uint16_t desc_count;
+
+    uint32_t size = pkt->pkt_len + dev->vhost_hlen;
+    avail_idx = vq->last_avail_idx;
+
+    if (rxvq_is_mergeable(dev))
+        max_tries = vq->size - 1;
+    else
+        max_tries = 1;
+
+    uint16_t num_buffers = 0;
+
+    while (size > 0) {
+        /*
+         * if we tried all available ring items, and still
+         * can't get enough buf, it means something abnormal
+         * happened.
+         */
+        if (unlikely(++tries > max_tries))
+            return -1;
+
+        if (unlikely(fill_vec_buf_packed(dev, vq,
+                        avail_idx, &desc_count,
+                        buf_vec, &nr_vec,
+                        &buf_id, &len,
+                        VHOST_ACCESS_RW) < 0)) {
+            return -1;
+        }
+
+        len = RTE_MIN(len, size);
+
+        size -= len;
+
+        avail_idx += desc_count;
+        if (avail_idx >= vq->size)
+            avail_idx -= vq->size;
+
+        *nr_descs += desc_count;
+        num_buffers += 1;
+    }
+
+    if (copy_mbuf_to_desc(dev, vq, pkt,
+                    buf_vec, nr_vec,
+                    num_buffers) < 0) {
+        return -1;
+    }
+
+    return 0;
+}
+
 static __rte_noinline uint32_t
 virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mbuf **pkts, uint32_t count)
@@ -831,6 +895,35 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return pkt_idx;
 }
 
+static __rte_unused int16_t
+virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    struct rte_mbuf *pkt)
+{
+    struct buf_vector buf_vec[BUF_VECTOR_MAX];
+    uint16_t nr_descs = 0;
+
+    rte_smp_rmb();
+    if (unlikely(vhost_enqueue_single_packed(dev, vq, pkt, buf_vec,
+                        &nr_descs) < 0)) {
+        VHOST_LOG_DEBUG(VHOST_DATA,
+                "(%d) failed to get enough desc from vring\n",
+                dev->vid);
+        return -1;
+    }
+
+    VHOST_LOG_DEBUG(VHOST_DATA, "(%d) current index %d | end index %d\n",
+            dev->vid, vq->last_avail_idx,
+            vq->last_avail_idx + nr_descs);
+
+    vq->last_avail_idx += nr_descs;
+    if (vq->last_avail_idx >= vq->size) {
+        vq->last_avail_idx -= vq->size;
+        vq->avail_wrap_counter ^= 1;
+    }
+
+    return 0;
+}
+
 static __rte_noinline uint32_t
 virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mbuf **pkts, uint32_t count)
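For illustration, a minimal standalone sketch of the descriptor walk above,
with toy types and hypothetical names (walk_ring, RING_SIZE) rather than the
DPDK code: descriptors are consumed until `size` bytes fit, the avail index
wraps at the ring size, and exceeding max_tries signals an abnormal ring.

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 8

    static int walk_ring(uint32_t size, const uint32_t *desc_len,
            uint16_t last_avail_idx, int mergeable)
    {
        uint16_t avail_idx = last_avail_idx;
        uint16_t tries = 0, max_tries = mergeable ? RING_SIZE - 1 : 1;
        uint16_t num_buffers = 0;

        while (size > 0) {
            if (++tries > max_tries)
                return -1;    /* ring exhausted: abnormal */
            uint32_t len = desc_len[avail_idx];
            size -= (len > size) ? size : len;    /* RTE_MIN(len, size) */
            if (++avail_idx >= RING_SIZE)
                avail_idx -= RING_SIZE;    /* wrap like last_avail_idx */
            num_buffers++;
        }
        return num_buffers;
    }

    int main(void)
    {
        uint32_t lens[RING_SIZE] = { 1024, 1024, 1024, 1024,
                                     1024, 1024, 1024, 1024 };
        /* 2500 bytes need three 1024-byte buffers; index wraps 6->7->0 */
        printf("buffers used: %d\n", walk_ring(2500, lens, 6, 1));
        return 0;
    }
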
From patchwork Thu Sep 19 16:36:29 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59382
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:29 +0800
Message-Id: <20190919163643.24130-3-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 02/16] vhost: unify unroll pragma parameter

Add a macro to unify the Clang/ICC/GCC unroll pragma format. Burst
functions consist of several small loops which are optimized by the
compiler's loop unrolling pragma.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
index 8623e91c0..30839a001 100644
--- a/lib/librte_vhost/Makefile
+++ b/lib/librte_vhost/Makefile
@@ -16,6 +16,24 @@ CFLAGS += -I vhost_user
 CFLAGS += -fno-strict-aliasing
 LDLIBS += -lpthread
 
+ifeq ($(RTE_TOOLCHAIN), gcc)
+ifeq ($(shell test $(GCC_VERSION) -ge 83 && echo 1), 1)
+CFLAGS += -DSUPPORT_GCC_UNROLL_PRAGMA
+endif
+endif
+
+ifeq ($(RTE_TOOLCHAIN), clang)
+ifeq ($(shell test $(CLANG_MAJOR_VERSION)$(CLANG_MINOR_VERSION) -ge 37 && echo 1), 1)
+CFLAGS += -DSUPPORT_CLANG_UNROLL_PRAGMA
+endif
+endif
+
+ifeq ($(RTE_TOOLCHAIN), icc)
+ifeq ($(shell test $(ICC_MAJOR_VERSION) -ge 16 && echo 1), 1)
+CFLAGS += -DSUPPORT_ICC_UNROLL_PRAGMA
+endif
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_VHOST_NUMA),y)
 LDLIBS += -lnuma
 endif
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 884befa85..5074226f0 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -39,6 +39,24 @@
 
 #define VHOST_LOG_CACHE_NR 32
 
+#ifdef SUPPORT_GCC_UNROLL_PRAGMA
+#define PRAGMA_PARAM "GCC unroll 4"
+#endif
+
+#ifdef SUPPORT_CLANG_UNROLL_PRAGMA
+#define PRAGMA_PARAM "unroll 4"
+#endif
+
+#ifdef SUPPORT_ICC_UNROLL_PRAGMA
+#define PRAGMA_PARAM "unroll (4)"
+#endif
+
+#ifdef PRAGMA_PARAM
+#define UNROLL_PRAGMA(param) _Pragma(param)
+#else
+#define UNROLL_PRAGMA(param) do {} while(0);
+#endif
+
 /**
  * Structure contains buffer address, length and descriptor index
  * from vring to do scatter RX.
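A minimal sketch of how the _Pragma-based macro above expands, mirroring the
patch's PRAGMA_PARAM strings; the GCC string assumes GCC >= 8.3, matching the
Makefile check above.

    #include <stdio.h>

    #if defined(__clang__)
    #define PRAGMA_PARAM "unroll 4"
    #elif defined(__GNUC__)
    #define PRAGMA_PARAM "GCC unroll 4"    /* needs GCC >= 8.3 */
    #endif

    #ifdef PRAGMA_PARAM
    #define UNROLL_PRAGMA(param) _Pragma(param)
    #else
    #define UNROLL_PRAGMA(param) do {} while (0);
    #endif

    int main(void)
    {
        int sum = 0, i;

        UNROLL_PRAGMA(PRAGMA_PARAM)
        for (i = 0; i < 4; i++)    /* compiler fully unrolls this loop */
            sum += i;
        printf("%d\n", sum);
        return 0;
    }
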
From patchwork Thu Sep 19 16:36:30 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59383
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:30 +0800
Message-Id: <20190919163643.24130-4-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 03/16] vhost: add burst enqueue function for packed ring

The burst enqueue function will first check whether descriptors are
cache aligned, and will also check prerequisites at the beginning. The
burst enqueue function does not support chained mbufs; the single
packet enqueue function will handle them.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 5074226f0..67889c80a 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -39,6 +39,9 @@
 
 #define VHOST_LOG_CACHE_NR 32
 
+#define PACKED_DESCS_BURST (RTE_CACHE_LINE_SIZE / \
+                sizeof(struct vring_packed_desc))
+
 #ifdef SUPPORT_GCC_UNROLL_PRAGMA
 #define PRAGMA_PARAM "GCC unroll 4"
 #endif
@@ -57,6 +60,8 @@
 #define UNROLL_PRAGMA(param) do {} while(0);
 #endif
 
+#define PACKED_BURST_MASK (PACKED_DESCS_BURST - 1)
+
 /**
  * Structure contains buffer address, length and descriptor index
  * from vring to do scatter RX.
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2b5c47145..c664b27c5 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -895,6 +895,84 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return pkt_idx;
 }
 
+static __rte_unused __rte_always_inline int
+virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    struct rte_mbuf **pkts)
+{
+    bool wrap_counter = vq->avail_wrap_counter;
+    struct vring_packed_desc *descs = vq->desc_packed;
+    uint16_t avail_idx = vq->last_avail_idx;
+
+    uint64_t desc_addrs[PACKED_DESCS_BURST];
+    struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_DESCS_BURST];
+    uint32_t buf_offset = dev->vhost_hlen;
+    uint64_t lens[PACKED_DESCS_BURST];
+
+    uint16_t i;
+
+    if (unlikely(avail_idx & PACKED_BURST_MASK))
+        return -1;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        if (unlikely(pkts[i]->next != NULL))
+            return -1;
+        if (unlikely(!desc_is_avail(&descs[avail_idx + i],
+                        wrap_counter)))
+            return -1;
+    }
+
+    rte_smp_rmb();
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++)
+        lens[i] = descs[avail_idx + i].len;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        if (unlikely(pkts[i]->pkt_len > (lens[i] - buf_offset)))
+            return -1;
+    }
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++)
+        desc_addrs[i] = vhost_iova_to_vva(dev, vq,
+                        descs[avail_idx + i].addr,
+                        &lens[i],
+                        VHOST_ACCESS_RW);
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        if (unlikely(lens[i] != descs[avail_idx + i].len))
+            return -1;
+    }
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        rte_prefetch0((void *)(uintptr_t)desc_addrs[i]);
+        hdrs[i] = (struct virtio_net_hdr_mrg_rxbuf *)desc_addrs[i];
+        lens[i] = pkts[i]->pkt_len + dev->vhost_hlen;
+    }
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++)
+        virtio_enqueue_offload(pkts[i], &hdrs[i]->hdr);
+
+    vq->last_avail_idx += PACKED_DESCS_BURST;
+    if (vq->last_avail_idx >= vq->size) {
+        vq->last_avail_idx -= vq->size;
+        vq->avail_wrap_counter ^= 1;
+    }
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        rte_memcpy((void *)(uintptr_t)(desc_addrs[i] + buf_offset),
+               rte_pktmbuf_mtod_offset(pkts[i], void *, 0),
+               pkts[i]->pkt_len);
+    }
+
+    return 0;
+}
+
 static __rte_unused int16_t
 virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mbuf *pkt)
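A sketch of the burst-size arithmetic assumed above: a packed descriptor is
16 bytes, so one 64-byte cache line holds four of them, and
(avail_idx & PACKED_BURST_MASK) == 0 means a burst starts on a cache-line
boundary. Standalone toy code, not the library's definitions.

    #include <stdint.h>
    #include <stdio.h>

    struct toy_packed_desc {    /* same size/layout as vring_packed_desc */
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;
    };

    #define CACHE_LINE 64
    #define DESCS_BURST (CACHE_LINE / sizeof(struct toy_packed_desc))
    #define BURST_MASK (DESCS_BURST - 1)

    int main(void)
    {
        uint16_t avail_idx = 8;

        printf("burst size = %zu\n", DESCS_BURST);    /* 4 */
        printf("aligned: %s\n",
               (avail_idx & BURST_MASK) ? "no" : "yes");    /* yes */
        return 0;
    }
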
From patchwork Thu Sep 19 16:36:31 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59384
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:31 +0800
Message-Id: <20190919163643.24130-5-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 04/16] vhost: add single packet dequeue function

Add a vhost single packet dequeue function for the packed ring, and
meanwhile leave space for the shadow used ring update function.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index c664b27c5..047fa7dc8 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1580,6 +1580,61 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return i;
 }
 
+static __rte_always_inline int
+vhost_dequeue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t *buf_id,
+    uint16_t *desc_count)
+{
+    struct buf_vector buf_vec[BUF_VECTOR_MAX];
+    uint32_t dummy_len;
+    uint16_t nr_vec = 0;
+    int err;
+
+    if (unlikely(fill_vec_buf_packed(dev, vq,
+                    vq->last_avail_idx, desc_count,
+                    buf_vec, &nr_vec,
+                    buf_id, &dummy_len,
+                    VHOST_ACCESS_RO) < 0)) {
+        return -1;
+    }
+
+    *pkts = rte_pktmbuf_alloc(mbuf_pool);
+    if (unlikely(*pkts == NULL)) {
+        RTE_LOG(ERR, VHOST_DATA,
+            "Failed to allocate memory for mbuf.\n");
+        return -1;
+    }
+
+    err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, *pkts,
+                mbuf_pool);
+    if (unlikely(err)) {
+        rte_pktmbuf_free(*pkts);
+        return -1;
+    }
+
+    return 0;
+}
+
+static __rte_unused int
+virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts)
+{
+
+    uint16_t buf_id, desc_count;
+
+    if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
+                    &desc_count))
+        return -1;
+
+    vq->last_avail_idx += desc_count;
+    if (vq->last_avail_idx >= vq->size) {
+        vq->last_avail_idx -= vq->size;
+        vq->avail_wrap_counter ^= 1;
+    }
+
+    return 0;
+}
+
 static __rte_noinline uint16_t
 virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
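A tiny standalone model of the index/wrap bookkeeping this patch repeats
after each dequeue: advancing last_avail_idx past the ring size flips the
avail wrap counter. Types and names here are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    struct toy_vq {
        uint16_t size;
        uint16_t last_avail_idx;
        uint8_t avail_wrap_counter;
    };

    static void consume_descs(struct toy_vq *vq, uint16_t desc_count)
    {
        vq->last_avail_idx += desc_count;
        if (vq->last_avail_idx >= vq->size) {
            vq->last_avail_idx -= vq->size;
            vq->avail_wrap_counter ^= 1;    /* ring wrapped */
        }
    }

    int main(void)
    {
        struct toy_vq vq = { .size = 256, .last_avail_idx = 255,
                             .avail_wrap_counter = 1 };
        consume_descs(&vq, 2);
        printf("idx=%u wrap=%u\n", vq.last_avail_idx,
               vq.avail_wrap_counter);    /* idx=1 wrap=0 */
        return 0;
    }
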
From patchwork Thu Sep 19 16:36:32 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59385
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:32 +0800
Message-Id: <20190919163643.24130-6-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 05/16] vhost: add burst dequeue function

Add a burst dequeue function for the packed ring, like the enqueue
function. The burst dequeue function does not support chained
descriptors; the single packet dequeue function will handle them.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 67889c80a..9fa3c8adf 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -61,6 +61,7 @@
 #endif
 
 #define PACKED_BURST_MASK (PACKED_DESCS_BURST - 1)
+#define DESC_SINGLE_DEQUEUE (VRING_DESC_F_NEXT | VRING_DESC_F_INDIRECT)
 
 /**
  * Structure contains buffer address, length and descriptor index
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 047fa7dc8..23c0f4685 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1580,6 +1580,121 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return i;
 }
 
+static __rte_always_inline int
+vhost_dequeue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
+    uint16_t avail_idx, uintptr_t *desc_addrs, uint16_t *ids)
+{
+    bool wrap_counter = vq->avail_wrap_counter;
+    struct vring_packed_desc *descs = vq->desc_packed;
+    uint64_t lens[PACKED_DESCS_BURST];
+    uint64_t buf_lens[PACKED_DESCS_BURST];
+    uint32_t buf_offset = dev->vhost_hlen;
+    uint16_t i;
+
+    if (unlikely(avail_idx & PACKED_BURST_MASK))
+        return -1;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        if (unlikely(!desc_is_avail(&descs[avail_idx + i],
+                        wrap_counter)))
+            return -1;
+        if (unlikely(descs[avail_idx + i].flags &
+                 DESC_SINGLE_DEQUEUE))
+            return -1;
+    }
+
+    rte_smp_rmb();
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++)
+        lens[i] = descs[avail_idx + i].len;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        desc_addrs[i] = vhost_iova_to_vva(dev, vq,
+                        descs[avail_idx + i].addr,
+                        &lens[i], VHOST_ACCESS_RW);
+    }
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        if (unlikely((lens[i] != descs[avail_idx + i].len)))
+            return -1;
+    }
+
+    if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, PACKED_DESCS_BURST))
+        return -1;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++)
+        buf_lens[i] = pkts[i]->buf_len - pkts[i]->data_off;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        if (unlikely(buf_lens[i] < (lens[i] - buf_offset)))
+            goto free_buf;
+    }
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        pkts[i]->pkt_len = descs[avail_idx + i].len - buf_offset;
+        pkts[i]->data_len = pkts[i]->pkt_len;
+        ids[i] = descs[avail_idx + i].id;
+    }
+
+    return 0;
+
+free_buf:
+    for (i = 0; i < PACKED_DESCS_BURST; i++)
+        rte_pktmbuf_free(pkts[i]);
+
+    return -1;
+}
+
+static __rte_unused int
+virtio_dev_tx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts)
+{
+    uint16_t avail_idx = vq->last_avail_idx;
+    uint32_t buf_offset = dev->vhost_hlen;
+    uintptr_t desc_addrs[PACKED_DESCS_BURST];
+    uint16_t ids[PACKED_DESCS_BURST];
+    int ret;
+    struct virtio_net_hdr *hdr;
+    uint16_t i;
+
+    ret = vhost_dequeue_burst_packed(dev, vq, mbuf_pool, pkts, avail_idx,
+                     desc_addrs, ids);
+
+    if (ret)
+        return ret;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        rte_prefetch0((void *)(uintptr_t)desc_addrs[i]);
+        rte_memcpy(rte_pktmbuf_mtod_offset(pkts[i], void *, 0),
+               (void *)(uintptr_t)(desc_addrs[i] + buf_offset),
+               pkts[i]->pkt_len);
+    }
+
+    if (virtio_net_with_host_offload(dev)) {
+        UNROLL_PRAGMA(PRAGMA_PARAM)
+        for (i = 0; i < PACKED_DESCS_BURST; i++) {
+            hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
+            vhost_dequeue_offload(hdr, pkts[i]);
+        }
+    }
+
+    vq->last_avail_idx += PACKED_DESCS_BURST;
+    if (vq->last_avail_idx >= vq->size) {
+        vq->last_avail_idx -= vq->size;
+        vq->avail_wrap_counter ^= 1;
+    }
+    return 0;
+}
+
 static __rte_always_inline int
 vhost_dequeue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t *buf_id,
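A sketch of the two per-descriptor checks the burst dequeue performs: the
descriptor must be available for the current wrap counter, and must not be
chained or indirect (those fall back to the single-packet path). Flag values
follow the virtio 1.1 spec; the rest is toy standalone code.

    #include <stdint.h>
    #include <stdio.h>

    #define VRING_DESC_F_NEXT       (1 << 0)
    #define VRING_DESC_F_INDIRECT   (1 << 2)
    #define VRING_DESC_F_AVAIL      (1 << 7)
    #define VRING_DESC_F_USED       (1 << 15)
    #define DESC_SINGLE_DEQUEUE (VRING_DESC_F_NEXT | VRING_DESC_F_INDIRECT)

    static int desc_is_avail(uint16_t flags, int wrap)
    {
        int avail = !!(flags & VRING_DESC_F_AVAIL);
        int used = !!(flags & VRING_DESC_F_USED);

        /* available exactly when AVAIL matches the wrap counter
         * and USED does not */
        return avail == wrap && used != wrap;
    }

    int main(void)
    {
        uint16_t flags = VRING_DESC_F_AVAIL;  /* avail, unused, unchained */

        printf("burst ok: %d\n", desc_is_avail(flags, 1) &&
               !(flags & DESC_SINGLE_DEQUEUE));    /* 1 */
        return 0;
    }
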
From patchwork Thu Sep 19 16:36:33 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59386
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:33 +0800
Message-Id: <20190919163643.24130-7-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 06/16] vhost: rename flush shadow used ring functions

Simplify the flush shadow used ring function names, as all shadow rings
reflect used rings. There is no need to emphasize the ring type.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 23c0f4685..ebd6c175d 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -38,7 +38,7 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
 }
 
 static __rte_always_inline void
-do_flush_shadow_used_ring_split(struct virtio_net *dev,
+do_flush_shadow_split(struct virtio_net *dev,
             struct vhost_virtqueue *vq,
             uint16_t to, uint16_t from, uint16_t size)
 {
@@ -51,22 +51,22 @@ do_flush_shadow_used_ring_split(struct virtio_net *dev,
 }
 
 static __rte_always_inline void
-flush_shadow_used_ring_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
+flush_shadow_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
     uint16_t used_idx = vq->last_used_idx & (vq->size - 1);
 
     if (used_idx + vq->shadow_used_idx <= vq->size) {
-        do_flush_shadow_used_ring_split(dev, vq, used_idx, 0,
+        do_flush_shadow_split(dev, vq, used_idx, 0,
                       vq->shadow_used_idx);
     } else {
         uint16_t size;
 
         /* update used ring interval [used_idx, vq->size] */
         size = vq->size - used_idx;
-        do_flush_shadow_used_ring_split(dev, vq, used_idx, 0, size);
+        do_flush_shadow_split(dev, vq, used_idx, 0, size);
 
         /* update the left half used ring interval [0, left_size] */
-        do_flush_shadow_used_ring_split(dev, vq, 0, size,
+        do_flush_shadow_split(dev, vq, 0, size,
                 vq->shadow_used_idx - size);
     }
     vq->last_used_idx += vq->shadow_used_idx;
@@ -82,7 +82,7 @@ flush_shadow_used_ring_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 }
 
 static __rte_always_inline void
-update_shadow_used_ring_split(struct vhost_virtqueue *vq,
+update_shadow_split(struct vhost_virtqueue *vq,
              uint16_t desc_idx, uint32_t len)
 {
     uint16_t i = vq->shadow_used_idx++;
@@ -92,7 +92,7 @@ update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-flush_shadow_used_ring_packed(struct virtio_net *dev,
+flush_shadow_packed(struct virtio_net *dev,
             struct vhost_virtqueue *vq)
 {
     int i;
@@ -159,7 +159,7 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
 }
 
 static __rte_always_inline void
-update_shadow_used_ring_packed(struct vhost_virtqueue *vq,
+update_shadow_packed(struct vhost_virtqueue *vq,
              uint16_t desc_idx, uint32_t len, uint16_t count)
 {
     uint16_t i = vq->shadow_used_idx++;
@@ -421,7 +421,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
                         VHOST_ACCESS_RW) < 0))
             return -1;
         len = RTE_MIN(len, size);
-        update_shadow_used_ring_split(vq, head_idx, len);
+        update_shadow_split(vq, head_idx, len);
         size -= len;
 
         cur_idx++;
@@ -597,7 +597,7 @@ reserve_avail_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
             return -1;
 
         len = RTE_MIN(len, size);
-        update_shadow_used_ring_packed(vq, buf_id, len, desc_count);
+        update_shadow_packed(vq, buf_id, len, desc_count);
 
         size -= len;
 
         avail_idx += desc_count;
@@ -888,7 +888,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     do_data_copy_enqueue(dev, vq);
 
     if (likely(vq->shadow_used_idx)) {
-        flush_shadow_used_ring_split(dev, vq);
+        flush_shadow_split(dev, vq);
         vhost_vring_call_split(dev, vq);
     }
 
@@ -1046,7 +1046,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     do_data_copy_enqueue(dev, vq);
 
     if (likely(vq->shadow_used_idx)) {
-        flush_shadow_used_ring_packed(dev, vq);
+        flush_shadow_packed(dev, vq);
         vhost_vring_call_packed(dev, vq);
     }
 
@@ -1475,8 +1475,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
             next = TAILQ_NEXT(zmbuf, next);
 
             if (mbuf_is_consumed(zmbuf->mbuf)) {
-                update_shadow_used_ring_split(vq,
-                        zmbuf->desc_idx, 0);
+                update_shadow_split(vq, zmbuf->desc_idx, 0);
                 TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
                 restore_mbuf(zmbuf->mbuf);
                 rte_pktmbuf_free(zmbuf->mbuf);
@@ -1486,7 +1485,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
         }
 
         if (likely(vq->shadow_used_idx)) {
-            flush_shadow_used_ring_split(dev, vq);
+            flush_shadow_split(dev, vq);
             vhost_vring_call_split(dev, vq);
         }
     }
@@ -1526,7 +1525,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
             break;
 
         if (likely(dev->dequeue_zero_copy == 0))
-            update_shadow_used_ring_split(vq, head_idx, 0);
+            update_shadow_split(vq, head_idx, 0);
 
         pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
         if (unlikely(pkts[i] == NULL)) {
@@ -1572,7 +1571,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     if (unlikely(i < count))
         vq->shadow_used_idx = i;
     if (likely(vq->shadow_used_idx)) {
-        flush_shadow_used_ring_split(dev, vq);
+        flush_shadow_split(dev, vq);
         vhost_vring_call_split(dev, vq);
     }
 }
@@ -1764,7 +1763,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
             next = TAILQ_NEXT(zmbuf, next);
 
             if (mbuf_is_consumed(zmbuf->mbuf)) {
-                update_shadow_used_ring_packed(vq,
+                update_shadow_packed(vq,
                         zmbuf->desc_idx,
                         0,
                         zmbuf->desc_count);
@@ -1778,7 +1777,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
         }
 
         if (likely(vq->shadow_used_idx)) {
-            flush_shadow_used_ring_packed(dev, vq);
+            flush_shadow_packed(dev, vq);
             vhost_vring_call_packed(dev, vq);
         }
     }
@@ -1804,7 +1803,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
             break;
 
         if (likely(dev->dequeue_zero_copy == 0))
-            update_shadow_used_ring_packed(vq, buf_id, 0,
+            update_shadow_packed(vq, buf_id, 0,
                     desc_count);
 
         pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
@@ -1857,7 +1856,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     if (unlikely(i < count))
         vq->shadow_used_idx = i;
     if (likely(vq->shadow_used_idx)) {
-        flush_shadow_used_ring_packed(dev, vq);
+        flush_shadow_packed(dev, vq);
         vhost_vring_call_packed(dev, vq);
     }
 }
From patchwork Thu Sep 19 16:36:34 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59387
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:34 +0800
Message-Id: <20190919163643.24130-8-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 07/16] vhost: flush vhost enqueue shadow ring by burst

Buffer vhost enqueue shadow ring updates, and flush the shadow ring
only when the number of buffered descriptors exceeds one burst. Thus
virtio can receive packets at a faster frequency.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 9fa3c8adf..000648dd4 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -163,6 +163,7 @@ struct vhost_virtqueue {
         struct vring_used_elem_packed *shadow_used_packed;
     };
     uint16_t                shadow_used_idx;
+    uint16_t                enqueue_shadow_count;
     struct vhost_vring_addr ring_addrs;
 
     struct batch_copy_elem    *batch_copy_elems;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index ebd6c175d..e2787b72e 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -169,6 +169,24 @@ update_shadow_packed(struct vhost_virtqueue *vq,
     vq->shadow_used_packed[i].count = count;
 }
 
+static __rte_always_inline void
+update_enqueue_shadow_packed(struct vhost_virtqueue *vq, uint16_t desc_idx,
+    uint32_t len, uint16_t count)
+{
+    /* enqueue shadow flush action aligned with burst num */
+    if (!vq->shadow_used_idx)
+        vq->enqueue_shadow_count = vq->last_used_idx &
+                       PACKED_BURST_MASK;
+
+    uint16_t i = vq->shadow_used_idx++;
+
+    vq->shadow_used_packed[i].id = desc_idx;
+    vq->shadow_used_packed[i].len = len;
+    vq->shadow_used_packed[i].count = count;
+
+    vq->enqueue_shadow_count += count;
+}
+
 static inline void
 do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
@@ -198,6 +216,21 @@ do_data_copy_dequeue(struct vhost_virtqueue *vq)
     vq->batch_copy_nb_elems = 0;
 }
 
+static __rte_always_inline void
+flush_enqueue_packed(struct virtio_net *dev,
+    struct vhost_virtqueue *vq, uint32_t len[], uint16_t id[],
+    uint16_t count[], uint16_t num_buffers)
+{
+    int i;
+    for (i = 0; i < num_buffers; i++) {
+        update_enqueue_shadow_packed(vq, id[i], len[i], count[i]);
+
+        if (vq->enqueue_shadow_count >= PACKED_DESCS_BURST) {
+            do_data_copy_enqueue(dev, vq);
+            flush_shadow_packed(dev, vq);
+        }
+    }
+}
 /* avoid write operation when necessary, to lessen cache issues */
 #define ASSIGN_UNLESS_EQUAL(var, val) do {    \
     if ((var) != (val))            \
@@ -799,6 +832,9 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
         max_tries = 1;
 
     uint16_t num_buffers = 0;
+    uint32_t buffer_len[max_tries];
+    uint16_t buffer_buf_id[max_tries];
+    uint16_t buffer_desc_count[max_tries];
 
     while (size > 0) {
         /*
@@ -821,6 +857,10 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
         size -= len;
 
+        buffer_len[num_buffers] = len;
+        buffer_buf_id[num_buffers] = buf_id;
+        buffer_desc_count[num_buffers] = desc_count;
+
         avail_idx += desc_count;
         if (avail_idx >= vq->size)
             avail_idx -= vq->size;
@@ -835,6 +875,9 @@ vhost_enqueue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
         return -1;
     }
 
+    flush_enqueue_packed(dev, vq, buffer_len, buffer_buf_id,
+                 buffer_desc_count, num_buffers);
+
     return 0;
 }
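A standalone model of the "flush when a burst's worth is buffered" policy
above; the names (buffer_update, struct shadow) are hypothetical and the
data copy/ring write is reduced to a counter.

    #include <stdint.h>
    #include <stdio.h>

    #define DESCS_BURST 4

    struct shadow {
        uint16_t count;    /* buffered descriptors */
        uint16_t flushes;
    };

    static void buffer_update(struct shadow *s, uint16_t desc_count)
    {
        s->count += desc_count;
        if (s->count >= DESCS_BURST) {    /* flush threshold reached */
            s->flushes++;
            s->count = 0;
        }
    }

    int main(void)
    {
        struct shadow s = { 0, 0 };
        for (int i = 0; i < 10; i++)
            buffer_update(&s, 1);
        printf("flushes=%u pending=%u\n", s.flushes, s.count); /* 2, 2 */
        return 0;
    }
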
From patchwork Thu Sep 19 16:36:35 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59388
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:35 +0800
Message-Id: <20190919163643.24130-9-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 08/16] vhost: add flush function for burst enqueue

Flush used flags when the burst enqueue function is finished.
Descriptors' flags are pre-calculated, as they will be reset by vhost.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 000648dd4..9c42c7db0 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -39,6 +39,9 @@
 
 #define VHOST_LOG_CACHE_NR 32
 
+#define VIRTIO_RX_USED_FLAG (0ULL | VRING_DESC_F_AVAIL | VRING_DESC_F_USED \
+                | VRING_DESC_F_WRITE)
+#define VIRTIO_RX_USED_WRAP_FLAG (VRING_DESC_F_WRITE)
 #define PACKED_DESCS_BURST (RTE_CACHE_LINE_SIZE / \
                 sizeof(struct vring_packed_desc))
 
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index e2787b72e..8e4036204 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -169,6 +169,51 @@ update_shadow_packed(struct vhost_virtqueue *vq,
     vq->shadow_used_packed[i].count = count;
 }
 
+static __rte_always_inline void
+flush_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    uint64_t *lens, uint16_t *ids, uint16_t flags)
+{
+    uint16_t i;
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        vq->desc_packed[vq->last_used_idx + i].id = ids[i];
+        vq->desc_packed[vq->last_used_idx + i].len = lens[i];
+    }
+
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++) {
+        rte_smp_wmb();
+        vq->desc_packed[vq->last_used_idx + i].flags = flags;
+    }
+
+    vhost_log_cache_used_vring(dev, vq, vq->last_used_idx *
+                   sizeof(struct vring_packed_desc),
+                   sizeof(struct vring_packed_desc) *
+                   PACKED_DESCS_BURST);
+    vhost_log_cache_sync(dev, vq);
+
+    vq->last_used_idx += PACKED_DESCS_BURST;
+    if (vq->last_used_idx >= vq->size) {
+        vq->used_wrap_counter ^= 1;
+        vq->last_used_idx -= vq->size;
+    }
+}
+
+static __rte_always_inline void
+flush_enqueue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    uint64_t *lens, uint16_t *ids)
+{
+    uint16_t flags = 0;
+
+    if (vq->used_wrap_counter)
+        flags = VIRTIO_RX_USED_FLAG;
+    else
+        flags = VIRTIO_RX_USED_WRAP_FLAG;
+
+    flush_burst_packed(dev, vq, lens, ids, flags);
+}
+
 static __rte_always_inline void
 update_enqueue_shadow_packed(struct vhost_virtqueue *vq, uint16_t desc_idx,
     uint32_t len, uint16_t count)
@@ -950,6 +995,7 @@ virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_DESCS_BURST];
     uint32_t buf_offset = dev->vhost_hlen;
     uint64_t lens[PACKED_DESCS_BURST];
+    uint16_t ids[PACKED_DESCS_BURST];
 
     uint16_t i;
 
@@ -1013,6 +1059,12 @@ virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
                pkts[i]->pkt_len);
     }
 
+    UNROLL_PRAGMA(PRAGMA_PARAM)
+    for (i = 0; i < PACKED_DESCS_BURST; i++)
+        ids[i] = descs[avail_idx + i].id;
+
+    flush_enqueue_burst_packed(dev, vq, lens, ids);
+
     return 0;
 }
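A sketch of the used-flag pre-calculation above: when the used wrap counter
is set, AVAIL|USED|WRITE is written back, otherwise only WRITE. Flag bits
follow the virtio 1.1 spec; the helper is a toy stand-in.

    #include <stdint.h>
    #include <stdio.h>

    #define VRING_DESC_F_WRITE    (1 << 1)
    #define VRING_DESC_F_AVAIL    (1 << 7)
    #define VRING_DESC_F_USED     (1 << 15)

    static uint16_t rx_used_flags(int used_wrap_counter)
    {
        if (used_wrap_counter)
            return VRING_DESC_F_AVAIL | VRING_DESC_F_USED |
                   VRING_DESC_F_WRITE;
        return VRING_DESC_F_WRITE;
    }

    int main(void)
    {
        printf("flags(wrap=1)=0x%x\n", rx_used_flags(1)); /* 0x8082 */
        printf("flags(wrap=0)=0x%x\n", rx_used_flags(0)); /* 0x2 */
        return 0;
    }
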
From patchwork Thu Sep 19 16:36:36 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59389
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:36 +0800
Message-Id: <20190919163643.24130-10-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 09/16] vhost: buffer vhost dequeue shadow ring

Buffer as many used ring updates as possible in the vhost dequeue
function, for coordination with the virtio driver. To support
buffering, a shadow used ring element should contain the descriptor
index and its wrap counter. The first shadowed ring index is recorded
for calculating the buffered number.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 9c42c7db0..14e87f670 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -42,6 +42,8 @@
 #define VIRTIO_RX_USED_FLAG (0ULL | VRING_DESC_F_AVAIL | VRING_DESC_F_USED \
                 | VRING_DESC_F_WRITE)
 #define VIRTIO_RX_USED_WRAP_FLAG (VRING_DESC_F_WRITE)
+#define VIRTIO_TX_USED_FLAG (0ULL | VRING_DESC_F_AVAIL | VRING_DESC_F_USED)
+#define VIRTIO_TX_USED_WRAP_FLAG (0x0)
 #define PACKED_DESCS_BURST (RTE_CACHE_LINE_SIZE / \
                 sizeof(struct vring_packed_desc))
 
@@ -110,9 +112,11 @@ struct log_cache_entry {
 };
 
 struct vring_used_elem_packed {
+    uint16_t used_idx;
     uint16_t id;
     uint32_t len;
     uint32_t count;
+    uint16_t used_wrap_counter;
 };
 
 /**
@@ -167,6 +171,7 @@ struct vhost_virtqueue {
     };
     uint16_t                shadow_used_idx;
     uint16_t                enqueue_shadow_count;
+    uint16_t                dequeue_shadow_head;
     struct vhost_vring_addr ring_addrs;
 
     struct batch_copy_elem    *batch_copy_elems;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 8e4036204..94c1b8dc7 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -232,6 +232,43 @@ update_enqueue_shadow_packed(struct vhost_virtqueue *vq, uint16_t desc_idx,
     vq->enqueue_shadow_count += count;
 }
 
+static __rte_always_inline void
+update_dequeue_shadow_packed(struct vhost_virtqueue *vq, uint16_t buf_id,
+    uint16_t count)
+{
+    if (!vq->shadow_used_idx) {
+        vq->dequeue_shadow_head = vq->last_used_idx;
+
+        vq->shadow_used_packed[0].id = buf_id;
+        vq->shadow_used_packed[0].len = 0;
+        vq->shadow_used_packed[0].count = count;
+        vq->shadow_used_packed[0].used_idx = vq->last_used_idx;
+        vq->shadow_used_packed[0].used_wrap_counter =
+            vq->used_wrap_counter;
+
+        vq->shadow_used_idx = 1;
+    } else {
+        vq->desc_packed[vq->last_used_idx].id = buf_id;
+        vq->desc_packed[vq->last_used_idx].len = 0;
+
+        if (vq->used_wrap_counter)
+            vq->desc_packed[vq->last_used_idx].flags =
+                VIRTIO_TX_USED_FLAG;
+        else
+            vq->desc_packed[vq->last_used_idx].flags =
+                VIRTIO_TX_USED_WRAP_FLAG;
+
+    }
+
+    vq->last_used_idx += count;
+
+    if (vq->last_used_idx >= vq->size) {
+        vq->used_wrap_counter ^= 1;
+        vq->last_used_idx -= vq->size;
+    }
+}
+
+
 static inline void
 do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
@@ -1835,6 +1872,8 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
             &desc_count))
         return -1;
 
+    update_dequeue_shadow_packed(vq, buf_id, desc_count);
+
     vq->last_avail_idx += desc_count;
     if (vq->last_avail_idx >= vq->size) {
         vq->last_avail_idx -= vq->size;
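A sketch of why the shadow element above stores its own wrap counter: the
head descriptor's flags are written at flush time, possibly after the live
wrap counter has moved on, so the value captured at buffering time must be
used. Toy standalone code with hypothetical names.

    #include <stdint.h>
    #include <stdio.h>

    #define VRING_DESC_F_AVAIL    (1 << 7)
    #define VRING_DESC_F_USED     (1 << 15)

    struct shadow_elem {
        uint16_t id;
        uint16_t used_idx;
        uint8_t used_wrap_counter;    /* captured when buffered */
    };

    static uint16_t tx_used_flags(uint8_t wrap)
    {
        return wrap ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED) : 0;
    }

    int main(void)
    {
        struct shadow_elem head = { 7, 255, 1 };
        uint8_t live_wrap = 0;    /* ring wrapped since buffering */

        /* flush uses the captured counter, not the live one */
        printf("head flags=0x%x (live wrap=%u ignored)\n",
               tx_used_flags(head.used_wrap_counter), live_wrap);
        return 0;
    }
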
From patchwork Thu Sep 19 16:36:37 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59390
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:37 +0800
Message-Id: <20190919163643.24130-11-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 10/16] vhost: split enqueue and dequeue flush functions

Vhost enqueue descriptors are updated by burst number, while vhost
dequeue descriptors are buffered. Meanwhile, in the dequeue function
only the first descriptor is buffered. Due to these differences, split
the vhost enqueue and dequeue flush functions.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 94c1b8dc7..c5c86c219 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -92,8 +92,7 @@ update_shadow_split(struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-flush_shadow_packed(struct virtio_net *dev,
-            struct vhost_virtqueue *vq)
+flush_enqueue_shadow_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
     int i;
     uint16_t used_idx = vq->last_used_idx;
@@ -158,6 +157,31 @@ flush_shadow_packed(struct virtio_net *dev,
     vhost_log_cache_sync(dev, vq);
 }
 
+static __rte_always_inline void
+flush_dequeue_shadow_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+    uint16_t head_idx = vq->dequeue_shadow_head;
+    uint16_t head_flags;
+    struct vring_used_elem_packed *used_elem = &vq->shadow_used_packed[0];
+
+    if (used_elem->used_wrap_counter)
+        head_flags = VIRTIO_TX_USED_FLAG;
+    else
+        head_flags = VIRTIO_TX_USED_WRAP_FLAG;
+
+    vq->desc_packed[head_idx].id = used_elem->id;
+
+    rte_smp_wmb();
+    vq->desc_packed[head_idx].flags = head_flags;
+
+    vhost_log_cache_used_vring(dev, vq, head_idx *
+                   sizeof(struct vring_packed_desc),
+                   sizeof(struct vring_packed_desc));
+
+    vq->shadow_used_idx = 0;
+    vhost_log_cache_sync(dev, vq);
+}
+
 static __rte_always_inline void
 update_shadow_packed(struct vhost_virtqueue *vq,
              uint16_t desc_idx, uint32_t len, uint16_t count)
@@ -200,6 +224,51 @@ flush_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     }
 }
 
+static __rte_always_inline void
+update_dequeue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+    uint16_t *ids)
+{
+    uint16_t flags = 0;
+    uint16_t i;
+
+    if (vq->used_wrap_counter)
+        flags = VIRTIO_TX_USED_FLAG;
+    else
+        flags = VIRTIO_TX_USED_WRAP_FLAG;
+
+    if (!vq->shadow_used_idx) {
+        vq->dequeue_shadow_head = vq->last_used_idx;
+
+        vq->shadow_used_packed[0].id = ids[0];
+        vq->shadow_used_packed[0].len = 0;
+        vq->shadow_used_packed[0].count = 1;
+        vq->shadow_used_packed[0].used_idx = vq->last_used_idx;
+        vq->shadow_used_packed[0].used_wrap_counter =
+            vq->used_wrap_counter;
+
+        UNROLL_PRAGMA(PRAGMA_PARAM)
+        for (i = 1; i < PACKED_DESCS_BURST; i++)
+            vq->desc_packed[vq->last_used_idx + i].id = ids[i];
+
+        UNROLL_PRAGMA(PRAGMA_PARAM)
+        for (i = 1; i < PACKED_DESCS_BURST; i++) {
+            rte_smp_wmb();
+            vq->desc_packed[vq->last_used_idx + i].flags = flags;
+        }
+
+        vq->shadow_used_idx = 1;
+
+        vq->last_used_idx += PACKED_DESCS_BURST;
+        if (vq->last_used_idx >= vq->size) {
+            vq->used_wrap_counter ^= 1;
+            vq->last_used_idx -= vq->size;
+        }
+    } else {
+        uint64_t lens[PACKED_DESCS_BURST] = {0};
+        flush_burst_packed(dev, vq, lens, ids, flags);
+    }
+}
+
 static __rte_always_inline void
 flush_enqueue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     uint64_t *lens, uint16_t *ids)
@@ -309,10 +378,29 @@ flush_enqueue_packed(struct virtio_net *dev,
 
         if (vq->enqueue_shadow_count >= PACKED_DESCS_BURST) {
             do_data_copy_enqueue(dev, vq);
-            flush_shadow_packed(dev, vq);
+            flush_enqueue_shadow_packed(dev, vq);
         }
     }
 }
+
+static __rte_unused __rte_always_inline void
+flush_dequeue_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+    if (!vq->shadow_used_idx)
+        return;
+
+    int16_t shadow_count = vq->last_used_idx - vq->dequeue_shadow_head;
+    if (shadow_count <= 0)
+        shadow_count += vq->size;
+
+    /* buffer used descs as many as possible when doing dequeue */
+    if ((uint16_t)shadow_count >= (vq->size - MAX_PKT_BURST)) {
+        do_data_copy_dequeue(vq);
+        flush_dequeue_shadow_packed(dev, vq);
+        vhost_vring_call_packed(dev, vq);
+    }
+}
+
 /* avoid write operation when necessary, to lessen cache issues */
 #define ASSIGN_UNLESS_EQUAL(var, val) do {    \
     if ((var) != (val))            \
@@ -1178,7 +1266,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     do_data_copy_enqueue(dev, vq);
 
     if (likely(vq->shadow_used_idx)) {
-        flush_shadow_packed(dev, vq);
+        flush_enqueue_shadow_packed(dev, vq);
         vhost_vring_call_packed(dev, vq);
     }
 
@@ -1810,6 +1898,7 @@ virtio_dev_tx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
                pkts[i]->pkt_len);
     }
 
+    update_dequeue_burst_packed(dev, vq, ids);
     if (virtio_net_with_host_offload(dev)) {
         UNROLL_PRAGMA(PRAGMA_PARAM)
         for (i = 0; i < PACKED_DESCS_BURST; i++) {
@@ -1911,7 +2000,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     }
 
     if (likely(vq->shadow_used_idx)) {
-        flush_shadow_packed(dev, vq);
+        flush_dequeue_shadow_packed(dev, vq);
         vhost_vring_call_packed(dev, vq);
     }
 }
@@ -1990,7 +2079,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     if (unlikely(i < count))
         vq->shadow_used_idx = i;
     if (likely(vq->shadow_used_idx)) {
-        flush_shadow_packed(dev, vq);
+        flush_dequeue_shadow_packed(dev, vq);
         vhost_vring_call_packed(dev, vq);
     }
 }
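A standalone model of the buffered-count test in flush_dequeue_packed above:
the distance from the shadow head to last_used_idx is taken modulo the ring
size, and a flush happens once nearly the whole ring (size - MAX_PKT_BURST)
is buffered. Constants here are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 256
    #define MAX_PKT_BURST 32

    static int should_flush(uint16_t dequeue_shadow_head,
            uint16_t last_used_idx)
    {
        int16_t shadow_count = last_used_idx - dequeue_shadow_head;

        if (shadow_count <= 0)    /* last_used_idx wrapped past head */
            shadow_count += RING_SIZE;

        return (uint16_t)shadow_count >= (RING_SIZE - MAX_PKT_BURST);
    }

    int main(void)
    {
        printf("%d\n", should_flush(10, 20));  /* 0: only 10 buffered */
        printf("%d\n", should_flush(10, 4));   /* 1: 250 >= 224 */
        return 0;
    }
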
From patchwork Thu Sep 19 16:36:38 2019
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 59391
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>
Date: Fri, 20 Sep 2019 00:36:38 +0800
Message-Id: <20190919163643.24130-12-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 11/16] vhost: optimize enqueue function of packed ring

Optimize the vhost enqueue datapath with separate functions. Packets
that can be filled into one descriptor are handled by the burst path,
and others are handled one by one as before. Pre-fetch descriptors in
the next two cache lines, as hardware will load two cache lines of
data automatically.

Signed-off-by: Marvin Liu <yong.liu@intel.com>

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index c5c86c219..2418b4e45 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -758,64 +758,6 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return 0;
 }
 
-/*
- * Returns -1 on fail, 0 on success
- */
-static inline int
-reserve_avail_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
-                uint32_t size, struct buf_vector *buf_vec,
-                uint16_t *nr_vec, uint16_t *num_buffers,
-                uint16_t *nr_descs)
-{
-    uint16_t avail_idx;
-    uint16_t vec_idx = 0;
-    uint16_t max_tries, tries = 0;
-
-    uint16_t buf_id = 0;
-    uint32_t len = 0;
-    uint16_t desc_count;
-
-    *num_buffers = 0;
-    avail_idx = vq->last_avail_idx;
-
-    if (rxvq_is_mergeable(dev))
-        max_tries = vq->size - 1;
-    else
-        max_tries = 1;
-
-    while (size > 0) {
-        /*
-         * if we tried all available ring items, and still
-         * can't get enough buf, it means something abnormal
-         * happened.
-         */
-        if (unlikely(++tries > max_tries))
-            return -1;
-
-        if (unlikely(fill_vec_buf_packed(dev, vq,
-                        avail_idx, &desc_count,
-                        buf_vec, &vec_idx,
-                        &buf_id, &len,
-                        VHOST_ACCESS_RW) < 0))
-            return -1;
-
-        len = RTE_MIN(len, size);
-        update_shadow_packed(vq, buf_id, len, desc_count);
-        size -= len;
-
-        avail_idx += desc_count;
-        if (avail_idx >= vq->size)
-            avail_idx -= vq->size;
-
-        *nr_descs += desc_count;
-        *num_buffers += 1;
-    }
-
-    *nr_vec = vec_idx;
-
-    return 0;
-}
-
 static __rte_noinline void
 copy_vnet_hdr_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
         struct buf_vector *buf_vec,
@@ -1108,7 +1050,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return pkt_idx;
 }
 
-static __rte_unused __rte_always_inline int
+static __rte_always_inline int
 virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mbuf **pkts)
 {
@@ -1193,7 +1135,7 @@ virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     return 0;
 }
 
-static __rte_unused int16_t
+static __rte_always_inline int16_t
 virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mbuf *pkt)
 {
@@ -1227,46 +1169,41 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
     struct rte_mbuf **pkts, uint32_t count)
 {
     uint32_t pkt_idx = 0;
-    uint16_t num_buffers;
-    struct buf_vector buf_vec[BUF_VECTOR_MAX];
-
-    for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
-        uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
-        uint16_t nr_vec = 0;
-        uint16_t nr_descs = 0;
+    uint32_t remained = count;
+    uint16_t fetch_idx;
+    int ret;
+    struct vring_packed_desc *descs = vq->desc_packed;
 
-        if (unlikely(reserve_avail_buf_packed(dev, vq,
-                        pkt_len, buf_vec, &nr_vec,
-                        &num_buffers, &nr_descs) < 0)) {
-            VHOST_LOG_DEBUG(VHOST_DATA,
-                "(%d) failed to get enough desc from vring\n",
-                dev->vid);
-            vq->shadow_used_idx -= num_buffers;
-            break;
+    do {
+        if ((vq->last_avail_idx & 0x7) == 0) {
+            fetch_idx = vq->last_avail_idx + 8;
+            rte_prefetch0((void *)(uintptr_t)&descs[fetch_idx]);
         }
 
-        VHOST_LOG_DEBUG(VHOST_DATA, "(%d) current index %d | end index %d\n",
-            dev->vid, vq->last_avail_idx,
-            vq->last_avail_idx + num_buffers);
+        if (remained >= PACKED_DESCS_BURST) {
+            ret = virtio_dev_rx_burst_packed(dev, vq,
+                             &pkts[pkt_idx]);
 
-        if (copy_mbuf_to_desc(dev, vq, pkts[pkt_idx],
-                        buf_vec, nr_vec,
-                        num_buffers) < 0) {
-            vq->shadow_used_idx -= num_buffers;
-            break;
+            if (!ret) {
+                pkt_idx += PACKED_DESCS_BURST;
+                remained -= PACKED_DESCS_BURST;
+                continue;
+            }
         }
 
-        vq->last_avail_idx += nr_descs;
-        if (vq->last_avail_idx >= vq->size) {
-            vq->last_avail_idx -= vq->size;
-            vq->avail_wrap_counter ^= 1;
-        }
-    }
+        if (virtio_dev_rx_single_packed(dev, vq, pkts[pkt_idx]))
+            break;
 
-    do_data_copy_enqueue(dev, vq);
+        pkt_idx++;
+        remained--;
+
+    } while (pkt_idx < count);
+
+    if (pkt_idx) {
+        if (vq->shadow_used_idx) {
+            do_data_copy_enqueue(dev, vq);
+            flush_enqueue_shadow_packed(dev, vq);
+        }
 
-    if (likely(vq->shadow_used_idx)) {
-        flush_enqueue_shadow_packed(dev, vq);
         vhost_vring_call_packed(dev, vq);
     }
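A skeleton of the dispatch loop introduced above: try a burst of four first,
fall back to the single-packet path, and stop on failure. burst_rx() and
single_rx() are hypothetical stand-ins for the vhost functions.

    #include <stdint.h>
    #include <stdio.h>

    #define DESCS_BURST 4

    static int burst_rx(uint32_t idx) { return idx % 8 ? -1 : 0; }
    static int single_rx(uint32_t idx) { (void)idx; return 0; }

    int main(void)
    {
        uint32_t count = 10, pkt_idx = 0, remained = count;

        do {
            if (remained >= DESCS_BURST && !burst_rx(pkt_idx)) {
                pkt_idx += DESCS_BURST;    /* burst path succeeded */
                remained -= DESCS_BURST;
                continue;
            }
            if (single_rx(pkt_idx))    /* fall back one by one */
                break;
            pkt_idx++;
            remained--;
        } while (pkt_idx < count);

        printf("enqueued %u of %u\n", pkt_idx, count);
        return 0;
    }
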
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h index 000648dd4..9c42c7db0 100644 --- a/lib/librte_vhost/vhost.h +++ b/lib/librte_vhost/vhost.h @@ -39,6 +39,9 @@ #define VHOST_LOG_CACHE_NR 32 +#define VIRTIO_RX_USED_FLAG (0ULL | VRING_DESC_F_AVAIL | VRING_DESC_F_USED \ + | VRING_DESC_F_WRITE) +#define VIRTIO_RX_USED_WRAP_FLAG (VRING_DESC_F_WRITE) #define PACKED_DESCS_BURST (RTE_CACHE_LINE_SIZE / \ sizeof(struct vring_packed_desc)) diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index e2787b72e..8e4036204 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -169,6 +169,51 @@ update_shadow_packed(struct vhost_virtqueue *vq, vq->shadow_used_packed[i].count = count; } +static __rte_always_inline void +flush_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, + uint64_t *lens, uint16_t *ids, uint16_t flags) +{ + uint16_t i; + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) { + vq->desc_packed[vq->last_used_idx + i].id = ids[i]; + vq->desc_packed[vq->last_used_idx + i].len = lens[i]; + } + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) { + rte_smp_wmb(); + vq->desc_packed[vq->last_used_idx + i].flags = flags; + } + + vhost_log_cache_used_vring(dev, vq, vq->last_used_idx * + sizeof(struct vring_packed_desc), + sizeof(struct vring_packed_desc) * + PACKED_DESCS_BURST); + vhost_log_cache_sync(dev, vq); + + vq->last_used_idx += PACKED_DESCS_BURST; + if (vq->last_used_idx >= vq->size) { + vq->used_wrap_counter ^= 1; + vq->last_used_idx -= vq->size; + } +} + +static __rte_always_inline void +flush_enqueue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, + uint64_t *lens, uint16_t *ids) +{ + uint16_t flags = 0; + + if (vq->used_wrap_counter) + flags = VIRTIO_RX_USED_FLAG; + else + flags = VIRTIO_RX_USED_WRAP_FLAG; + + flush_burst_packed(dev, vq, lens, ids, flags); +} + static __rte_always_inline void update_enqueue_shadow_packed(struct vhost_virtqueue *vq, uint16_t desc_idx, uint32_t len, uint16_t count) @@ -950,6 +995,7 @@ virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_DESCS_BURST]; uint32_t buf_offset = dev->vhost_hlen; uint64_t lens[PACKED_DESCS_BURST]; + uint16_t ids[PACKED_DESCS_BURST]; uint16_t i; @@ -1013,6 +1059,12 @@ virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, pkts[i]->pkt_len); } + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) + ids[i] = descs[avail_idx + i].id; + + flush_enqueue_burst_packed(dev, vq, lens, ids); + return 0; } From patchwork Thu Sep 19 16:36:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59389 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7B6AD1EA9E; Thu, 19 Sep 2019 10:56:53 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id AFD2B1D416 for ; Thu, 19 Sep 2019 10:56:38 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:38 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; 
d="scan'208";a="271146137" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:36 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:36 +0800 Message-Id: <20190919163643.24130-10-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 09/16] vhost: buffer vhost dequeue shadow ring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Buffer used ring updates as many as possible in vhost dequeue function for coordinating with virtio driver. For supporting buffer, shadow used ring element should contain descriptor index and its wrap counter. First shadowed ring index is recorded for calculating buffered number. Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h index 9c42c7db0..14e87f670 100644 --- a/lib/librte_vhost/vhost.h +++ b/lib/librte_vhost/vhost.h @@ -42,6 +42,8 @@ #define VIRTIO_RX_USED_FLAG (0ULL | VRING_DESC_F_AVAIL | VRING_DESC_F_USED \ | VRING_DESC_F_WRITE) #define VIRTIO_RX_USED_WRAP_FLAG (VRING_DESC_F_WRITE) +#define VIRTIO_TX_USED_FLAG (0ULL | VRING_DESC_F_AVAIL | VRING_DESC_F_USED) +#define VIRTIO_TX_USED_WRAP_FLAG (0x0) #define PACKED_DESCS_BURST (RTE_CACHE_LINE_SIZE / \ sizeof(struct vring_packed_desc)) @@ -110,9 +112,11 @@ struct log_cache_entry { }; struct vring_used_elem_packed { + uint16_t used_idx; uint16_t id; uint32_t len; uint32_t count; + uint16_t used_wrap_counter; }; /** @@ -167,6 +171,7 @@ struct vhost_virtqueue { }; uint16_t shadow_used_idx; uint16_t enqueue_shadow_count; + uint16_t dequeue_shadow_head; struct vhost_vring_addr ring_addrs; struct batch_copy_elem *batch_copy_elems; diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index 8e4036204..94c1b8dc7 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -232,6 +232,43 @@ update_enqueue_shadow_packed(struct vhost_virtqueue *vq, uint16_t desc_idx, vq->enqueue_shadow_count += count; } +static __rte_always_inline void +update_dequeue_shadow_packed(struct vhost_virtqueue *vq, uint16_t buf_id, + uint16_t count) +{ + if (!vq->shadow_used_idx) { + vq->dequeue_shadow_head = vq->last_used_idx; + + vq->shadow_used_packed[0].id = buf_id; + vq->shadow_used_packed[0].len = 0; + vq->shadow_used_packed[0].count = count; + vq->shadow_used_packed[0].used_idx = vq->last_used_idx; + vq->shadow_used_packed[0].used_wrap_counter = + vq->used_wrap_counter; + + vq->shadow_used_idx = 1; + } else { + vq->desc_packed[vq->last_used_idx].id = buf_id; + vq->desc_packed[vq->last_used_idx].len = 0; + + if (vq->used_wrap_counter) + vq->desc_packed[vq->last_used_idx].flags = + VIRTIO_TX_USED_FLAG; + else + vq->desc_packed[vq->last_used_idx].flags = + VIRTIO_TX_USED_WRAP_FLAG; + + } + + vq->last_used_idx += count; + + if (vq->last_used_idx >= vq->size) { + vq->used_wrap_counter ^= 1; + vq->last_used_idx -= vq->size; + } +} + + static inline void do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq) { @@ -1835,6 +1872,8 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, 
&desc_count)) return -1; + update_dequeue_shadow_packed(vq, buf_id, desc_count); + vq->last_avail_idx += desc_count; if (vq->last_avail_idx >= vq->size) { vq->last_avail_idx -= vq->size; From patchwork Thu Sep 19 16:36:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59390 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7ABD91EAC6; Thu, 19 Sep 2019 10:56:56 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id 32C981D50F for ; Thu, 19 Sep 2019 10:56:40 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; d="scan'208";a="271146141" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:38 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:37 +0800 Message-Id: <20190919163643.24130-11-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 10/16] vhost: split enqueue and dequeue flush functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Vhost enqueue descriptors are updated by burst number, while vhost dequeue descriptors are buffered. Meanwhile in dequeue function only first descriptor is buffered. Due to these differences, split vhost enqueue and dequeue flush functions. 
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index 94c1b8dc7..c5c86c219 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -92,8 +92,7 @@ update_shadow_split(struct vhost_virtqueue *vq, } static __rte_always_inline void -flush_shadow_packed(struct virtio_net *dev, - struct vhost_virtqueue *vq) +flush_enqueue_shadow_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) { int i; uint16_t used_idx = vq->last_used_idx; @@ -158,6 +157,31 @@ flush_shadow_packed(struct virtio_net *dev, vhost_log_cache_sync(dev, vq); } +static __rte_always_inline void +flush_dequeue_shadow_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) +{ + uint16_t head_idx = vq->dequeue_shadow_head; + uint16_t head_flags; + struct vring_used_elem_packed *used_elem = &vq->shadow_used_packed[0]; + + if (used_elem->used_wrap_counter) + head_flags = VIRTIO_TX_USED_FLAG; + else + head_flags = VIRTIO_TX_USED_WRAP_FLAG; + + vq->desc_packed[head_idx].id = used_elem->id; + + rte_smp_wmb(); + vq->desc_packed[head_idx].flags = head_flags; + + vhost_log_cache_used_vring(dev, vq, head_idx * + sizeof(struct vring_packed_desc), + sizeof(struct vring_packed_desc)); + + vq->shadow_used_idx = 0; + vhost_log_cache_sync(dev, vq); +} + static __rte_always_inline void update_shadow_packed(struct vhost_virtqueue *vq, uint16_t desc_idx, uint32_t len, uint16_t count) @@ -200,6 +224,51 @@ flush_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, } } +static __rte_always_inline void +update_dequeue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, + uint16_t *ids) +{ + uint16_t flags = 0; + uint16_t i; + + if (vq->used_wrap_counter) + flags = VIRTIO_TX_USED_FLAG; + else + flags = VIRTIO_TX_USED_WRAP_FLAG; + + if (!vq->shadow_used_idx) { + vq->dequeue_shadow_head = vq->last_used_idx; + + vq->shadow_used_packed[0].id = ids[0]; + vq->shadow_used_packed[0].len = 0; + vq->shadow_used_packed[0].count = 1; + vq->shadow_used_packed[0].used_idx = vq->last_used_idx; + vq->shadow_used_packed[0].used_wrap_counter = + vq->used_wrap_counter; + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 1; i < PACKED_DESCS_BURST; i++) + vq->desc_packed[vq->last_used_idx + i].id = ids[i]; + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 1; i < PACKED_DESCS_BURST; i++) { + rte_smp_wmb(); + vq->desc_packed[vq->last_used_idx + i].flags = flags; + } + + vq->shadow_used_idx = 1; + + vq->last_used_idx += PACKED_DESCS_BURST; + if (vq->last_used_idx >= vq->size) { + vq->used_wrap_counter ^= 1; + vq->last_used_idx -= vq->size; + } + } else { + uint64_t lens[PACKED_DESCS_BURST] = {0}; + flush_burst_packed(dev, vq, lens, ids, flags); + } +} + static __rte_always_inline void flush_enqueue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t *lens, uint16_t *ids) @@ -309,10 +378,29 @@ flush_enqueue_packed(struct virtio_net *dev, if (vq->enqueue_shadow_count >= PACKED_DESCS_BURST) { do_data_copy_enqueue(dev, vq); - flush_shadow_packed(dev, vq); + flush_enqueue_shadow_packed(dev, vq); } } } + +static __rte_unused __rte_always_inline void +flush_dequeue_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) +{ + if (!vq->shadow_used_idx) + return; + + int16_t shadow_count = vq->last_used_idx - vq->dequeue_shadow_head; + if (shadow_count <= 0) + shadow_count += vq->size; + + /* buffer used descs as many as possible when doing dequeue */ + if ((uint16_t)shadow_count >= (vq->size - MAX_PKT_BURST)) { + do_data_copy_dequeue(vq); + 
flush_dequeue_shadow_packed(dev, vq); + vhost_vring_call_packed(dev, vq); + } +} + /* avoid write operation when necessary, to lessen cache issues */ #define ASSIGN_UNLESS_EQUAL(var, val) do { \ if ((var) != (val)) \ @@ -1178,7 +1266,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, do_data_copy_enqueue(dev, vq); if (likely(vq->shadow_used_idx)) { - flush_shadow_packed(dev, vq); + flush_enqueue_shadow_packed(dev, vq); vhost_vring_call_packed(dev, vq); } @@ -1810,6 +1898,7 @@ virtio_dev_tx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, pkts[i]->pkt_len); } + update_dequeue_burst_packed(dev, vq, ids); if (virtio_net_with_host_offload(dev)) { UNROLL_PRAGMA(PRAGMA_PARAM) for (i = 0; i < PACKED_DESCS_BURST; i++) { @@ -1911,7 +2000,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, } if (likely(vq->shadow_used_idx)) { - flush_shadow_packed(dev, vq); + flush_dequeue_shadow_packed(dev, vq); vhost_vring_call_packed(dev, vq); } } @@ -1990,7 +2079,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(i < count)) vq->shadow_used_idx = i; if (likely(vq->shadow_used_idx)) { - flush_shadow_packed(dev, vq); + flush_dequeue_shadow_packed(dev, vq); vhost_vring_call_packed(dev, vq); } } From patchwork Thu Sep 19 16:36:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59391 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 30F241EACE; Thu, 19 Sep 2019 10:56:59 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id 9EADF1E8BF for ; Thu, 19 Sep 2019 10:56:41 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:41 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; d="scan'208";a="271146147" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:39 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:38 +0800 Message-Id: <20190919163643.24130-12-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 11/16] vhost: optimize enqueue function of packed ring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Optimize vhost device Tx datapath by separate functions. Packets can be filled into one descriptor will be handled by burst and others will be handled one by one as before. Pre-fetch descriptors in next two cache lines as hardware will load two cache line data automatically. 
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index c5c86c219..2418b4e45 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -758,64 +758,6 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, return 0; } -/* - * Returns -1 on fail, 0 on success - */ -static inline int -reserve_avail_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, - uint32_t size, struct buf_vector *buf_vec, - uint16_t *nr_vec, uint16_t *num_buffers, - uint16_t *nr_descs) -{ - uint16_t avail_idx; - uint16_t vec_idx = 0; - uint16_t max_tries, tries = 0; - - uint16_t buf_id = 0; - uint32_t len = 0; - uint16_t desc_count; - - *num_buffers = 0; - avail_idx = vq->last_avail_idx; - - if (rxvq_is_mergeable(dev)) - max_tries = vq->size - 1; - else - max_tries = 1; - - while (size > 0) { - /* - * if we tried all available ring items, and still - * can't get enough buf, it means something abnormal - * happened. - */ - if (unlikely(++tries > max_tries)) - return -1; - - if (unlikely(fill_vec_buf_packed(dev, vq, - avail_idx, &desc_count, - buf_vec, &vec_idx, - &buf_id, &len, - VHOST_ACCESS_RW) < 0)) - return -1; - - len = RTE_MIN(len, size); - update_shadow_packed(vq, buf_id, len, desc_count); - size -= len; - - avail_idx += desc_count; - if (avail_idx >= vq->size) - avail_idx -= vq->size; - - *nr_descs += desc_count; - *num_buffers += 1; - } - - *nr_vec = vec_idx; - - return 0; -} - static __rte_noinline void copy_vnet_hdr_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, struct buf_vector *buf_vec, @@ -1108,7 +1050,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, return pkt_idx; } -static __rte_unused __rte_always_inline int +static __rte_always_inline int virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts) { @@ -1193,7 +1135,7 @@ virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, return 0; } -static __rte_unused int16_t +static __rte_always_inline int16_t virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf *pkt) { @@ -1227,46 +1169,41 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mbuf **pkts, uint32_t count) { uint32_t pkt_idx = 0; - uint16_t num_buffers; - struct buf_vector buf_vec[BUF_VECTOR_MAX]; - - for (pkt_idx = 0; pkt_idx < count; pkt_idx++) { - uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen; - uint16_t nr_vec = 0; - uint16_t nr_descs = 0; + uint32_t remained = count; + uint16_t fetch_idx; + int ret; + struct vring_packed_desc *descs = vq->desc_packed; - if (unlikely(reserve_avail_buf_packed(dev, vq, - pkt_len, buf_vec, &nr_vec, - &num_buffers, &nr_descs) < 0)) { - VHOST_LOG_DEBUG(VHOST_DATA, - "(%d) failed to get enough desc from vring\n", - dev->vid); - vq->shadow_used_idx -= num_buffers; - break; + do { + if ((vq->last_avail_idx & 0x7) == 0) { + fetch_idx = vq->last_avail_idx + 8; + rte_prefetch0((void *)(uintptr_t)&descs[fetch_idx]); } - VHOST_LOG_DEBUG(VHOST_DATA, "(%d) current index %d | end index %d\n", - dev->vid, vq->last_avail_idx, - vq->last_avail_idx + num_buffers); + if (remained >= PACKED_DESCS_BURST) { + ret = virtio_dev_rx_burst_packed(dev, vq, pkts); - if (copy_mbuf_to_desc(dev, vq, pkts[pkt_idx], - buf_vec, nr_vec, - num_buffers) < 0) { - vq->shadow_used_idx -= num_buffers; - break; + if (!ret) { + pkt_idx += PACKED_DESCS_BURST; + remained -= PACKED_DESCS_BURST; + continue; + 
} } - vq->last_avail_idx += nr_descs; - if (vq->last_avail_idx >= vq->size) { - vq->last_avail_idx -= vq->size; - vq->avail_wrap_counter ^= 1; - } - } + if (virtio_dev_rx_single_packed(dev, vq, pkts[pkt_idx])) + break; - do_data_copy_enqueue(dev, vq); + pkt_idx++; + remained--; + + } while (pkt_idx < count); + + if (pkt_idx) { + if (vq->shadow_used_idx) { + do_data_copy_enqueue(dev, vq); + flush_enqueue_shadow_packed(dev, vq); + } - if (likely(vq->shadow_used_idx)) { - flush_enqueue_shadow_packed(dev, vq); vhost_vring_call_packed(dev, vq); } From patchwork Thu Sep 19 16:36:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59392 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 39B481EADD; Thu, 19 Sep 2019 10:57:02 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id 0F3601E8BF for ; Thu, 19 Sep 2019 10:56:42 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; d="scan'208";a="271146153" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:41 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:39 +0800 Message-Id: <20190919163643.24130-13-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 12/16] vhost: add burst and single zero dequeue functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Optimize vhost zero copy dequeue path like normal dequeue path. 
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index 2418b4e45..a8df74f87 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -1909,6 +1909,144 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, return 0; } +static __rte_unused __rte_always_inline int +virtio_dev_tx_burst_packed_zmbuf(struct virtio_net *dev, + struct vhost_virtqueue *vq, + struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts) +{ + struct zcopy_mbuf *zmbufs[PACKED_DESCS_BURST]; + uintptr_t desc_addrs[PACKED_DESCS_BURST]; + uint16_t ids[PACKED_DESCS_BURST]; + int ret; + uint16_t i; + + uint16_t avail_idx = vq->last_avail_idx; + + ret = vhost_dequeue_burst_packed(dev, vq, mbuf_pool, pkts, avail_idx, + desc_addrs, ids); + + if (ret) + return ret; + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) + zmbufs[i] = get_zmbuf(vq); + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) { + if (!zmbufs[i]) + goto free_pkt; + } + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) { + zmbufs[i]->mbuf = pkts[i]; + zmbufs[i]->desc_idx = avail_idx + i; + zmbufs[i]->desc_count = 1; + } + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) + rte_mbuf_refcnt_update(pkts[i], 1); + + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) + TAILQ_INSERT_TAIL(&vq->zmbuf_list, zmbufs[i], next); + + vq->nr_zmbuf += PACKED_DESCS_BURST; + vq->last_avail_idx += PACKED_DESCS_BURST; + if (vq->last_avail_idx >= vq->size) { + vq->last_avail_idx -= vq->size; + vq->avail_wrap_counter ^= 1; + } + + return 0; + +free_pkt: + UNROLL_PRAGMA(PRAGMA_PARAM) + for (i = 0; i < PACKED_DESCS_BURST; i++) + rte_pktmbuf_free(pkts[i]); + + return -1; +} + +static __rte_unused int +virtio_dev_tx_single_packed_zmbuf(struct virtio_net *dev, + struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, + struct rte_mbuf **pkts) +{ + uint16_t buf_id, desc_count; + struct zcopy_mbuf *zmbuf; + + if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id, + &desc_count)) + return -1; + + zmbuf = get_zmbuf(vq); + if (!zmbuf) { + rte_pktmbuf_free(*pkts); + return -1; + } + zmbuf->mbuf = *pkts; + zmbuf->desc_idx = vq->last_avail_idx; + zmbuf->desc_count = desc_count; + + rte_mbuf_refcnt_update(*pkts, 1); + + vq->nr_zmbuf += 1; + TAILQ_INSERT_TAIL(&vq->zmbuf_list, zmbuf, next); + + vq->last_avail_idx += desc_count; + if (vq->last_avail_idx >= vq->size) { + vq->last_avail_idx -= vq->size; + vq->avail_wrap_counter ^= 1; + } + + return 0; +} + +static __rte_unused void +free_zmbuf(struct vhost_virtqueue *vq) +{ + struct zcopy_mbuf *next = NULL; + struct zcopy_mbuf *zmbuf; + + for (zmbuf = TAILQ_FIRST(&vq->zmbuf_list); + zmbuf != NULL; zmbuf = next) { + next = TAILQ_NEXT(zmbuf, next); + + uint16_t last_used_idx = vq->last_used_idx; + + if (mbuf_is_consumed(zmbuf->mbuf)) { + uint16_t flags = 0; + + if (vq->used_wrap_counter) + flags = VIRTIO_TX_USED_FLAG; + else + flags = VIRTIO_TX_USED_WRAP_FLAG; + + vq->desc_packed[last_used_idx].id = zmbuf->desc_idx; + vq->desc_packed[last_used_idx].len = 0; + + rte_smp_wmb(); + vq->desc_packed[last_used_idx].flags = flags; + + vq->last_used_idx += zmbuf->desc_count; + if (vq->last_used_idx >= vq->size) { + vq->used_wrap_counter ^= 1; + vq->last_used_idx -= vq->size; + } + + TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next); + restore_mbuf(zmbuf->mbuf); + rte_pktmbuf_free(zmbuf->mbuf); + put_zmbuf(zmbuf); + vq->nr_zmbuf -= 1; + } + } +} 
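/*
 * Aside on the burst path above (explanatory note, not part of the diff):
 * zmbufs[i]->desc_count can be set to 1 unconditionally because
 * vhost_dequeue_burst_packed() only succeeds for a full burst of single,
 * non-chained descriptors; chained buffers fall back to
 * virtio_dev_tx_single_packed_zmbuf(), which records the real
 * descriptor count.
 */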
+ static __rte_noinline uint16_t virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) From patchwork Thu Sep 19 16:36:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59393 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 092A31EB62; Thu, 19 Sep 2019 10:57:06 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id 87EFF1E8FE for ; Thu, 19 Sep 2019 10:56:44 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; d="scan'208";a="271146156" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:42 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:40 +0800 Message-Id: <20190919163643.24130-14-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 13/16] vhost: optimize dequeue function of packed ring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Optimize the vhost device Tx (dequeue) datapath by splitting it into separate functions: non-chained and directly mapped descriptors are handled by the burst path, while all others are handled one by one as before.
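One small but measurable piece of the rework is descriptor prefetching: each time last_avail_idx crosses an 8-entry boundary, the next block of packed descriptors is pulled toward the cache before it is needed. A sketch of the pattern (prefetch_next_descs() is an illustrative name; rte_prefetch0() is only a hint, so the unwrapped index is harmless):

#include <rte_prefetch.h>

static inline void
prefetch_next_descs(struct vhost_virtqueue *vq)
{
	/* Once per 8 descriptors, prefetch the following block. */
	if ((vq->last_avail_idx & 0x7) == 0)
		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx + 8]);
}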
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index a8df74f87..066514e43 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -182,17 +182,6 @@ flush_dequeue_shadow_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) vhost_log_cache_sync(dev, vq); } -static __rte_always_inline void -update_shadow_packed(struct vhost_virtqueue *vq, - uint16_t desc_idx, uint32_t len, uint16_t count) -{ - uint16_t i = vq->shadow_used_idx++; - - vq->shadow_used_packed[i].id = desc_idx; - vq->shadow_used_packed[i].len = len; - vq->shadow_used_packed[i].count = count; -} - static __rte_always_inline void flush_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, uint64_t *lens, uint16_t *ids, uint16_t flags) @@ -383,7 +372,7 @@ flush_enqueue_packed(struct virtio_net *dev, } } -static __rte_unused __rte_always_inline void +static __rte_always_inline void flush_dequeue_packed(struct virtio_net *dev, struct vhost_virtqueue *vq) { if (!vq->shadow_used_idx) @@ -1809,7 +1798,7 @@ vhost_dequeue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, return -1; } -static __rte_unused int +static __rte_always_inline int virtio_dev_tx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts) { @@ -1887,7 +1876,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, return 0; } -static __rte_unused int +static __rte_always_inline int virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts) { @@ -1909,7 +1898,7 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, return 0; } -static __rte_unused __rte_always_inline int +static __rte_always_inline int virtio_dev_tx_burst_packed_zmbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, @@ -1971,7 +1960,7 @@ virtio_dev_tx_burst_packed_zmbuf(struct virtio_net *dev, return -1; } -static __rte_unused int +static __rte_always_inline int virtio_dev_tx_single_packed_zmbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts) @@ -2006,7 +1995,7 @@ virtio_dev_tx_single_packed_zmbuf(struct virtio_net *dev, return 0; } -static __rte_unused void +static __rte_always_inline void free_zmbuf(struct vhost_virtqueue *vq) { struct zcopy_mbuf *next = NULL; @@ -2048,120 +2037,97 @@ free_zmbuf(struct vhost_virtqueue *vq) } static __rte_noinline uint16_t -virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, - struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) +virtio_dev_tx_packed_zmbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t count) { - uint16_t i; - - if (unlikely(dev->dequeue_zero_copy)) { - struct zcopy_mbuf *zmbuf, *next; + uint32_t pkt_idx = 0; + uint32_t remained = count; + int ret; - for (zmbuf = TAILQ_FIRST(&vq->zmbuf_list); - zmbuf != NULL; zmbuf = next) { - next = TAILQ_NEXT(zmbuf, next); + free_zmbuf(vq); - if (mbuf_is_consumed(zmbuf->mbuf)) { - update_shadow_packed(vq, - zmbuf->desc_idx, - 0, - zmbuf->desc_count); + do { + if (remained >= PACKED_DESCS_BURST) { + ret = virtio_dev_tx_burst_packed_zmbuf(dev, vq, + mbuf_pool, + &pkts[pkt_idx]); - TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next); - restore_mbuf(zmbuf->mbuf); - rte_pktmbuf_free(zmbuf->mbuf); - put_zmbuf(zmbuf); - vq->nr_zmbuf 
-= 1; + if (!ret) { + pkt_idx += PACKED_DESCS_BURST; + remained -= PACKED_DESCS_BURST; + continue; } } - if (likely(vq->shadow_used_idx)) { - flush_dequeue_shadow_packed(dev, vq); - vhost_vring_call_packed(dev, vq); - } - } - - VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__); - - count = RTE_MIN(count, MAX_PKT_BURST); - VHOST_LOG_DEBUG(VHOST_DATA, "(%d) about to dequeue %u buffers\n", - dev->vid, count); + if (virtio_dev_tx_single_packed_zmbuf(dev, vq, mbuf_pool, + &pkts[pkt_idx])) + break; - for (i = 0; i < count; i++) { - struct buf_vector buf_vec[BUF_VECTOR_MAX]; - uint16_t buf_id; - uint32_t dummy_len; - uint16_t desc_count, nr_vec = 0; - int err; + pkt_idx++; + remained--; + } while (remained); - if (unlikely(fill_vec_buf_packed(dev, vq, - vq->last_avail_idx, &desc_count, - buf_vec, &nr_vec, - &buf_id, &dummy_len, - VHOST_ACCESS_RO) < 0)) - break; + if (pkt_idx) + vhost_vring_call_packed(dev, vq); - if (likely(dev->dequeue_zero_copy == 0)) - update_shadow_packed(vq, buf_id, 0, - desc_count); + return pkt_idx; +} - pkts[i] = rte_pktmbuf_alloc(mbuf_pool); - if (unlikely(pkts[i] == NULL)) { - RTE_LOG(ERR, VHOST_DATA, - "Failed to allocate memory for mbuf.\n"); - break; - } +static __rte_noinline uint16_t +virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t count) +{ + uint32_t pkt_idx = 0; + uint32_t remained = count; + uint16_t fetch_idx; + int ret; + struct vring_packed_desc *descs = vq->desc_packed; - err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i], - mbuf_pool); - if (unlikely(err)) { - rte_pktmbuf_free(pkts[i]); - break; + do { + if ((vq->last_avail_idx & 0x7) == 0) { + fetch_idx = vq->last_avail_idx + 8; + rte_prefetch0((void *)(uintptr_t)&descs[fetch_idx]); } - if (unlikely(dev->dequeue_zero_copy)) { - struct zcopy_mbuf *zmbuf; + if (remained >= PACKED_DESCS_BURST) { + ret = virtio_dev_tx_burst_packed(dev, vq, mbuf_pool, + &pkts[pkt_idx]); - zmbuf = get_zmbuf(vq); - if (!zmbuf) { - rte_pktmbuf_free(pkts[i]); - break; + if (!ret) { + flush_dequeue_packed(dev, vq); + pkt_idx += PACKED_DESCS_BURST; + remained -= PACKED_DESCS_BURST; + continue; } - zmbuf->mbuf = pkts[i]; - zmbuf->desc_idx = buf_id; - zmbuf->desc_count = desc_count; + } - /* - * Pin lock the mbuf; we will check later to see - * whether the mbuf is freed (when we are the last - * user) or not. If that's the case, we then could - * update the used ring safely. - */ - rte_mbuf_refcnt_update(pkts[i], 1); + /* + * If the remaining descriptors can't be bundled into one burst, + * just skip to the next round.
+ */ + if (((vq->last_avail_idx & PACKED_BURST_MASK) + remained) < + PACKED_DESCS_BURST) + break; - vq->nr_zmbuf += 1; - TAILQ_INSERT_TAIL(&vq->zmbuf_list, zmbuf, next); + if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool, + &pkts[pkt_idx])) + break; - vq->last_avail_idx += desc_count; - if (vq->last_avail_idx >= vq->size) { - vq->last_avail_idx -= vq->size; - vq->avail_wrap_counter ^= 1; - } - } + pkt_idx++; + remained--; + flush_dequeue_packed(dev, vq); - if (likely(dev->dequeue_zero_copy == 0)) { - do_data_copy_dequeue(vq); - if (unlikely(i < count)) - vq->shadow_used_idx = i; - if (likely(vq->shadow_used_idx)) { - flush_dequeue_shadow_packed(dev, vq); - vhost_vring_call_packed(dev, vq); - } + } while (remained); + + if (pkt_idx) { + if (vq->shadow_used_idx) + do_data_copy_dequeue(vq); } - return i; + return pkt_idx; } + uint16_t rte_vhost_dequeue_burst(int vid, uint16_t queue_id, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count) @@ -2235,9 +2201,14 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, count -= 1; } - if (vq_is_packed(dev)) - count = virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count); - else + if (vq_is_packed(dev)) { + if (unlikely(dev->dequeue_zero_copy)) + count = virtio_dev_tx_packed_zmbuf(dev, vq, mbuf_pool, + pkts, count); + else + count = virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, + count); + } else count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count); out: From patchwork Thu Sep 19 16:36:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59394 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AC2651EB82; Thu, 19 Sep 2019 10:57:08 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id CC6081E8FE for ; Thu, 19 Sep 2019 10:56:45 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:45 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; d="scan'208";a="271146161" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:44 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:41 +0800 Message-Id: <20190919163643.24130-15-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 14/16] vhost: cache address translation result X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Cache the address translation result and reuse it for the next translation. Since only a limited number of memory regions are supported, consecutive buffers are most likely located in the same region during data transmission.
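The diff below adds a single-entry cache in front of the linear region scan; one entry is enough because consecutive data buffers typically come from the same region. Sketched as a standalone helper for clarity (gpa_to_vva_cached() is a hypothetical name, and the cache fields are the ones added by this patch; the real change lives inside rte_vhost_va_from_guest_pa()):

#include <stdint.h>
#include <rte_vhost.h>

static inline uint64_t
gpa_to_vva_cached(struct rte_vhost_memory *mem, uint64_t gpa, uint64_t *len)
{
	struct rte_vhost_mem_region_cache *c = &mem->cache_region;

	/* Hit: translate with the offset stored on the last miss. */
	if (gpa >= c->guest_phys_addr && gpa < c->guest_phys_addr_end) {
		if (*len > c->guest_phys_addr_end - gpa)
			*len = c->guest_phys_addr_end - gpa;
		/* offset was stored as guest_phys_addr - host_user_addr */
		return gpa - c->host_user_addr_offset;
	}

	return 0; /* miss: fall back to the region scan, then refill */
}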
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h index 7fb172912..d90235cd6 100644 --- a/lib/librte_vhost/rte_vhost.h +++ b/lib/librte_vhost/rte_vhost.h @@ -91,10 +91,18 @@ struct rte_vhost_mem_region { int fd; }; +struct rte_vhost_mem_region_cache { + uint64_t guest_phys_addr; + uint64_t guest_phys_addr_end; + int64_t host_user_addr_offset; + uint64_t size; +}; + /** * Memory structure includes region and mapping information. */ struct rte_vhost_memory { + struct rte_vhost_mem_region_cache cache_region; uint32_t nregions; struct rte_vhost_mem_region regions[]; }; @@ -232,11 +240,30 @@ rte_vhost_va_from_guest_pa(struct rte_vhost_memory *mem, struct rte_vhost_mem_region *r; uint32_t i; + struct rte_vhost_mem_region_cache *r_cache; + /* check with cached region */ + r_cache = &mem->cache_region; + if (likely(gpa >= r_cache->guest_phys_addr && gpa < + r_cache->guest_phys_addr_end)) { + if (unlikely(*len > r_cache->guest_phys_addr_end - gpa)) + *len = r_cache->guest_phys_addr_end - gpa; + + return gpa - r_cache->host_user_addr_offset; + } + + for (i = 0; i < mem->nregions; i++) { r = &mem->regions[i]; if (gpa >= r->guest_phys_addr && gpa < r->guest_phys_addr + r->size) { + r_cache->guest_phys_addr = r->guest_phys_addr; + r_cache->guest_phys_addr_end = r->guest_phys_addr + + r->size; + r_cache->size = r->size; + r_cache->host_user_addr_offset = r->guest_phys_addr - + r->host_user_addr; + if (unlikely(*len > r->guest_phys_addr + r->size - gpa)) *len = r->guest_phys_addr + r->size - gpa; From patchwork Thu Sep 19 16:36:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59395 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0A1681EB91; Thu, 19 Sep 2019 10:57:14 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id 3CA0D1E91F for ; Thu, 19 Sep 2019 10:56:47 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; d="scan'208";a="271146166" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:45 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:42 +0800 Message-Id: <20190919163643.24130-16-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 15/16] vhost: check whether disable software pre-fetch X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Disable software prefetching on Skylake and Cascade Lake platforms. The hardware prefetcher can already fetch the data vhost needs, so additional software prefetches degrade performance there.
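The mechanism is a plain compile-time switch: when the compiler reports AVX512 support (used as a proxy for Skylake/Cascade Lake server cores), the build defines DISABLE_SWPREFETCH and every explicit prefetch in the datapath compiles out. The per-site pattern looks like this (maybe_prefetch() is an illustrative wrapper, not what the patch adds; the patch guards each call site directly):

#include <rte_common.h>
#include <rte_prefetch.h>

static inline void
maybe_prefetch(const void *addr)
{
#ifndef DISABLE_SWPREFETCH
	rte_prefetch0(addr);
#else
	/* Rely on the hardware prefetcher instead. */
	RTE_SET_USED(addr);
#endif
}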
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile index 30839a001..5f3b42e56 100644 --- a/lib/librte_vhost/Makefile +++ b/lib/librte_vhost/Makefile @@ -16,6 +16,12 @@ CFLAGS += -I vhost_user CFLAGS += -fno-strict-aliasing LDLIBS += -lpthread +AVX512_SUPPORT=$(shell $(CC) -march=native -dM -E - </dev/null | grep AVX512F) + +ifneq ($(AVX512_SUPPORT),) +CFLAGS += -DDISABLE_SWPREFETCH +endif diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index 066514e43..357517cdd 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -1083,7 +1085,9 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, +#ifndef DISABLE_SWPREFETCH rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]); +#endif for (pkt_idx = 0; pkt_idx < count; pkt_idx++) { uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen; @@ -1093,7 +1097,9 @@ virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, UNROLL_PRAGMA(PRAGMA_PARAM) for (i = 0; i < PACKED_DESCS_BURST; i++) { +#ifndef DISABLE_SWPREFETCH rte_prefetch0((void *)(uintptr_t)desc_addrs[i]); +#endif hdrs[i] = (struct virtio_net_hdr_mrg_rxbuf *)desc_addrs[i]; lens[i] = pkts[i]->pkt_len + dev->vhost_hlen; } @@ -1647,7 +1653,9 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, */ rte_smp_rmb(); +#ifndef DISABLE_SWPREFETCH rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]); +#endif VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__); @@ -1818,7 +1826,9 @@ virtio_dev_tx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, UNROLL_PRAGMA(PRAGMA_PARAM) for (i = 0; i < PACKED_DESCS_BURST; i++) { +#ifndef DISABLE_SWPREFETCH rte_prefetch0((void *)(uintptr_t)desc_addrs[i]); +#endif rte_memcpy(rte_pktmbuf_mtod_offset(pkts[i], void *, 0), (void *)(uintptr_t)(desc_addrs[i] + buf_offset), pkts[i]->pkt_len); From patchwork Thu Sep 19 16:36:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 59396 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A9DD01EB97; Thu, 19 Sep 2019 10:57:16 +0200 (CEST) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by dpdk.org (Postfix) with ESMTP id ACA391E93D for ; Thu, 19 Sep 2019 10:56:48 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 19 Sep 2019 01:56:48 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,523,1559545200"; d="scan'208";a="271146169" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.142]) by orsmga001.jf.intel.com with ESMTP; 19 Sep 2019 01:56:46 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, Marvin Liu Date: Fri, 20 Sep 2019 00:36:43 +0800 Message-Id: <20190919163643.24130-17-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com> References: <20190905161421.55981-2-yong.liu@intel.com> <20190919163643.24130-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v2 16/16] vhost: optimize packed ring dequeue when in-order X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" When the VIRTIO_F_IN_ORDER feature is negotiated, vhost can optimize the dequeue function by updating only the first used descriptor.
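With VIRTIO_F_IN_ORDER the driver agrees that buffers are used in the order they were made available, so one used-descriptor write can stand for a whole batch: publishing the id of the last buffer implicitly returns everything before it. A condensed sketch of that final store (head_idx, last_buf_id and used_flags are illustrative names; the wrap-counter handling matches the rest of this series):

	/* One store releases the whole in-order batch up to last_buf_id. */
	vq->desc_packed[head_idx].id = last_buf_id;
	vq->desc_packed[head_idx].len = 0;
	rte_smp_wmb(); /* make id/len visible before ownership flips */
	vq->desc_packed[head_idx].flags = used_flags;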
Signed-off-by: Marvin Liu diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c index 357517cdd..a7bb4ec79 100644 --- a/lib/librte_vhost/virtio_net.c +++ b/lib/librte_vhost/virtio_net.c @@ -31,6 +31,12 @@ rxvq_is_mergeable(struct virtio_net *dev) return dev->features & (1ULL << VIRTIO_NET_F_MRG_RXBUF); } +static __rte_always_inline bool +virtio_net_is_inorder(struct virtio_net *dev) +{ + return dev->features & (1ULL << VIRTIO_F_IN_ORDER); +} + static bool is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring) { @@ -213,6 +219,30 @@ flush_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, } } +static __rte_always_inline void +update_dequeue_burst_packed_inorder(struct vhost_virtqueue *vq, uint16_t id) +{ + vq->shadow_used_packed[0].id = id; + + if (!vq->shadow_used_idx) { + vq->dequeue_shadow_head = vq->last_used_idx; + vq->shadow_used_packed[0].len = 0; + vq->shadow_used_packed[0].count = 1; + vq->shadow_used_packed[0].used_idx = vq->last_used_idx; + vq->shadow_used_packed[0].used_wrap_counter = + vq->used_wrap_counter; + + vq->shadow_used_idx = 1; + + } + + vq->last_used_idx += PACKED_DESCS_BURST; + if (vq->last_used_idx >= vq->size) { + vq->used_wrap_counter ^= 1; + vq->last_used_idx -= vq->size; + } +} + static __rte_always_inline void update_dequeue_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, uint16_t *ids) @@ -315,7 +345,6 @@ update_dequeue_shadow_packed(struct vhost_virtqueue *vq, uint16_t buf_id, else vq->desc_packed[vq->last_used_idx].flags = VIRTIO_TX_USED_WRAP_FLAG; - } vq->last_used_idx += count; @@ -326,6 +355,31 @@ update_dequeue_shadow_packed(struct vhost_virtqueue *vq, uint16_t buf_id, } } +static __rte_always_inline void +update_dequeue_shadow_packed_inorder(struct vhost_virtqueue *vq, + uint16_t buf_id, uint16_t count) +{ + vq->shadow_used_packed[0].id = buf_id; + + if (!vq->shadow_used_idx) { + vq->dequeue_shadow_head = vq->last_used_idx; + + vq->shadow_used_packed[0].len = 0; + vq->shadow_used_packed[0].count = count; + vq->shadow_used_packed[0].used_idx = vq->last_used_idx; + vq->shadow_used_packed[0].used_wrap_counter = + vq->used_wrap_counter; + + vq->shadow_used_idx = 1; + } + + vq->last_used_idx += count; + + if (vq->last_used_idx >= vq->size) { + vq->used_wrap_counter ^= 1; + vq->last_used_idx -= vq->size; + } +} static inline void do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq) @@ -1834,7 +1888,12 @@ virtio_dev_tx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, pkts[i]->pkt_len); } - update_dequeue_burst_packed(dev, vq, ids); + if (virtio_net_is_inorder(dev)) + update_dequeue_burst_packed_inorder(vq, + ids[PACKED_BURST_MASK]); + else + update_dequeue_burst_packed(dev, vq, ids); + if (virtio_net_with_host_offload(dev)) { UNROLL_PRAGMA(PRAGMA_PARAM) for (i = 0; i < PACKED_DESCS_BURST; i++) { @@ -1897,7 +1956,10 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, &desc_count)) return -1; - update_dequeue_shadow_packed(vq, buf_id, desc_count); + if (virtio_net_is_inorder(dev)) + update_dequeue_shadow_packed_inorder(vq, buf_id, desc_count); + else + update_dequeue_shadow_packed(vq, buf_id, desc_count); vq->last_avail_idx += desc_count; if (vq->last_avail_idx >= vq->size) {