From patchwork Thu Apr 23 12:30:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69170 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7F6A7A00C2; Thu, 23 Apr 2020 06:56:03 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8D2461D408; Thu, 23 Apr 2020 06:55:55 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 86EE71D150 for ; Thu, 23 Apr 2020 06:55:51 +0200 (CEST) IronPort-SDR: rg8Vt1QqJUCtvmUimqy/jIX4DbjbvEfgWsTiKIeyG64RZAN20ZA9o2pWiK6Iue+z8wkpnNUaxu 6Qpoa+bp84nw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:55:50 -0700 IronPort-SDR: ER+TxvHP7UfMDvPGrhS7exf3ERYxLKRmPXFHbquor0MA0M8FW+a30Bg8Z7/wuLM+xBbRuLA7IR vcOEjSczqaJg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772681" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:55:48 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:30:58 +0800 Message-Id: <20200423123106.78513-2-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 1/9] net/virtio: add Rx free threshold setting X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Introduce free threshold setting in Rx queue, default value of it is 32. Limiated threshold size to multiple of four as only vectorized packed Rx function will utilize it. Virtio driver will rearm Rx queue when more than rx_free_thresh descs were dequeued. Signed-off-by: Marvin Liu Reviewed-by: Maxime Coquelin diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 060410577..94ba7a3ec 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -936,6 +936,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, struct virtio_hw *hw = dev->data->dev_private; struct virtqueue *vq = hw->vqs[vtpci_queue_idx]; struct virtnet_rx *rxvq; + uint16_t rx_free_thresh; PMD_INIT_FUNC_TRACE(); @@ -944,6 +945,28 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, return -EINVAL; } + rx_free_thresh = rx_conf->rx_free_thresh; + if (rx_free_thresh == 0) + rx_free_thresh = + RTE_MIN(vq->vq_nentries / 4, DEFAULT_RX_FREE_THRESH); + + if (rx_free_thresh & 0x3) { + RTE_LOG(ERR, PMD, "rx_free_thresh must be multiples of four." + " (rx_free_thresh=%u port=%u queue=%u)\n", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + + if (rx_free_thresh >= vq->vq_nentries) { + RTE_LOG(ERR, PMD, "rx_free_thresh must be less than the " + "number of RX entries (%u)." 
+ " (rx_free_thresh=%u port=%u queue=%u)\n", + vq->vq_nentries, + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + if (nb_desc == 0 || nb_desc > vq->vq_nentries) nb_desc = vq->vq_nentries; vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 58ad7309a..6301c56b2 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -18,6 +18,8 @@ struct rte_mbuf; +#define DEFAULT_RX_FREE_THRESH 32 + /* * Per virtio_ring.h in Linux. * For virtio_pci on SMP, we don't need to order with respect to MMIO From patchwork Thu Apr 23 12:30:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69171 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 37A35A00C2; Thu, 23 Apr 2020 06:56:15 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 149941D508; Thu, 23 Apr 2020 06:55:57 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 9C3D41D16F for ; Thu, 23 Apr 2020 06:55:53 +0200 (CEST) IronPort-SDR: hMcBm86G/TJWu4VLldRxpb6yk5Q/hJtTOYANPPdfsbVA65eRZLRmvMRXtVixKWYf3Xi49Mamrn KNnXH1CgsLpA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:55:52 -0700 IronPort-SDR: WTDiJLJvE1vHQlxHpaQGm1Msep/sfPcnrWuQTq6bo22kso0mwgrULpD/hN2f4tCCxvNijcAtmC qsc6fSvtyE7w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772691" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:55:50 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:30:59 +0800 Message-Id: <20200423123106.78513-3-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 2/9] net/virtio: enable vectorized path X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Previously, virtio split ring vectorized path is enabled as default. This is not suitable for everyone because of that path not follow virtio spec. Add new config for virtio vectorized path selection. By default vectorized path is disabled. 
Signed-off-by: Marvin Liu diff --git a/config/common_base b/config/common_base index 00d8d0792..334a26a17 100644 --- a/config/common_base +++ b/config/common_base @@ -456,6 +456,7 @@ CONFIG_RTE_LIBRTE_VIRTIO_PMD=y CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_RX=n CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX=n CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DUMP=n +CONFIG_RTE_LIBRTE_VIRTIO_INC_VECTOR=n # # Compile virtio device emulation inside virtio PMD driver diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile index c9edb84ee..4b69827ab 100644 --- a/drivers/net/virtio/Makefile +++ b/drivers/net/virtio/Makefile @@ -28,6 +28,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx.c SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_ethdev.c SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple.c +ifeq ($(CONFIG_RTE_LIBRTE_VIRTIO_INC_VECTOR),y) ifeq ($(CONFIG_RTE_ARCH_X86),y) SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple_sse.c else ifeq ($(CONFIG_RTE_ARCH_PPC_64),y) @@ -35,6 +36,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple_altivec.c else ifneq ($(filter y,$(CONFIG_RTE_ARCH_ARM) $(CONFIG_RTE_ARCH_ARM64)),) SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple_neon.c endif +endif ifeq ($(CONFIG_RTE_VIRTIO_USER),y) SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_user/vhost_user.c diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build index 15150eea1..ce3525ef5 100644 --- a/drivers/net/virtio/meson.build +++ b/drivers/net/virtio/meson.build @@ -8,6 +8,7 @@ sources += files('virtio_ethdev.c', 'virtqueue.c') deps += ['kvargs', 'bus_pci'] +dpdk_conf.set('RTE_LIBRTE_VIRTIO_INC_VECTOR', 1) if arch_subdir == 'x86' sources += files('virtio_rxtx_simple_sse.c') elif arch_subdir == 'ppc' From patchwork Thu Apr 23 12:31:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69172 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0F35CA00C2; Thu, 23 Apr 2020 06:56:26 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 89AD61D526; Thu, 23 Apr 2020 06:55:58 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id C07E11D181 for ; Thu, 23 Apr 2020 06:55:54 +0200 (CEST) IronPort-SDR: Z0VRHKwu7sJR7SWA09303n84LoQ7k74yqkLnG03JX00B1pspmVskWJDJRgRLXl/FHI/QCbFlz4 SGPnkKj0gvdQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:55:54 -0700 IronPort-SDR: F4oVHtsaV2kgy804KIsEzdI1TEB5CYGkDm63ySiZDcygD4iUWwCiCYyG1kOUXTyDwb8exqKaEQ AbbT2ZjlJbww== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772704" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:55:52 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:31:00 +0800 Message-Id: <20200423123106.78513-4-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: 
<20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 3/9] net/virtio: inorder should depend on feature bit X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Ring initialzation is different when inorder feature negotiated. This action should dependent on negotiated feature bits. Signed-off-by: Marvin Liu Reviewed-by: Maxime Coquelin diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 94ba7a3ec..e450477e8 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -989,6 +989,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx) struct rte_mbuf *m; uint16_t desc_idx; int error, nbufs, i; + bool in_order = vtpci_with_feature(hw, VIRTIO_F_IN_ORDER); PMD_INIT_FUNC_TRACE(); @@ -1018,7 +1019,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx) virtio_rxq_rearm_vec(rxvq); nbufs += RTE_VIRTIO_VPMD_RX_REARM_THRESH; } - } else if (hw->use_inorder_rx) { + } else if (!vtpci_packed_queue(vq->hw) && in_order) { if ((!virtqueue_full(vq))) { uint16_t free_cnt = vq->vq_free_cnt; struct rte_mbuf *pkts[free_cnt]; @@ -1133,7 +1134,7 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev, PMD_INIT_FUNC_TRACE(); if (!vtpci_packed_queue(hw)) { - if (hw->use_inorder_tx) + if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) vq->vq_split.ring.desc[vq->vq_nentries - 1].next = 0; } @@ -2046,7 +2047,7 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, struct virtio_hw *hw = vq->hw; uint16_t hdr_size = hw->vtnet_hdr_size; uint16_t nb_tx = 0; - bool in_order = hw->use_inorder_tx; + bool in_order = vtpci_with_feature(hw, VIRTIO_F_IN_ORDER); if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts)) return nb_tx; From patchwork Thu Apr 23 12:31:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69173 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id AD1A2A00C2; Thu, 23 Apr 2020 06:56:35 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 429651D53C; Thu, 23 Apr 2020 06:56:00 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id B893D1D44F for ; Thu, 23 Apr 2020 06:55:56 +0200 (CEST) IronPort-SDR: c0991XGWgzyRcZ76Hp9uJfHL1VW79BBdIAK1nlEs8ErTUm2fMyGTR7Nnj5dFNIp+uAAzcqdsJO 44a06jhCkI6Q== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:55:56 -0700 IronPort-SDR: 6jTbt6mdA71hiAmMOJOOIWE1EWqIpsqP69ep7n6tIf9tb3F3F2pJvXAtiiWRWFWC7A5eJEO91s 001nnxcteFZw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772715" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:55:54 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: 
harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:31:01 +0800 Message-Id: <20200423123106.78513-5-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 4/9] net/virtio-user: add vectorized path parameter X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add new parameter "vectorized" which can select vectorized path explicitly. This parameter will work when RTE_LIBRTE_VIRTIO_INC_VECTOR option is yes. When "vectorized" is set, driver will check both compiling environment and running environment when selecting path. Signed-off-by: Marvin Liu diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index 37766cbb6..361c834a9 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -1551,8 +1551,8 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev) eth_dev->rx_pkt_burst = &virtio_recv_pkts_packed; } } else { - if (hw->use_simple_rx) { - PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u", + if (hw->use_vec_rx) { + PMD_INIT_LOG(INFO, "virtio: using vectorized Rx path on port %u", eth_dev->data->port_id); eth_dev->rx_pkt_burst = virtio_recv_pkts_vec; } else if (hw->use_inorder_rx) { @@ -2257,33 +2257,33 @@ virtio_dev_configure(struct rte_eth_dev *dev) return -EBUSY; } - hw->use_simple_rx = 1; + hw->use_vec_rx = 1; if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) { hw->use_inorder_tx = 1; hw->use_inorder_rx = 1; - hw->use_simple_rx = 0; + hw->use_vec_rx = 0; } if (vtpci_packed_queue(hw)) { - hw->use_simple_rx = 0; + hw->use_vec_rx = 0; hw->use_inorder_rx = 0; } #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) { - hw->use_simple_rx = 0; + hw->use_vec_rx = 0; } #endif if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) { - hw->use_simple_rx = 0; + hw->use_vec_rx = 0; } if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_TCP_LRO | DEV_RX_OFFLOAD_VLAN_STRIP)) - hw->use_simple_rx = 0; + hw->use_vec_rx = 0; return 0; } diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h index bd89357e4..668e688e1 100644 --- a/drivers/net/virtio/virtio_pci.h +++ b/drivers/net/virtio/virtio_pci.h @@ -253,7 +253,8 @@ struct virtio_hw { uint8_t vlan_strip; uint8_t use_msix; uint8_t modern; - uint8_t use_simple_rx; + uint8_t use_vec_rx; + uint8_t use_vec_tx; uint8_t use_inorder_rx; uint8_t use_inorder_tx; uint8_t weak_barriers; diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index e450477e8..84f4cf946 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -996,7 +996,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx) /* Allocate blank mbufs for the each rx descriptor */ nbufs = 0; - if (hw->use_simple_rx) { + if (hw->use_vec_rx && !vtpci_packed_queue(hw)) { for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) { vq->vq_split.ring.avail->ring[desc_idx] = desc_idx; @@ -1014,7 +1014,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx) &rxvq->fake_mbuf; } - if (hw->use_simple_rx) { + if (hw->use_vec_rx && !vtpci_packed_queue(hw)) { while 
(vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) { virtio_rxq_rearm_vec(rxvq); nbufs += RTE_VIRTIO_VPMD_RX_REARM_THRESH; diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c index 953f00d72..5c338cf44 100644 --- a/drivers/net/virtio/virtio_user_ethdev.c +++ b/drivers/net/virtio/virtio_user_ethdev.c @@ -452,6 +452,8 @@ static const char *valid_args[] = { VIRTIO_USER_ARG_PACKED_VQ, #define VIRTIO_USER_ARG_SPEED "speed" VIRTIO_USER_ARG_SPEED, +#define VIRTIO_USER_ARG_VECTORIZED "vectorized" + VIRTIO_USER_ARG_VECTORIZED, NULL }; @@ -525,7 +527,8 @@ virtio_user_eth_dev_alloc(struct rte_vdev_device *vdev) */ hw->use_msix = 1; hw->modern = 0; - hw->use_simple_rx = 0; + hw->use_vec_rx = 0; + hw->use_vec_tx = 0; hw->use_inorder_rx = 0; hw->use_inorder_tx = 0; hw->virtio_user_dev = dev; @@ -559,6 +562,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev) uint64_t mrg_rxbuf = 1; uint64_t in_order = 1; uint64_t packed_vq = 0; + uint64_t vectorized = 0; char *path = NULL; char *ifname = NULL; char *mac_addr = NULL; @@ -675,6 +679,17 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev) } } +#ifdef RTE_LIBRTE_VIRTIO_INC_VECTOR + if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_VECTORIZED) == 1) { + if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_VECTORIZED, + &get_integer_arg, &vectorized) < 0) { + PMD_INIT_LOG(ERR, "error to parse %s", + VIRTIO_USER_ARG_VECTORIZED); + goto end; + } + } +#endif + if (queues > 1 && cq == 0) { PMD_INIT_LOG(ERR, "multi-q requires ctrl-q"); goto end; @@ -727,6 +742,23 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev) goto end; } + if (vectorized) { + if (packed_vq) { +#if defined(CC_AVX512_SUPPORT) + hw->use_vec_rx = 1; + hw->use_vec_tx = 1; +#else + PMD_INIT_LOG(INFO, + "building environment do not match packed ring vectorized requirement"); +#endif + } else { + hw->use_vec_rx = 1; + } + } else { + hw->use_vec_rx = 0; + hw->use_vec_tx = 0; + } + rte_eth_dev_probing_finish(eth_dev); ret = 0; @@ -785,4 +817,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_virtio_user, "mrg_rxbuf=<0|1> " "in_order=<0|1> " "packed_vq=<0|1> " - "speed="); + "speed= " + "vectorized=<0|1>"); diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c index 0b4e3bf3e..ca23180de 100644 --- a/drivers/net/virtio/virtqueue.c +++ b/drivers/net/virtio/virtqueue.c @@ -32,7 +32,8 @@ virtqueue_detach_unused(struct virtqueue *vq) end = (vq->vq_avail_idx + vq->vq_free_cnt) & (vq->vq_nentries - 1); for (idx = 0; idx < vq->vq_nentries; idx++) { - if (hw->use_simple_rx && type == VTNET_RQ) { + if (hw->use_vec_rx && !vtpci_packed_queue(hw) && + type == VTNET_RQ) { if (start <= end && idx >= start && idx < end) continue; if (start > end && (idx >= start || idx < end)) @@ -97,7 +98,7 @@ virtqueue_rxvq_flush_split(struct virtqueue *vq) for (i = 0; i < nb_used; i++) { used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1); uep = &vq->vq_split.ring.used->ring[used_idx]; - if (hw->use_simple_rx) { + if (hw->use_vec_rx) { desc_idx = used_idx; rte_pktmbuf_free(vq->sw_ring[desc_idx]); vq->vq_free_cnt++; @@ -121,7 +122,7 @@ virtqueue_rxvq_flush_split(struct virtqueue *vq) vq->vq_used_cons_idx++; } - if (hw->use_simple_rx) { + if (hw->use_vec_rx) { while (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) { virtio_rxq_rearm_vec(rxq); if (virtqueue_kick_prepare(vq)) From patchwork Thu Apr 23 12:31:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69174 
X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C18E5A00C2; Thu, 23 Apr 2020 06:56:49 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4D4791D54C; Thu, 23 Apr 2020 06:56:03 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 110601D535 for ; Thu, 23 Apr 2020 06:55:58 +0200 (CEST) IronPort-SDR: IwHmwbYKKYH6fpVbtIvIsaIGNjUcae9TkZTcDr5ZIqH/920SMuMIMnmCLYwn06ztD0IfXP4tL0 lpcnDEUkNpsw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:55:58 -0700 IronPort-SDR: ald0cmeS+9mlmiL3fYWNQZvzUV+BKJMi4o5mRUHaC3QB48BCqHdOBGfmsYbeyKvRokb1AREIl4 d0eXgtp42mEQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772723" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:55:56 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:31:02 +0800 Message-Id: <20200423123106.78513-6-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 5/9] net/virtio: add vectorized packed ring Rx path X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Optimize the packed ring Rx path for the case where AVX512 is enabled and neither mergeable buffers nor Rx LRO offloading are required. The optimization follows the same approach as vhost: the path is split into batch and single functions, and the batch function is further optimized with vector instructions. Also pad the descriptor extra structure to a 16-byte alignment, so that four elements fit in one batch.
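To make the sizing concrete (illustration only, not part of the diff): a packed ring descriptor is 16 bytes, so one 64-byte cache line, and likewise one 512-bit AVX512 load, covers exactly four descriptors; padding struct vq_desc_extra from 12 to 16 bytes gives the bookkeeping array the same four-per-cache-line layout. A self-contained sketch of that arithmetic, with the two structures mirrored here for illustration and an x86 cache line size of 64 bytes assumed:

/* Illustration of the size relationships the patch relies on. */
#include <stdint.h>

struct vring_packed_desc {              /* 16 bytes, per the virtio spec */
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;
};

struct vq_desc_extra {                  /* padded from 12 to 16 bytes */
        void *cookie;
        uint16_t ndescs;
        uint16_t next;
        uint8_t padding[4];
} __attribute__((packed, aligned(16)));

/* one cache line (one _mm512_loadu_si512) covers four descriptors */
#define PACKED_BATCH_SIZE (64 / sizeof(struct vring_packed_desc))

_Static_assert(sizeof(struct vring_packed_desc) == 16, "16B descriptor");
_Static_assert(sizeof(struct vq_desc_extra) == 16, "four per cache line");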
Signed-off-by: Marvin Liu diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile index 4b69827ab..de0b00e50 100644 --- a/drivers/net/virtio/Makefile +++ b/drivers/net/virtio/Makefile @@ -36,6 +36,41 @@ SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple_altivec.c else ifneq ($(filter y,$(CONFIG_RTE_ARCH_ARM) $(CONFIG_RTE_ARCH_ARM64)),) SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple_neon.c endif + +ifneq ($(FORCE_DISABLE_AVX512), y) + CC_AVX512_SUPPORT=\ + $(shell $(CC) -march=native -dM -E - &1 | \ + sed '/./{H;$$!d} ; x ; /AVX512F/!d; /AVX512BW/!d; /AVX512VL/!d' | \ + grep -q AVX512 && echo 1) +endif + +ifeq ($(CC_AVX512_SUPPORT), 1) +CFLAGS += -DCC_AVX512_SUPPORT +SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_packed_avx.c + +ifeq ($(RTE_TOOLCHAIN), gcc) +ifeq ($(shell test $(GCC_VERSION) -ge 83 && echo 1), 1) +CFLAGS += -DVIRTIO_GCC_UNROLL_PRAGMA +endif +endif + +ifeq ($(RTE_TOOLCHAIN), clang) +ifeq ($(shell test $(CLANG_MAJOR_VERSION)$(CLANG_MINOR_VERSION) -ge 37 && echo 1), 1) +CFLAGS += -DVIRTIO_CLANG_UNROLL_PRAGMA +endif +endif + +ifeq ($(RTE_TOOLCHAIN), icc) +ifeq ($(shell test $(ICC_MAJOR_VERSION) -ge 16 && echo 1), 1) +CFLAGS += -DVIRTIO_ICC_UNROLL_PRAGMA +endif +endif + +CFLAGS_virtio_rxtx_packed_avx.o += -mavx512f -mavx512bw -mavx512vl +ifeq ($(shell test $(GCC_VERSION) -ge 100 && echo 1), 1) +CFLAGS_virtio_rxtx_packed_avx.o += -Wno-zero-length-bounds +endif +endif endif ifeq ($(CONFIG_RTE_VIRTIO_USER),y) diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build index ce3525ef5..39b3605d9 100644 --- a/drivers/net/virtio/meson.build +++ b/drivers/net/virtio/meson.build @@ -10,6 +10,20 @@ deps += ['kvargs', 'bus_pci'] dpdk_conf.set('RTE_LIBRTE_VIRTIO_INC_VECTOR', 1) if arch_subdir == 'x86' + if '-mno-avx512f' not in machine_args + if cc.has_argument('-mavx512f') and cc.has_argument('-mavx512vl') and cc.has_argument('-mavx512bw') + cflags += ['-mavx512f', '-mavx512bw', '-mavx512vl'] + cflags += ['-DCC_AVX512_SUPPORT'] + if (toolchain == 'gcc' and cc.version().version_compare('>=8.3.0')) + cflags += '-DVHOST_GCC_UNROLL_PRAGMA' + elif (toolchain == 'clang' and cc.version().version_compare('>=3.7.0')) + cflags += '-DVHOST_CLANG_UNROLL_PRAGMA' + elif (toolchain == 'icc' and cc.version().version_compare('>=16.0.0')) + cflags += '-DVHOST_ICC_UNROLL_PRAGMA' + endif + sources += files('virtio_rxtx_packed_avx.c') + endif + endif sources += files('virtio_rxtx_simple_sse.c') elif arch_subdir == 'ppc' sources += files('virtio_rxtx_simple_altivec.c') diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h index febaf17a8..5c112cac7 100644 --- a/drivers/net/virtio/virtio_ethdev.h +++ b/drivers/net/virtio/virtio_ethdev.h @@ -105,6 +105,9 @@ uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t virtio_recv_pkts_packed_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + int eth_virtio_dev_init(struct rte_eth_dev *eth_dev); void virtio_interrupt_handler(void *param); diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 84f4cf946..7b65d0b0a 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -1246,7 +1246,6 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr) return 0; } -#define VIRTIO_MBUF_BURST_SZ 64 #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc)) uint16_t 
virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -2329,3 +2328,11 @@ virtio_xmit_pkts_inorder(void *tx_queue, return nb_tx; } + +__rte_weak uint16_t +virtio_recv_pkts_packed_vec(void *rx_queue __rte_unused, + struct rte_mbuf **rx_pkts __rte_unused, + uint16_t nb_pkts __rte_unused) +{ + return 0; +} diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.c b/drivers/net/virtio/virtio_rxtx_packed_avx.c new file mode 100644 index 000000000..3380f1da5 --- /dev/null +++ b/drivers/net/virtio/virtio_rxtx_packed_avx.c @@ -0,0 +1,374 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2020 Intel Corporation + */ + +#include +#include +#include +#include +#include + +#include + +#include "virtio_logs.h" +#include "virtio_ethdev.h" +#include "virtio_pci.h" +#include "virtqueue.h" + +#define BYTE_SIZE 8 +/* flag bits offset in packed ring desc higher 64bits */ +#define FLAGS_BITS_OFFSET ((offsetof(struct vring_packed_desc, flags) - \ + offsetof(struct vring_packed_desc, len)) * BYTE_SIZE) + +#define PACKED_FLAGS_MASK ((0ULL | VRING_PACKED_DESC_F_AVAIL_USED) << \ + FLAGS_BITS_OFFSET) + +#define PACKED_BATCH_SIZE (RTE_CACHE_LINE_SIZE / \ + sizeof(struct vring_packed_desc)) +#define PACKED_BATCH_MASK (PACKED_BATCH_SIZE - 1) + +#ifdef VIRTIO_GCC_UNROLL_PRAGMA +#define virtio_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll 4") \ + for (iter = val; iter < size; iter++) +#endif + +#ifdef VIRTIO_CLANG_UNROLL_PRAGMA +#define virtio_for_each_try_unroll(iter, val, size) _Pragma("unroll 4") \ + for (iter = val; iter < size; iter++) +#endif + +#ifdef VIRTIO_ICC_UNROLL_PRAGMA +#define virtio_for_each_try_unroll(iter, val, size) _Pragma("unroll (4)") \ + for (iter = val; iter < size; iter++) +#endif + +#ifndef virtio_for_each_try_unroll +#define virtio_for_each_try_unroll(iter, val, num) \ + for (iter = val; iter < num; iter++) +#endif + + +static inline void +virtio_update_batch_stats(struct virtnet_stats *stats, + uint16_t pkt_len1, + uint16_t pkt_len2, + uint16_t pkt_len3, + uint16_t pkt_len4) +{ + stats->bytes += pkt_len1; + stats->bytes += pkt_len2; + stats->bytes += pkt_len3; + stats->bytes += pkt_len4; +} +/* Optionally fill offload information in structure */ +static inline int +virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr) +{ + struct rte_net_hdr_lens hdr_lens; + uint32_t hdrlen, ptype; + int l4_supported = 0; + + /* nothing to do */ + if (hdr->flags == 0) + return 0; + + /* GSO not support in vec path, skip check */ + m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN; + + ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK); + m->packet_type = ptype; + if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP || + (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP || + (ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_SCTP) + l4_supported = 1; + + if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) { + hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len; + if (hdr->csum_start <= hdrlen && l4_supported) { + m->ol_flags |= PKT_RX_L4_CKSUM_NONE; + } else { + /* Unknown proto or tunnel, do sw cksum. We can assume + * the cksum field is in the first segment since the + * buffers we provided to the host are large enough. + * In case of SCTP, this will be wrong since it's a CRC + * but there's nothing we can do. 
+ */ + uint16_t csum = 0, off; + + rte_raw_cksum_mbuf(m, hdr->csum_start, + rte_pktmbuf_pkt_len(m) - hdr->csum_start, + &csum); + if (likely(csum != 0xffff)) + csum = ~csum; + off = hdr->csum_offset + hdr->csum_start; + if (rte_pktmbuf_data_len(m) >= off + 1) + *rte_pktmbuf_mtod_offset(m, uint16_t *, + off) = csum; + } + } else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) { + m->ol_flags |= PKT_RX_L4_CKSUM_GOOD; + } + + return 0; +} + +static inline uint16_t +virtqueue_dequeue_batch_packed_vec(struct virtnet_rx *rxvq, + struct rte_mbuf **rx_pkts) +{ + struct virtqueue *vq = rxvq->vq; + struct virtio_hw *hw = vq->hw; + uint16_t hdr_size = hw->vtnet_hdr_size; + uint64_t addrs[PACKED_BATCH_SIZE]; + uint16_t id = vq->vq_used_cons_idx; + uint8_t desc_stats; + uint16_t i; + void *desc_addr; + + if (id & PACKED_BATCH_MASK) + return -1; + + if (unlikely((id + PACKED_BATCH_SIZE) > vq->vq_nentries)) + return -1; + + /* only care avail/used bits */ + __m512i v_mask = _mm512_maskz_set1_epi64(0xaa, PACKED_FLAGS_MASK); + desc_addr = &vq->vq_packed.ring.desc[id]; + + __m512i v_desc = _mm512_loadu_si512(desc_addr); + __m512i v_flag = _mm512_and_epi64(v_desc, v_mask); + + __m512i v_used_flag = _mm512_setzero_si512(); + if (vq->vq_packed.used_wrap_counter) + v_used_flag = _mm512_maskz_set1_epi64(0xaa, PACKED_FLAGS_MASK); + + /* Check all descs are used */ + desc_stats = _mm512_cmpneq_epu64_mask(v_flag, v_used_flag); + if (desc_stats) + return -1; + + virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + rx_pkts[i] = (struct rte_mbuf *)vq->vq_descx[id + i].cookie; + rte_packet_prefetch(rte_pktmbuf_mtod(rx_pkts[i], void *)); + + addrs[i] = (uint64_t)rx_pkts[i]->rx_descriptor_fields1; + } + + /* + * load len from desc, store into mbuf pkt_len and data_len + * len limiated by l6bit buf_len, pkt_len[16:31] can be ignored + */ + const __mmask16 mask = 0x6 | 0x6 << 4 | 0x6 << 8 | 0x6 << 12; + __m512i values = _mm512_maskz_shuffle_epi32(mask, v_desc, 0xAA); + + /* reduce hdr_len from pkt_len and data_len */ + __m512i mbuf_len_offset = _mm512_maskz_set1_epi32(mask, + (uint32_t)-hdr_size); + + __m512i v_value = _mm512_add_epi32(values, mbuf_len_offset); + + /* assert offset of data_len */ + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) != + offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); + + __m512i v_index = _mm512_set_epi64(addrs[3] + 8, addrs[3], + addrs[2] + 8, addrs[2], + addrs[1] + 8, addrs[1], + addrs[0] + 8, addrs[0]); + /* batch store into mbufs */ + _mm512_i64scatter_epi64(0, v_index, v_value, 1); + + if (hw->has_rx_offload) { + virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + char *addr = (char *)rx_pkts[i]->buf_addr + + RTE_PKTMBUF_HEADROOM - hdr_size; + virtio_vec_rx_offload(rx_pkts[i], + (struct virtio_net_hdr *)addr); + } + } + + virtio_update_batch_stats(&rxvq->stats, rx_pkts[0]->pkt_len, + rx_pkts[1]->pkt_len, rx_pkts[2]->pkt_len, + rx_pkts[3]->pkt_len); + + vq->vq_free_cnt += PACKED_BATCH_SIZE; + + vq->vq_used_cons_idx += PACKED_BATCH_SIZE; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + + return 0; +} + +static uint16_t +virtqueue_dequeue_single_packed_vec(struct virtnet_rx *rxvq, + struct rte_mbuf **rx_pkts) +{ + uint16_t used_idx, id; + uint32_t len; + struct virtqueue *vq = rxvq->vq; + struct virtio_hw *hw = vq->hw; + uint32_t hdr_size = hw->vtnet_hdr_size; + struct virtio_net_hdr *hdr; + struct vring_packed_desc *desc; + struct rte_mbuf *cookie; + + desc = 
vq->vq_packed.ring.desc; + used_idx = vq->vq_used_cons_idx; + if (!desc_is_used(&desc[used_idx], vq)) + return -1; + + len = desc[used_idx].len; + id = desc[used_idx].id; + cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie; + if (unlikely(cookie == NULL)) { + PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u", + vq->vq_used_cons_idx); + return -1; + } + rte_prefetch0(cookie); + rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *)); + + cookie->data_off = RTE_PKTMBUF_HEADROOM; + cookie->ol_flags = 0; + cookie->pkt_len = (uint32_t)(len - hdr_size); + cookie->data_len = (uint32_t)(len - hdr_size); + + hdr = (struct virtio_net_hdr *)((char *)cookie->buf_addr + + RTE_PKTMBUF_HEADROOM - hdr_size); + if (hw->has_rx_offload) + virtio_vec_rx_offload(cookie, hdr); + + *rx_pkts = cookie; + + rxvq->stats.bytes += cookie->pkt_len; + + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + + return 0; +} + +static inline void +virtio_recv_refill_packed_vec(struct virtnet_rx *rxvq, + struct rte_mbuf **cookie, + uint16_t num) +{ + struct virtqueue *vq = rxvq->vq; + struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + uint16_t flags = vq->vq_packed.cached_flags; + struct virtio_hw *hw = vq->hw; + struct vq_desc_extra *dxp; + uint16_t idx, i; + uint16_t batch_num, total_num = 0; + uint16_t head_idx = vq->vq_avail_idx; + uint16_t head_flag = vq->vq_packed.cached_flags; + uint64_t addr; + + do { + idx = vq->vq_avail_idx; + + batch_num = PACKED_BATCH_SIZE; + if (unlikely((idx + PACKED_BATCH_SIZE) > vq->vq_nentries)) + batch_num = vq->vq_nentries - idx; + if (unlikely((total_num + batch_num) > num)) + batch_num = num - total_num; + + virtio_for_each_try_unroll(i, 0, batch_num) { + dxp = &vq->vq_descx[idx + i]; + dxp->cookie = (void *)cookie[total_num + i]; + + addr = VIRTIO_MBUF_ADDR(cookie[total_num + i], vq) + + RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size; + start_dp[idx + i].addr = addr; + start_dp[idx + i].len = cookie[total_num + i]->buf_len + - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size; + if (total_num || i) { + virtqueue_store_flags_packed(&start_dp[idx + i], + flags, hw->weak_barriers); + } + } + + vq->vq_avail_idx += batch_num; + if (vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= + VRING_PACKED_DESC_F_AVAIL_USED; + flags = vq->vq_packed.cached_flags; + } + total_num += batch_num; + } while (total_num < num); + + virtqueue_store_flags_packed(&start_dp[head_idx], head_flag, + hw->weak_barriers); + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); +} + +uint16_t +virtio_recv_pkts_packed_vec(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct virtnet_rx *rxvq = rx_queue; + struct virtqueue *vq = rxvq->vq; + struct virtio_hw *hw = vq->hw; + uint16_t num, nb_rx = 0; + uint32_t nb_enqueued = 0; + uint16_t free_cnt = vq->vq_free_thresh; + + if (unlikely(hw->started == 0)) + return nb_rx; + + num = RTE_MIN(VIRTIO_MBUF_BURST_SZ, nb_pkts); + if (likely(num > PACKED_BATCH_SIZE)) + num = num - ((vq->vq_used_cons_idx + num) % PACKED_BATCH_SIZE); + + while (num) { + if (!virtqueue_dequeue_batch_packed_vec(rxvq, + &rx_pkts[nb_rx])) { + nb_rx += PACKED_BATCH_SIZE; + num -= PACKED_BATCH_SIZE; + continue; + } + if (!virtqueue_dequeue_single_packed_vec(rxvq, + &rx_pkts[nb_rx])) { + nb_rx++; + num--; + continue; + } + break; + }; + + PMD_RX_LOG(DEBUG, "dequeue:%d", num); + + 
rxvq->stats.packets += nb_rx; + + if (likely(vq->vq_free_cnt >= free_cnt)) { + struct rte_mbuf *new_pkts[free_cnt]; + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, + free_cnt) == 0)) { + virtio_recv_refill_packed_vec(rxvq, new_pkts, + free_cnt); + nb_enqueued += free_cnt; + } else { + struct rte_eth_dev *dev = + &rte_eth_devices[rxvq->port_id]; + dev->data->rx_mbuf_alloc_failed += free_cnt; + } + } + + if (likely(nb_enqueued)) { + if (unlikely(virtqueue_kick_prepare_packed(vq))) { + virtqueue_notify(vq); + PMD_RX_LOG(DEBUG, "Notified"); + } + } + + return nb_rx; +} diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 6301c56b2..43e305ecc 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -20,6 +20,7 @@ struct rte_mbuf; #define DEFAULT_RX_FREE_THRESH 32 +#define VIRTIO_MBUF_BURST_SZ 64 /* * Per virtio_ring.h in Linux. * For virtio_pci on SMP, we don't need to order with respect to MMIO @@ -236,7 +237,8 @@ struct vq_desc_extra { void *cookie; uint16_t ndescs; uint16_t next; -}; + uint8_t padding[4]; +} __rte_packed __rte_aligned(16); struct virtqueue { struct virtio_hw *hw; /**< virtio_hw structure pointer. */ From patchwork Thu Apr 23 12:31:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69175 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 382AEA00C2; Thu, 23 Apr 2020 06:56:58 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 975DD1D562; Thu, 23 Apr 2020 06:56:04 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 0E9E91D544 for ; Thu, 23 Apr 2020 06:56:00 +0200 (CEST) IronPort-SDR: 7tAdGk746yeQIlROZjczlEwbaMG6Vd1gWKAxH7CQKxiRNVfz4Cu9lLZQoVvGxnCpAnhm5rR+z0 Oxhiw4YRbIzg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:56:00 -0700 IronPort-SDR: j/Xz7C3gGVbZwMzDErnsb6jyTEVJIQTKxG0F9kBpXlSi6Y0VhWQ0FjH1rvZiqa5ggjclBppPBf PIih+9210zXQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772731" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:55:58 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:31:03 +0800 Message-Id: <20200423123106.78513-7-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 6/9] net/virtio: reuse packed ring xmit functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Move xmit offload and packed ring xmit enqueue function to header file. These functions will be reused by packed ring vectorized Tx function. 
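For reference, a hypothetical caller of the relocated helper (not taken from the patch; assumes rte_ether.h, rte_ip.h and the driver's virtqueue.h are included and that header room was already reserved in front of the packet data):

/* Hypothetical example: prepare the virtio net header for a TCP packet
 * with L4 checksum offload requested.
 */
static void
example_prepare_tx_hdr(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
{
        m->l2_len = sizeof(struct rte_ether_hdr);
        m->l3_len = sizeof(struct rte_ipv4_hdr);
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_TCP_CKSUM;

        /* fills csum_start/csum_offset and sets VIRTIO_NET_HDR_F_NEEDS_CSUM */
        virtqueue_xmit_offload(hdr, m, true);
}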
Signed-off-by: Marvin Liu diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 7b65d0b0a..cf18fe564 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -264,10 +264,6 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq, return i; } -#ifndef DEFAULT_TX_FREE_THRESH -#define DEFAULT_TX_FREE_THRESH 32 -#endif - static void virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num) { @@ -562,68 +558,7 @@ virtio_tso_fix_cksum(struct rte_mbuf *m) } -/* avoid write operation when necessary, to lessen cache issues */ -#define ASSIGN_UNLESS_EQUAL(var, val) do { \ - if ((var) != (val)) \ - (var) = (val); \ -} while (0) - -#define virtqueue_clear_net_hdr(_hdr) do { \ - ASSIGN_UNLESS_EQUAL((_hdr)->csum_start, 0); \ - ASSIGN_UNLESS_EQUAL((_hdr)->csum_offset, 0); \ - ASSIGN_UNLESS_EQUAL((_hdr)->flags, 0); \ - ASSIGN_UNLESS_EQUAL((_hdr)->gso_type, 0); \ - ASSIGN_UNLESS_EQUAL((_hdr)->gso_size, 0); \ - ASSIGN_UNLESS_EQUAL((_hdr)->hdr_len, 0); \ -} while (0) - -static inline void -virtqueue_xmit_offload(struct virtio_net_hdr *hdr, - struct rte_mbuf *cookie, - bool offload) -{ - if (offload) { - if (cookie->ol_flags & PKT_TX_TCP_SEG) - cookie->ol_flags |= PKT_TX_TCP_CKSUM; - - switch (cookie->ol_flags & PKT_TX_L4_MASK) { - case PKT_TX_UDP_CKSUM: - hdr->csum_start = cookie->l2_len + cookie->l3_len; - hdr->csum_offset = offsetof(struct rte_udp_hdr, - dgram_cksum); - hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM; - break; - - case PKT_TX_TCP_CKSUM: - hdr->csum_start = cookie->l2_len + cookie->l3_len; - hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum); - hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM; - break; - - default: - ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0); - ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0); - ASSIGN_UNLESS_EQUAL(hdr->flags, 0); - break; - } - /* TCP Segmentation Offload */ - if (cookie->ol_flags & PKT_TX_TCP_SEG) { - hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ? - VIRTIO_NET_HDR_GSO_TCPV6 : - VIRTIO_NET_HDR_GSO_TCPV4; - hdr->gso_size = cookie->tso_segsz; - hdr->hdr_len = - cookie->l2_len + - cookie->l3_len + - cookie->l4_len; - } else { - ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0); - ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0); - ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0); - } - } -} static inline void virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq, @@ -725,102 +660,6 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq, virtqueue_store_flags_packed(dp, flags, vq->hw->weak_barriers); } -static inline void -virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, - uint16_t needed, int can_push, int in_order) -{ - struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr; - struct vq_desc_extra *dxp; - struct virtqueue *vq = txvq->vq; - struct vring_packed_desc *start_dp, *head_dp; - uint16_t idx, id, head_idx, head_flags; - int16_t head_size = vq->hw->vtnet_hdr_size; - struct virtio_net_hdr *hdr; - uint16_t prev; - bool prepend_header = false; - - id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; - - dxp = &vq->vq_descx[id]; - dxp->ndescs = needed; - dxp->cookie = cookie; - - head_idx = vq->vq_avail_idx; - idx = head_idx; - prev = head_idx; - start_dp = vq->vq_packed.ring.desc; - - head_dp = &vq->vq_packed.ring.desc[idx]; - head_flags = cookie->next ? 
VRING_DESC_F_NEXT : 0; - head_flags |= vq->vq_packed.cached_flags; - - if (can_push) { - /* prepend cannot fail, checked by caller */ - hdr = rte_pktmbuf_mtod_offset(cookie, struct virtio_net_hdr *, - -head_size); - prepend_header = true; - - /* if offload disabled, it is not zeroed below, do it now */ - if (!vq->hw->has_tx_offload) - virtqueue_clear_net_hdr(hdr); - } else { - /* setup first tx ring slot to point to header - * stored in reserved region. - */ - start_dp[idx].addr = txvq->virtio_net_hdr_mem + - RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); - start_dp[idx].len = vq->hw->vtnet_hdr_size; - hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr; - idx++; - if (idx >= vq->vq_nentries) { - idx -= vq->vq_nentries; - vq->vq_packed.cached_flags ^= - VRING_PACKED_DESC_F_AVAIL_USED; - } - } - - virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload); - - do { - uint16_t flags; - - start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq); - start_dp[idx].len = cookie->data_len; - if (prepend_header) { - start_dp[idx].addr -= head_size; - start_dp[idx].len += head_size; - prepend_header = false; - } - - if (likely(idx != head_idx)) { - flags = cookie->next ? VRING_DESC_F_NEXT : 0; - flags |= vq->vq_packed.cached_flags; - start_dp[idx].flags = flags; - } - prev = idx; - idx++; - if (idx >= vq->vq_nentries) { - idx -= vq->vq_nentries; - vq->vq_packed.cached_flags ^= - VRING_PACKED_DESC_F_AVAIL_USED; - } - } while ((cookie = cookie->next) != NULL); - - start_dp[prev].id = id; - - vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); - vq->vq_avail_idx = idx; - - if (!in_order) { - vq->vq_desc_head_idx = dxp->next; - if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END) - vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END; - } - - virtqueue_store_flags_packed(head_dp, head_flags, - vq->hw->weak_barriers); -} - static inline void virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie, uint16_t needed, int use_indirect, int can_push, diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 43e305ecc..18ae34789 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -18,6 +18,7 @@ struct rte_mbuf; +#define DEFAULT_TX_FREE_THRESH 32 #define DEFAULT_RX_FREE_THRESH 32 #define VIRTIO_MBUF_BURST_SZ 64 @@ -562,4 +563,165 @@ virtqueue_notify(struct virtqueue *vq) #define VIRTQUEUE_DUMP(vq) do { } while (0) #endif +/* avoid write operation when necessary, to lessen cache issues */ +#define ASSIGN_UNLESS_EQUAL(var, val) do { \ + typeof(var) var_ = (var); \ + typeof(val) val_ = (val); \ + if ((var_) != (val_)) \ + (var_) = (val_); \ +} while (0) + +#define virtqueue_clear_net_hdr(hdr) do { \ + typeof(hdr) hdr_ = (hdr); \ + ASSIGN_UNLESS_EQUAL((hdr_)->csum_start, 0); \ + ASSIGN_UNLESS_EQUAL((hdr_)->csum_offset, 0); \ + ASSIGN_UNLESS_EQUAL((hdr_)->flags, 0); \ + ASSIGN_UNLESS_EQUAL((hdr_)->gso_type, 0); \ + ASSIGN_UNLESS_EQUAL((hdr_)->gso_size, 0); \ + ASSIGN_UNLESS_EQUAL((hdr_)->hdr_len, 0); \ +} while (0) + +static inline void +virtqueue_xmit_offload(struct virtio_net_hdr *hdr, + struct rte_mbuf *cookie, + bool offload) +{ + if (offload) { + if (cookie->ol_flags & PKT_TX_TCP_SEG) + cookie->ol_flags |= PKT_TX_TCP_CKSUM; + + switch (cookie->ol_flags & PKT_TX_L4_MASK) { + case PKT_TX_UDP_CKSUM: + hdr->csum_start = cookie->l2_len + cookie->l3_len; + hdr->csum_offset = offsetof(struct rte_udp_hdr, + dgram_cksum); + hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM; + break; + + case PKT_TX_TCP_CKSUM: + hdr->csum_start = cookie->l2_len + cookie->l3_len; + 
hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum); + hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM; + break; + + default: + ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0); + ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0); + ASSIGN_UNLESS_EQUAL(hdr->flags, 0); + break; + } + + /* TCP Segmentation Offload */ + if (cookie->ol_flags & PKT_TX_TCP_SEG) { + hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ? + VIRTIO_NET_HDR_GSO_TCPV6 : + VIRTIO_NET_HDR_GSO_TCPV4; + hdr->gso_size = cookie->tso_segsz; + hdr->hdr_len = + cookie->l2_len + + cookie->l3_len + + cookie->l4_len; + } else { + ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0); + ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0); + ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0); + } + } +} + +static inline void +virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie, + uint16_t needed, int can_push, int in_order) +{ + struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr; + struct vq_desc_extra *dxp; + struct virtqueue *vq = txvq->vq; + struct vring_packed_desc *start_dp, *head_dp; + uint16_t idx, id, head_idx, head_flags; + int16_t head_size = vq->hw->vtnet_hdr_size; + struct virtio_net_hdr *hdr; + uint16_t prev; + bool prepend_header = false; + + id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + + dxp = &vq->vq_descx[id]; + dxp->ndescs = needed; + dxp->cookie = cookie; + + head_idx = vq->vq_avail_idx; + idx = head_idx; + prev = head_idx; + start_dp = vq->vq_packed.ring.desc; + + head_dp = &vq->vq_packed.ring.desc[idx]; + head_flags = cookie->next ? VRING_DESC_F_NEXT : 0; + head_flags |= vq->vq_packed.cached_flags; + + if (can_push) { + /* prepend cannot fail, checked by caller */ + hdr = rte_pktmbuf_mtod_offset(cookie, struct virtio_net_hdr *, + -head_size); + prepend_header = true; + + /* if offload disabled, it is not zeroed below, do it now */ + if (!vq->hw->has_tx_offload) + virtqueue_clear_net_hdr(hdr); + } else { + /* setup first tx ring slot to point to header + * stored in reserved region. + */ + start_dp[idx].addr = txvq->virtio_net_hdr_mem + + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); + start_dp[idx].len = vq->hw->vtnet_hdr_size; + hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= + VRING_PACKED_DESC_F_AVAIL_USED; + } + } + + virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload); + + do { + uint16_t flags; + + start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq); + start_dp[idx].len = cookie->data_len; + if (prepend_header) { + start_dp[idx].addr -= head_size; + start_dp[idx].len += head_size; + prepend_header = false; + } + + if (likely(idx != head_idx)) { + flags = cookie->next ? 
VRING_DESC_F_NEXT : 0; + flags |= vq->vq_packed.cached_flags; + start_dp[idx].flags = flags; + } + prev = idx; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= + VRING_PACKED_DESC_F_AVAIL_USED; + } + } while ((cookie = cookie->next) != NULL); + + start_dp[prev].id = id; + + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); + vq->vq_avail_idx = idx; + + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END; + } + + virtqueue_store_flags_packed(head_dp, head_flags, + vq->hw->weak_barriers); +} #endif /* _VIRTQUEUE_H_ */ From patchwork Thu Apr 23 12:31:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69176 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 75A55A00C2; Thu, 23 Apr 2020 06:57:09 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8763B1D582; Thu, 23 Apr 2020 06:56:08 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 24DF71D55A for ; Thu, 23 Apr 2020 06:56:02 +0200 (CEST) IronPort-SDR: I33+mrnM+a+Otrt/FIroqJrgmh6ytq5I/4WR54YWiUA1B03T1J07xM/UtItcoNWjrhJNAMKTpS NGxOirgeuOyg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:56:02 -0700 IronPort-SDR: KqzQYwQ5MUMDx2H5H7g26tvMzHR1Akl7BZD++bbFnif0I9nQTpyv56XkvTOIwzd5YzZjSABXDe YWuo+hybUrhw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772743" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:56:00 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:31:04 +0800 Message-Id: <20200423123106.78513-8-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 7/9] net/virtio: add vectorized packed ring Tx path X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Optimize packed ring Tx path alike Rx path. Split Tx path into batch and single Tx functions. Batch function is further optimized by vector instructions. 
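The batch function only handles the common case: a batch-aligned index with four free descriptors, and four mbufs that are each direct, single-segment, with reference count 1 and enough headroom for the virtio net header; anything else falls back to the single-packet function. A scalar sketch of that per-mbuf test, modeled on the can_push condition in virtqueue_enqueue_single_packed_vec (helper name hypothetical):

/* Hypothetical scalar helper equivalent to the per-mbuf conditions the
 * vectorized batch path requires before it will enqueue four packets.
 */
static inline int
tx_pkt_fits_batch(const struct rte_mbuf *m, uint16_t hdr_size)
{
        return rte_mbuf_refcnt_read(m) == 1 &&      /* not shared          */
               RTE_MBUF_DIRECT(m) &&                /* not indirect/attached */
               m->nb_segs == 1 &&                   /* single segment      */
               rte_pktmbuf_headroom(m) >= hdr_size; /* room for net header */
}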
Signed-off-by: Marvin Liu diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h index 5c112cac7..b7d52d497 100644 --- a/drivers/net/virtio/virtio_ethdev.h +++ b/drivers/net/virtio/virtio_ethdev.h @@ -108,6 +108,9 @@ uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t virtio_recv_pkts_packed_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t virtio_xmit_pkts_packed_vec(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + int eth_virtio_dev_init(struct rte_eth_dev *eth_dev); void virtio_interrupt_handler(void *param); diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index cf18fe564..f82fe8d64 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -2175,3 +2175,11 @@ virtio_recv_pkts_packed_vec(void *rx_queue __rte_unused, { return 0; } + +__rte_weak uint16_t +virtio_xmit_pkts_packed_vec(void *tx_queue __rte_unused, + struct rte_mbuf **tx_pkts __rte_unused, + uint16_t nb_pkts __rte_unused) +{ + return 0; +} diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.c b/drivers/net/virtio/virtio_rxtx_packed_avx.c index 3380f1da5..c023ace4e 100644 --- a/drivers/net/virtio/virtio_rxtx_packed_avx.c +++ b/drivers/net/virtio/virtio_rxtx_packed_avx.c @@ -23,6 +23,24 @@ #define PACKED_FLAGS_MASK ((0ULL | VRING_PACKED_DESC_F_AVAIL_USED) << \ FLAGS_BITS_OFFSET) +/* reference count offset in mbuf rearm data */ +#define REFCNT_BITS_OFFSET ((offsetof(struct rte_mbuf, refcnt) - \ + offsetof(struct rte_mbuf, rearm_data)) * BYTE_SIZE) +/* segment number offset in mbuf rearm data */ +#define SEG_NUM_BITS_OFFSET ((offsetof(struct rte_mbuf, nb_segs) - \ + offsetof(struct rte_mbuf, rearm_data)) * BYTE_SIZE) + +/* default rearm data */ +#define DEFAULT_REARM_DATA (1ULL << SEG_NUM_BITS_OFFSET | \ + 1ULL << REFCNT_BITS_OFFSET) + +/* id bits offset in packed ring desc higher 64bits */ +#define ID_BITS_OFFSET ((offsetof(struct vring_packed_desc, id) - \ + offsetof(struct vring_packed_desc, len)) * BYTE_SIZE) + +/* net hdr short size mask */ +#define NET_HDR_MASK 0x3F + #define PACKED_BATCH_SIZE (RTE_CACHE_LINE_SIZE / \ sizeof(struct vring_packed_desc)) #define PACKED_BATCH_MASK (PACKED_BATCH_SIZE - 1) @@ -47,6 +65,47 @@ for (iter = val; iter < num; iter++) #endif +static inline void +virtio_xmit_cleanup_packed_vec(struct virtqueue *vq) +{ + struct vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct vq_desc_extra *dxp; + uint16_t used_idx, id, curr_id, free_cnt = 0; + uint16_t size = vq->vq_nentries; + struct rte_mbuf *mbufs[size]; + uint16_t nb_mbuf = 0, i; + + used_idx = vq->vq_used_cons_idx; + + if (!desc_is_used(&desc[used_idx], vq)) + return; + + id = desc[used_idx].id; + + do { + curr_id = used_idx; + dxp = &vq->vq_descx[used_idx]; + used_idx += dxp->ndescs; + free_cnt += dxp->ndescs; + + if (dxp->cookie != NULL) { + mbufs[nb_mbuf] = dxp->cookie; + dxp->cookie = NULL; + nb_mbuf++; + } + + if (used_idx >= size) { + used_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + } while (curr_id != id); + + for (i = 0; i < nb_mbuf; i++) + rte_pktmbuf_free(mbufs[i]); + + vq->vq_used_cons_idx = used_idx; + vq->vq_free_cnt += free_cnt; +} static inline void virtio_update_batch_stats(struct virtnet_stats *stats, @@ -60,6 +119,238 @@ virtio_update_batch_stats(struct virtnet_stats *stats, stats->bytes += pkt_len3; stats->bytes += pkt_len4; } + +static inline int +virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq, + struct rte_mbuf **tx_pkts) +{ + 
struct virtqueue *vq = txvq->vq; + uint16_t head_size = vq->hw->vtnet_hdr_size; + uint16_t idx = vq->vq_avail_idx; + struct virtio_net_hdr *hdr; + uint16_t i, cmp; + + if (vq->vq_avail_idx & PACKED_BATCH_MASK) + return -1; + + if (unlikely((idx + PACKED_BATCH_SIZE) > vq->vq_nentries)) + return -1; + + /* Load four mbufs rearm data */ + RTE_BUILD_BUG_ON(REFCNT_BITS_OFFSET >= 64); + RTE_BUILD_BUG_ON(SEG_NUM_BITS_OFFSET >= 64); + __m256i mbufs = _mm256_set_epi64x(*tx_pkts[3]->rearm_data, + *tx_pkts[2]->rearm_data, + *tx_pkts[1]->rearm_data, + *tx_pkts[0]->rearm_data); + + /* refcnt=1 and nb_segs=1 */ + __m256i mbuf_ref = _mm256_set1_epi64x(DEFAULT_REARM_DATA); + __m256i head_rooms = _mm256_set1_epi16(head_size); + + /* Check refcnt and nb_segs */ + const __mmask16 mask = 0x6 | 0x6 << 4 | 0x6 << 8 | 0x6 << 12; + cmp = _mm256_mask_cmpneq_epu16_mask(mask, mbufs, mbuf_ref); + if (unlikely(cmp)) + return -1; + + /* Check headroom is enough */ + const __mmask16 data_mask = 0x1 | 0x1 << 4 | 0x1 << 8 | 0x1 << 12; + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) != + offsetof(struct rte_mbuf, rearm_data)); + cmp = _mm256_mask_cmplt_epu16_mask(data_mask, mbufs, head_rooms); + if (unlikely(cmp)) + return -1; + + __m512i v_descx = _mm512_set_epi64(0x1, (uint64_t)tx_pkts[3], + 0x1, (uint64_t)tx_pkts[2], + 0x1, (uint64_t)tx_pkts[1], + 0x1, (uint64_t)tx_pkts[0]); + + _mm512_storeu_si512((void *)&vq->vq_descx[idx], v_descx); + + virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + tx_pkts[i]->data_off -= head_size; + tx_pkts[i]->data_len += head_size; + } + +#ifdef RTE_VIRTIO_USER + __m512i descs_base = _mm512_set_epi64(tx_pkts[3]->data_len, + (uint64_t)(*(uintptr_t *)((uintptr_t)tx_pkts[3])), + tx_pkts[2]->data_len, + (uint64_t)(*(uintptr_t *)((uintptr_t)tx_pkts[2])), + tx_pkts[1]->data_len, + (uint64_t)(*(uintptr_t *)((uintptr_t)tx_pkts[1])), + tx_pkts[0]->data_len, + (uint64_t)(*(uintptr_t *)((uintptr_t)tx_pkts[0]))); +#else + __m512i descs_base = _mm512_set_epi64(tx_pkts[3]->data_len, + tx_pkts[3]->buf_iova, + tx_pkts[2]->data_len, + tx_pkts[2]->buf_iova, + tx_pkts[1]->data_len, + tx_pkts[1]->buf_iova, + tx_pkts[0]->data_len, + tx_pkts[0]->buf_iova); +#endif + + /* id offset and data offset */ + __m512i data_offsets = _mm512_set_epi64((uint64_t)3 << ID_BITS_OFFSET, + tx_pkts[3]->data_off, + (uint64_t)2 << ID_BITS_OFFSET, + tx_pkts[2]->data_off, + (uint64_t)1 << ID_BITS_OFFSET, + tx_pkts[1]->data_off, + 0, tx_pkts[0]->data_off); + + __m512i new_descs = _mm512_add_epi64(descs_base, data_offsets); + + uint64_t flags_temp = (uint64_t)idx << ID_BITS_OFFSET | + (uint64_t)vq->vq_packed.cached_flags << FLAGS_BITS_OFFSET; + + /* flags offset and guest virtual address offset */ +#ifdef RTE_VIRTIO_USER + __m128i flag_offset = _mm_set_epi64x(flags_temp, (uint64_t)vq->offset); +#else + __m128i flag_offset = _mm_set_epi64x(flags_temp, 0); +#endif + __m512i v_offset = _mm512_broadcast_i32x4(flag_offset); + + __m512i v_desc = _mm512_add_epi64(new_descs, v_offset); + + if (!vq->hw->has_tx_offload) { + __m128i all_mask = _mm_set1_epi16(0xFFFF); + virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + hdr = rte_pktmbuf_mtod_offset(tx_pkts[i], + struct virtio_net_hdr *, -head_size); + __m128i v_hdr = _mm_loadu_si128((void *)hdr); + if (unlikely(_mm_mask_test_epi16_mask(NET_HDR_MASK, + v_hdr, all_mask))) { + __m128i all_zero = _mm_setzero_si128(); + _mm_mask_storeu_epi16((void *)hdr, + NET_HDR_MASK, all_zero); + } + } + } else { + virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) { + hdr = 
rte_pktmbuf_mtod_offset(tx_pkts[i], + struct virtio_net_hdr *, -head_size); + virtqueue_xmit_offload(hdr, tx_pkts[i], true); + } + } + + /* Enqueue Packet buffers */ + _mm512_storeu_si512((void *)&vq->vq_packed.ring.desc[idx], v_desc); + + virtio_update_batch_stats(&txvq->stats, tx_pkts[0]->pkt_len, + tx_pkts[1]->pkt_len, tx_pkts[2]->pkt_len, + tx_pkts[3]->pkt_len); + + vq->vq_avail_idx += PACKED_BATCH_SIZE; + vq->vq_free_cnt -= PACKED_BATCH_SIZE; + + if (vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= + VRING_PACKED_DESC_F_AVAIL_USED; + } + + return 0; +} + +static inline int +virtqueue_enqueue_single_packed_vec(struct virtnet_tx *txvq, + struct rte_mbuf *txm) +{ + struct virtqueue *vq = txvq->vq; + struct virtio_hw *hw = vq->hw; + uint16_t hdr_size = hw->vtnet_hdr_size; + uint16_t slots, can_push; + int16_t need; + + /* How many main ring entries are needed to this Tx? + * any_layout => number of segments + * default => number of segments + 1 + */ + can_push = rte_mbuf_refcnt_read(txm) == 1 && + RTE_MBUF_DIRECT(txm) && + txm->nb_segs == 1 && + rte_pktmbuf_headroom(txm) >= hdr_size; + + slots = txm->nb_segs + !can_push; + need = slots - vq->vq_free_cnt; + + /* Positive value indicates it need free vring descriptors */ + if (unlikely(need > 0)) { + virtio_xmit_cleanup_packed_vec(vq); + need = slots - vq->vq_free_cnt; + if (unlikely(need > 0)) { + PMD_TX_LOG(ERR, + "No free tx descriptors to transmit"); + return -1; + } + } + + /* Enqueue Packet buffers */ + virtqueue_enqueue_xmit_packed(txvq, txm, slots, can_push, 1); + + txvq->stats.bytes += txm->pkt_len; + return 0; +} + +uint16_t +virtio_xmit_pkts_packed_vec(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct virtnet_tx *txvq = tx_queue; + struct virtqueue *vq = txvq->vq; + struct virtio_hw *hw = vq->hw; + uint16_t nb_tx = 0; + uint16_t remained; + + if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts)) + return nb_tx; + + if (unlikely(nb_pkts < 1)) + return nb_pkts; + + PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts); + + if (vq->vq_free_cnt <= vq->vq_nentries - vq->vq_free_thresh) + virtio_xmit_cleanup_packed_vec(vq); + + remained = RTE_MIN(nb_pkts, vq->vq_free_cnt); + + while (remained) { + if (remained >= PACKED_BATCH_SIZE) { + if (!virtqueue_enqueue_batch_packed_vec(txvq, + &tx_pkts[nb_tx])) { + nb_tx += PACKED_BATCH_SIZE; + remained -= PACKED_BATCH_SIZE; + continue; + } + } + if (!virtqueue_enqueue_single_packed_vec(txvq, + tx_pkts[nb_tx])) { + nb_tx++; + remained--; + continue; + } + break; + }; + + txvq->stats.packets += nb_tx; + + if (likely(nb_tx)) { + if (unlikely(virtqueue_kick_prepare_packed(vq))) { + virtqueue_notify(vq); + PMD_TX_LOG(DEBUG, "Notified backend after xmit"); + } + } + + return nb_tx; +} + /* Optionally fill offload information in structure */ static inline int virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr) From patchwork Thu Apr 23 12:31:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69177 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 99B72A00C2; Thu, 23 Apr 2020 06:57:17 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9BCB01D590; Thu, 23 Apr 2020 
06:56:09 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id E96651D56C for ; Thu, 23 Apr 2020 06:56:04 +0200 (CEST) IronPort-SDR: IaKPZ4cuD3UQEeQbJb5FoybDPxauXmXArk5iMCPm+HplOZoD5bNozLtrVwQ6oK5sfv3pBo3ZhG nUnRNWZkA7qg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:56:04 -0700 IronPort-SDR: +UIx6D27Qzq0PyXTZrHnejouXivYvAZjbFJxSvIbVyE5pJ6emxvCRhE6a8EcW6HWc2x+Bmn/Aj PlO2Ms/Q8WdQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772753" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:56:02 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:31:05 +0800 Message-Id: <20200423123106.78513-9-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 8/9] net/virtio: add election for vectorized path X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Rewrite vectorized path selection logic. Default setting comes from RTE_LIBRTE_VIRTIO_INC_VECTOR option. Paths criteria will be checked as listed below. Packed ring vectorized path will be selected when: vectorized option is enabled AVX512F and required extensions are supported by compiler and host virtio VERSION_1 and IN_ORDER features are negotiated virtio mergeable feature is not negotiated LRO offloading is disabled Split ring vectorized rx path will be selected when: vectorized option is enabled virtio mergeable and IN_ORDER features are not negotiated LRO, chksum and vlan strip offloading are disabled Signed-off-by: Marvin Liu diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index 361c834a9..c700af6be 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -1522,9 +1522,12 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev) if (vtpci_packed_queue(hw)) { PMD_INIT_LOG(INFO, "virtio: using packed ring %s Tx path on port %u", - hw->use_inorder_tx ? "inorder" : "standard", + hw->use_vec_tx ? 
"vectorized" : "standard", eth_dev->data->port_id); - eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed; + if (hw->use_vec_tx) + eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed_vec; + else + eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed; } else { if (hw->use_inorder_tx) { PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u", @@ -1538,7 +1541,13 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev) } if (vtpci_packed_queue(hw)) { - if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) { + if (hw->use_vec_rx) { + PMD_INIT_LOG(INFO, + "virtio: using packed ring vectorized Rx path on port %u", + eth_dev->data->port_id); + eth_dev->rx_pkt_burst = + &virtio_recv_pkts_packed_vec; + } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) { PMD_INIT_LOG(INFO, "virtio: using packed ring mergeable buffer Rx path on port %u", eth_dev->data->port_id); @@ -1950,6 +1959,10 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev) goto err_virtio_init; hw->opened = true; +#ifdef RTE_LIBRTE_VIRTIO_INC_VECTOR + hw->use_vec_rx = 1; + hw->use_vec_tx = 1; +#endif return 0; @@ -2257,33 +2270,63 @@ virtio_dev_configure(struct rte_eth_dev *dev) return -EBUSY; } - hw->use_vec_rx = 1; + if (vtpci_packed_queue(hw)) { +#if defined RTE_ARCH_X86 + if ((hw->use_vec_rx || hw->use_vec_tx) && + (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) || + !vtpci_with_feature(hw, VIRTIO_F_IN_ORDER) || + !vtpci_with_feature(hw, VIRTIO_F_VERSION_1))) { + PMD_DRV_LOG(INFO, + "disabled packed ring vectorization for requirements are not met"); + hw->use_vec_rx = 0; + hw->use_vec_tx = 0; + } +#endif - if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) { - hw->use_inorder_tx = 1; - hw->use_inorder_rx = 1; - hw->use_vec_rx = 0; - } + if (hw->use_vec_rx) { + if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(INFO, + "disabled packed ring vectorized rx for mrg_rxbuf enabled"); + hw->use_vec_rx = 0; + } - if (vtpci_packed_queue(hw)) { - hw->use_vec_rx = 0; - hw->use_inorder_rx = 0; - } + if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) { + PMD_DRV_LOG(INFO, + "disabled packed ring vectorized rx for TCP_LRO enabled"); + hw->use_vec_rx = 0; + } + } + } else { + if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) { + hw->use_inorder_tx = 1; + hw->use_inorder_rx = 1; + hw->use_vec_rx = 0; + } + if (hw->use_vec_rx) { #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM - if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) { - hw->use_vec_rx = 0; - } + if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) { + PMD_DRV_LOG(INFO, + "disabled split ring vectorization for requirements are not met"); + hw->use_vec_rx = 0; + } #endif - if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) { - hw->use_vec_rx = 0; - } + if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(INFO, + "disabled split ring vectorized rx for mrg_rxbuf enabled"); + hw->use_vec_rx = 0; + } - if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | - DEV_RX_OFFLOAD_TCP_CKSUM | - DEV_RX_OFFLOAD_TCP_LRO | - DEV_RX_OFFLOAD_VLAN_STRIP)) - hw->use_vec_rx = 0; + if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_TCP_LRO | + DEV_RX_OFFLOAD_VLAN_STRIP)) { + PMD_DRV_LOG(INFO, + "disabled split ring vectorized rx for offloading enabled"); + hw->use_vec_rx = 0; + } + } + } return 0; } From patchwork Thu Apr 23 12:31:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marvin Liu X-Patchwork-Id: 69178 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: 
patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5AEBFA00C2; Thu, 23 Apr 2020 06:57:28 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D58AB1D5A5; Thu, 23 Apr 2020 06:56:12 +0200 (CEST) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by dpdk.org (Postfix) with ESMTP id 133D81D582 for ; Thu, 23 Apr 2020 06:56:06 +0200 (CEST) IronPort-SDR: sXv07l+LqhZXqzQ0kEcGewHvw7u5Ijb/Q/A9cSyCDfeE7T8wOcopVFbz6oHFbFQ0lcnTvLzmvh EGdiY7XO5vMw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2020 21:56:06 -0700 IronPort-SDR: T2xiXiX+pEV+H++U9y3OYoT0aREHou0fAMFQBfdqu4vAQZhuclL6L09moxeIet07Mm1UB4kcaI J0vI12bAlWGA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,305,1583222400"; d="scan'208";a="456772766" Received: from npg-dpdk-virtual-marvin-dev.sh.intel.com ([10.67.119.56]) by fmsmga005.fm.intel.com with ESMTP; 22 Apr 2020 21:56:04 -0700 From: Marvin Liu To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com Cc: harry.van.haaren@intel.com, dev@dpdk.org, Marvin Liu Date: Thu, 23 Apr 2020 20:31:06 +0800 Message-Id: <20200423123106.78513-10-yong.liu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200423123106.78513-1-yong.liu@intel.com> References: <20200313174230.74661-1-yong.liu@intel.com> <20200423123106.78513-1-yong.liu@intel.com> Subject: [dpdk-dev] [PATCH v8 9/9] doc: add packed vectorized path X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Document packed virtqueue vectorized path selection logic in virtio net PMD. Signed-off-by: Marvin Liu diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst index 6286286db..4bd46f83e 100644 --- a/doc/guides/nics/virtio.rst +++ b/doc/guides/nics/virtio.rst @@ -417,6 +417,10 @@ Below devargs are supported by the virtio-user vdev: rte_eth_link_get_nowait function. (Default: 10000 (10G)) +#. ``vectorized``: + + It is used to enable virtio device vectorized path. + (Default: 0 (disabled)) Virtio paths Selection and Usage -------------------------------- @@ -469,6 +473,13 @@ according to below configuration: both negotiated, this path will be selected. #. Packed virtqueue in-order non-mergeable path: If in-order feature is negotiated and Rx mergeable is not negotiated, this path will be selected. +#. Packed virtqueue vectorized Rx path: If building and running environment support + AVX512 && in-order feature is negotiated && Rx mergeable is not negotiated && + TCP_LRO Rx offloading is disabled && vectorized option enabled, + this path will be selected. +#. Packed virtqueue vectorized Tx path: If building and running environment support + AVX512 && in-order feature is negotiated && vectorized option enabled, + this path will be selected. 
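[Editor's note: the election rules listed above can be condensed into a small decision function. The sketch below is illustrative only and is not driver code; it assumes the criteria exactly as stated in this series (vectorized option, AVX512F, IN_ORDER and VERSION_1 for both directions, plus no mergeable buffers and no TCP LRO for Rx). The names vec_caps and elect_packed_vec_paths are hypothetical.]

/* Standalone model of the packed ring vectorized path election. */
#include <stdbool.h>
#include <stdio.h>

struct vec_caps {
	bool vectorized_devarg;  /* "vectorized" option enabled */
	bool avx512f;            /* host CPU supports AVX512F */
	bool in_order;           /* VIRTIO_F_IN_ORDER negotiated */
	bool version_1;          /* VIRTIO_F_VERSION_1 negotiated */
	bool mrg_rxbuf;          /* VIRTIO_NET_F_MRG_RXBUF negotiated */
	bool lro_offload;        /* TCP LRO Rx offload requested */
};

static void
elect_packed_vec_paths(const struct vec_caps *c, bool *use_vec_rx,
		bool *use_vec_tx)
{
	/* Requirements common to vectorized Rx and Tx. */
	bool common = c->vectorized_devarg && c->avx512f &&
		      c->in_order && c->version_1;

	*use_vec_tx = common;
	/* Rx additionally needs non-mergeable buffers and LRO disabled. */
	*use_vec_rx = common && !c->mrg_rxbuf && !c->lro_offload;
}

int
main(void)
{
	struct vec_caps c = {
		.vectorized_devarg = true, .avx512f = true,
		.in_order = true, .version_1 = true,
		.mrg_rxbuf = false, .lro_offload = false,
	};
	bool rx, tx;

	elect_packed_vec_paths(&c, &rx, &tx);
	printf("packed vectorized Rx: %d, Tx: %d\n", rx, tx);
	return 0;
}

[If any condition fails, the driver falls back to the corresponding scalar packed ring path, as shown in the set_rxtx_funcs() and virtio_dev_configure() hunks of patch 8/9.]
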
Rx/Tx callbacks of each Virtio path ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -491,6 +502,8 @@ are shown in below table: Packed virtqueue non-meregable path virtio_recv_pkts_packed virtio_xmit_pkts_packed Packed virtqueue in-order mergeable path virtio_recv_mergeable_pkts_packed virtio_xmit_pkts_packed Packed virtqueue in-order non-mergeable path virtio_recv_pkts_packed virtio_xmit_pkts_packed + Packed virtqueue vectorized Rx path virtio_recv_pkts_packed_vec virtio_xmit_pkts_packed + Packed virtqueue vectorized Tx path virtio_recv_pkts_packed virtio_xmit_pkts_packed_vec ============================================ ================================= ======================== Virtio paths Support Status from Release to Release @@ -508,20 +521,22 @@ All virtio paths support status are shown in below table: .. table:: Virtio Paths and Releases - ============================================ ============= ============= ============= - Virtio paths 16.11 ~ 18.05 18.08 ~ 18.11 19.02 ~ 19.11 - ============================================ ============= ============= ============= - Split virtqueue mergeable path Y Y Y - Split virtqueue non-mergeable path Y Y Y - Split virtqueue vectorized Rx path Y Y Y - Split virtqueue simple Tx path Y N N - Split virtqueue in-order mergeable path Y Y - Split virtqueue in-order non-mergeable path Y Y - Packed virtqueue mergeable path Y - Packed virtqueue non-mergeable path Y - Packed virtqueue in-order mergeable path Y - Packed virtqueue in-order non-mergeable path Y - ============================================ ============= ============= ============= + ============================================ ============= ============= ============= ======= + Virtio paths 16.11 ~ 18.05 18.08 ~ 18.11 19.02 ~ 19.11 20.05 ~ + ============================================ ============= ============= ============= ======= + Split virtqueue mergeable path Y Y Y Y + Split virtqueue non-mergeable path Y Y Y Y + Split virtqueue vectorized Rx path Y Y Y Y + Split virtqueue simple Tx path Y N N N + Split virtqueue in-order mergeable path Y Y Y + Split virtqueue in-order non-mergeable path Y Y Y + Packed virtqueue mergeable path Y Y + Packed virtqueue non-mergeable path Y Y + Packed virtqueue in-order mergeable path Y Y + Packed virtqueue in-order non-mergeable path Y Y + Packed virtqueue vectorized Rx path Y + Packed virtqueue vectorized Tx path Y + ============================================ ============= ============= ============= ======= QEMU Support Status ~~~~~~~~~~~~~~~~~~~