From patchwork Thu Sep 19 16:36:30 2019
X-Patchwork-Submitter: Marvin Liu
X-Patchwork-Id: 59383
From: Marvin Liu
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu
Date: Fri, 20 Sep 2019 00:36:30 +0800
Message-Id: <20190919163643.24130-4-yong.liu@intel.com>
In-Reply-To: <20190919163643.24130-1-yong.liu@intel.com>
References: <20190905161421.55981-2-yong.liu@intel.com>
 <20190919163643.24130-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2 03/16] vhost: add burst enqueue function for packed ring

The burst enqueue function first checks whether the descriptors to be used
are cache aligned, and verifies its remaining prerequisites up front. It does
not support chained mbufs; those packets are handled by the single packet
enqueue function.

Signed-off-by: Marvin Liu

---

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 5074226f0..67889c80a 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -39,6 +39,9 @@
 
 #define VHOST_LOG_CACHE_NR 32
 
+#define PACKED_DESCS_BURST (RTE_CACHE_LINE_SIZE / \
+			sizeof(struct vring_packed_desc))
+
 #ifdef SUPPORT_GCC_UNROLL_PRAGMA
 #define PRAGMA_PARAM "GCC unroll 4"
 #endif
@@ -57,6 +60,8 @@
 #define UNROLL_PRAGMA(param) do {} while(0);
 #endif
 
+#define PACKED_BURST_MASK (PACKED_DESCS_BURST - 1)
+
 /**
  * Structure contains buffer address, length and descriptor index
  * from vring to do scatter RX.
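
[Editorial note, not part of the patch: struct vring_packed_desc is 16 bytes
(8-byte addr, 4-byte len, 2-byte id, 2-byte flags), so with the usual 64-byte
cache line PACKED_DESCS_BURST evaluates to 4 and PACKED_BURST_MASK to 3; the
burst path therefore runs only when last_avail_idx is a multiple of 4, i.e.
when the four descriptors it touches share one cache line. The standalone
sketch below, which assumes a 64-byte RTE_CACHE_LINE_SIZE, just spells that
arithmetic out.]

/* Illustrative sketch only -- assumes a 64-byte cache line, as on x86. */
#include <stdint.h>

struct vring_packed_desc {	/* layout from the virtio 1.1 spec */
	uint64_t addr;		/* 8 bytes */
	uint32_t len;		/* 4 bytes */
	uint16_t id;		/* 2 bytes */
	uint16_t flags;		/* 2 bytes -> 16 bytes total */
};

#define RTE_CACHE_LINE_SIZE 64	/* assumption for this sketch */
#define PACKED_DESCS_BURST (RTE_CACHE_LINE_SIZE / \
			sizeof(struct vring_packed_desc))
#define PACKED_BURST_MASK (PACKED_DESCS_BURST - 1)

_Static_assert(PACKED_DESCS_BURST == 4, "4 descriptors per cache line");
_Static_assert((8 & PACKED_BURST_MASK) == 0, "avail index 8 is burst aligned");
_Static_assert((9 & PACKED_BURST_MASK) != 0, "avail index 9 is not aligned");
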
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2b5c47145..c664b27c5 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -895,6 +895,84 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return pkt_idx;
 }
 
+static __rte_unused __rte_always_inline int
+virtio_dev_rx_burst_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+	 struct rte_mbuf **pkts)
+{
+	bool wrap_counter = vq->avail_wrap_counter;
+	struct vring_packed_desc *descs = vq->desc_packed;
+	uint16_t avail_idx = vq->last_avail_idx;
+
+	uint64_t desc_addrs[PACKED_DESCS_BURST];
+	struct virtio_net_hdr_mrg_rxbuf *hdrs[PACKED_DESCS_BURST];
+	uint32_t buf_offset = dev->vhost_hlen;
+	uint64_t lens[PACKED_DESCS_BURST];
+
+	uint16_t i;
+
+	if (unlikely(avail_idx & PACKED_BURST_MASK))
+		return -1;
+
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++) {
+		if (unlikely(pkts[i]->next != NULL))
+			return -1;
+		if (unlikely(!desc_is_avail(&descs[avail_idx + i],
+					    wrap_counter)))
+			return -1;
+	}
+
+	rte_smp_rmb();
+
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++)
+		lens[i] = descs[avail_idx + i].len;
+
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++) {
+		if (unlikely(pkts[i]->pkt_len > (lens[i] - buf_offset)))
+			return -1;
+	}
+
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++)
+		desc_addrs[i] = vhost_iova_to_vva(dev, vq,
+						  descs[avail_idx + i].addr,
+						  &lens[i],
+						  VHOST_ACCESS_RW);
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++) {
+		if (unlikely(lens[i] != descs[avail_idx + i].len))
+			return -1;
+	}
+
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++) {
+		rte_prefetch0((void *)(uintptr_t)desc_addrs[i]);
+		hdrs[i] = (struct virtio_net_hdr_mrg_rxbuf *)desc_addrs[i];
+		lens[i] = pkts[i]->pkt_len + dev->vhost_hlen;
+	}
+
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++)
+		virtio_enqueue_offload(pkts[i], &hdrs[i]->hdr);
+
+	vq->last_avail_idx += PACKED_DESCS_BURST;
+	if (vq->last_avail_idx >= vq->size) {
+		vq->last_avail_idx -= vq->size;
+		vq->avail_wrap_counter ^= 1;
+	}
+
+	UNROLL_PRAGMA(PRAGMA_PARAM)
+	for (i = 0; i < PACKED_DESCS_BURST; i++) {
+		rte_memcpy((void *)(uintptr_t)(desc_addrs[i] + buf_offset),
+			   rte_pktmbuf_mtod_offset(pkts[i], void *, 0),
+			   pkts[i]->pkt_len);
+	}
+
+	return 0;
+}
+
 static __rte_unused int16_t
 virtio_dev_rx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	 struct rte_mbuf *pkt)
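
[Editorial note, not part of the patch: below is a minimal sketch of how a
caller could combine the two paths. The surrounding loop is an assumption
about how later patches in this series wire the burst function in, not
something introduced here. virtio_dev_rx_burst_packed() returns -1 whenever
any prerequisite fails (unaligned avail index, chained mbuf, descriptor not
available, buffer too short), so the caller simply falls back to
virtio_dev_rx_single_packed() for that packet.]

/* Hypothetical caller, for illustration only. */
static uint32_t
virtio_dev_rx_packed_sketch(struct virtio_net *dev, struct vhost_virtqueue *vq,
	 struct rte_mbuf **pkts, uint32_t count)
{
	uint32_t pkt_idx = 0;

	while (pkt_idx < count) {
		/* Fast path: a whole cache line of descriptors at once. */
		if (count - pkt_idx >= PACKED_DESCS_BURST &&
		    !virtio_dev_rx_burst_packed(dev, vq, &pkts[pkt_idx])) {
			pkt_idx += PACKED_DESCS_BURST;
			continue;
		}

		/* Slow path: one packet at a time, handles chained mbufs. */
		if (virtio_dev_rx_single_packed(dev, vq, pkts[pkt_idx]) < 0)
			break;
		pkt_idx++;
	}

	return pkt_idx;
}
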