From patchwork Wed Oct 9 13:38:44 2019
X-Patchwork-Submitter: Marvin Liu
X-Patchwork-Id: 60742
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Marvin Liu
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com,
 stephen@networkplumber.org, gavin.hu@arm.com
Cc: dev@dpdk.org, Marvin Liu
Date: Wed, 9 Oct 2019 21:38:44 +0800
Message-Id: <20191009133849.69002-10-yong.liu@intel.com>
In-Reply-To: <20191009133849.69002-1-yong.liu@intel.com>
References: <20190925171329.63734-1-yong.liu@intel.com>
 <20191009133849.69002-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v4 09/14] vhost: split enqueue and dequeue flush
 functions

Vhost enqueue descriptors are updated in batches, while vhost dequeue
descriptors are buffered; in the dequeue path only the first descriptor
of a burst is buffered. Because of these differences, split the shared
flush helper into separate vhost enqueue and dequeue flush functions.
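For illustration only, below is a minimal standalone sketch of the two
flush policies this patch separates. The names (toy_vq, toy_desc,
toy_flush_enqueue_batch, toy_flush_dequeue_head, TOY_*) are invented
stand-ins for this example, not the vhost structures or helpers touched
by the diff; the real logic lives in
flush_enqueue_shadow_used_ring_packed() and
flush_dequeue_shadow_used_ring_packed() in the patch itself.

/*
 * Toy model, not DPDK code: the enqueue side publishes a whole batch of
 * used descriptors at once, while the dequeue side keeps buffering used
 * entries and publishes only the buffered head descriptor once enough
 * entries have accumulated.
 */
#include <stdint.h>

#define TOY_BATCH_SIZE 4
#define TOY_RING_SIZE  256

struct toy_desc {
        uint16_t id;
        uint16_t flags;
};

struct toy_vq {
        struct toy_desc desc[TOY_RING_SIZE];
        uint16_t last_used_idx;
        uint16_t dequeue_shadow_head;
};

/* Enqueue-style flush: write all ids, then make the batch visible via flags. */
static void
toy_flush_enqueue_batch(struct toy_vq *vq, const uint16_t *ids, uint16_t flags)
{
        uint16_t i;

        for (i = 0; i < TOY_BATCH_SIZE; i++)
                vq->desc[(vq->last_used_idx + i) % TOY_RING_SIZE].id = ids[i];

        /* stands in for rte_smp_wmb(): ids must land before flags */
        __atomic_thread_fence(__ATOMIC_RELEASE);

        for (i = 0; i < TOY_BATCH_SIZE; i++)
                vq->desc[(vq->last_used_idx + i) % TOY_RING_SIZE].flags = flags;

        vq->last_used_idx = (vq->last_used_idx + TOY_BATCH_SIZE) % TOY_RING_SIZE;
}

/*
 * Dequeue-style flush: only the buffered head descriptor is written back,
 * and only once the number of buffered used entries approaches ring size.
 */
static void
toy_flush_dequeue_head(struct toy_vq *vq, uint16_t head_id, uint16_t head_flags)
{
        int16_t buffered = (int16_t)(vq->last_used_idx - vq->dequeue_shadow_head);

        if (buffered <= 0)
                buffered += TOY_RING_SIZE;

        if ((uint16_t)buffered < TOY_RING_SIZE - TOY_BATCH_SIZE)
                return; /* keep buffering used descriptors */

        vq->desc[vq->dequeue_shadow_head].id = head_id;
        __atomic_thread_fence(__ATOMIC_RELEASE);
        vq->desc[vq->dequeue_shadow_head].flags = head_flags;
        vq->dequeue_shadow_head = vq->last_used_idx;
}

In both sketched paths the ordering is the same as in the patch:
descriptor ids are written first, and the flags that hand descriptors
back are written only after a write barrier.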
Signed-off-by: Marvin Liu

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 8f7209f83..1b0fa2c64 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -92,8 +92,8 @@ update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-flush_shadow_used_ring_packed(struct virtio_net *dev,
-                        struct vhost_virtqueue *vq)
+flush_enqueue_shadow_used_ring_packed(struct virtio_net *dev,
+                                      struct vhost_virtqueue *vq)
 {
         int i;
         uint16_t used_idx = vq->last_used_idx;
@@ -158,6 +158,32 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
         vhost_log_cache_sync(dev, vq);
 }
 
+static __rte_always_inline void
+flush_dequeue_shadow_used_ring_packed(struct virtio_net *dev,
+                                      struct vhost_virtqueue *vq)
+{
+        uint16_t head_idx = vq->dequeue_shadow_head;
+        uint16_t head_flags;
+        struct vring_used_elem_packed *used_elem = &vq->shadow_used_packed[0];
+
+        if (used_elem->used_wrap_counter)
+                head_flags = PACKED_TX_USED_FLAG;
+        else
+                head_flags = PACKED_TX_USED_WRAP_FLAG;
+
+        vq->desc_packed[head_idx].id = used_elem->id;
+
+        rte_smp_wmb();
+        vq->desc_packed[head_idx].flags = head_flags;
+
+        vhost_log_cache_used_vring(dev, vq, head_idx *
+                                   sizeof(struct vring_packed_desc),
+                                   sizeof(struct vring_packed_desc));
+
+        vq->shadow_used_idx = 0;
+        vhost_log_cache_sync(dev, vq);
+}
+
 static __rte_always_inline void
 update_shadow_used_ring_packed(struct vhost_virtqueue *vq,
                          uint16_t desc_idx, uint32_t len, uint16_t count)
@@ -199,6 +225,47 @@ flush_used_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
         }
 }
 
+static __rte_always_inline void
+update_dequeue_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+                            uint16_t *ids)
+{
+        uint16_t flags = 0;
+        uint16_t i;
+
+        if (vq->used_wrap_counter)
+                flags = PACKED_TX_USED_FLAG;
+        else
+                flags = PACKED_TX_USED_WRAP_FLAG;
+
+        if (!vq->shadow_used_idx) {
+                vq->dequeue_shadow_head = vq->last_used_idx;
+                vq->shadow_used_packed[0].id = ids[0];
+                vq->shadow_used_packed[0].len = 0;
+                vq->shadow_used_packed[0].count = 1;
+                vq->shadow_used_packed[0].used_idx = vq->last_used_idx;
+                vq->shadow_used_packed[0].used_wrap_counter =
+                        vq->used_wrap_counter;
+
+                UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
+                for (i = 1; i < PACKED_BATCH_SIZE; i++)
+                        vq->desc_packed[vq->last_used_idx + i].id = ids[i];
+                rte_smp_wmb();
+                UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
+                for (i = 1; i < PACKED_BATCH_SIZE; i++)
+                        vq->desc_packed[vq->last_used_idx + i].flags = flags;
+
+                vq->shadow_used_idx = 1;
+                vq->last_used_idx += PACKED_BATCH_SIZE;
+                if (vq->last_used_idx >= vq->size) {
+                        vq->used_wrap_counter ^= 1;
+                        vq->last_used_idx -= vq->size;
+                }
+        } else {
+                uint64_t lens[PACKED_BATCH_SIZE] = {0};
+                flush_used_batch_packed(dev, vq, lens, ids, flags);
+        }
+}
+
 static __rte_always_inline void
 flush_enqueue_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
                            uint64_t *lens, uint16_t *ids)
@@ -306,11 +373,29 @@ flush_enqueue_packed(struct virtio_net *dev,
                 if (vq->enqueue_shadow_count >= PACKED_BATCH_SIZE) {
                         do_data_copy_enqueue(dev, vq);
-                        flush_shadow_used_ring_packed(dev, vq);
+                        flush_enqueue_shadow_used_ring_packed(dev, vq);
                 }
         }
 }
 
+static __rte_unused void
+flush_dequeue_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+        if (!vq->shadow_used_idx)
+                return;
+
+        int16_t shadow_count = vq->last_used_idx - vq->dequeue_shadow_head;
+        if (shadow_count <= 0)
+                shadow_count += vq->size;
+
+        /* buffer used descs as many as possible when doing dequeue */
+        if ((uint16_t)shadow_count >= (vq->size - MAX_PKT_BURST)) {
+                do_data_copy_dequeue(vq);
+                flush_dequeue_shadow_used_ring_packed(dev, vq);
+                vhost_vring_call_packed(dev, vq);
+        }
+}
+
 /* avoid write operation when necessary, to lessen cache issues */
 #define ASSIGN_UNLESS_EQUAL(var, val) do { \
         if ((var) != (val)) \
@@ -1165,7 +1250,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
         do_data_copy_enqueue(dev, vq);
 
         if (likely(vq->shadow_used_idx)) {
-                flush_shadow_used_ring_packed(dev, vq);
+                flush_enqueue_shadow_used_ring_packed(dev, vq);
                 vhost_vring_call_packed(dev, vq);
         }
 
@@ -1796,6 +1881,8 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
                            pkts[i]->pkt_len);
         }
 
+        update_dequeue_batch_packed(dev, vq, ids);
+
         if (virtio_net_with_host_offload(dev)) {
                 UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
                 for (i = 0; i < PACKED_BATCH_SIZE; i++) {
@@ -1896,7 +1983,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
         }
 
         if (likely(vq->shadow_used_idx)) {
-                flush_shadow_used_ring_packed(dev, vq);
+                flush_dequeue_shadow_used_ring_packed(dev, vq);
                 vhost_vring_call_packed(dev, vq);
         }
 }
@@ -1975,7 +2062,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
         if (unlikely(i < count))
                 vq->shadow_used_idx = i;
 
         if (likely(vq->shadow_used_idx)) {
-                flush_shadow_used_ring_packed(dev, vq);
+                flush_dequeue_shadow_used_ring_packed(dev, vq);
                 vhost_vring_call_packed(dev, vq);
         }
 }