From patchwork Tue Aug 27 08:19:38 2019
X-Patchwork-Submitter: Joyce Kong <joyce.kong@arm.com>
X-Patchwork-Id: 58028
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Joyce Kong <joyce.kong@arm.com>
To: dev@dpdk.org
Cc: nd@arm.com, maxime.coquelin@redhat.com, thomas@monjalon.net,
 honnappa.nagarahalli@arm.com, gavin.hu@arm.com
Date: Tue, 27 Aug 2019 16:19:38 +0800
Message-Id: <1566893979-3290-2-git-send-email-joyce.kong@arm.com>
In-Reply-To: <1566893979-3290-1-git-send-email-joyce.kong@arm.com>
References: <1566893979-3290-1-git-send-email-joyce.kong@arm.com>
Subject: [dpdk-dev] [RFC PATCH 1/2] virtio: one way barrier for packed vring
 desc avail flags

When VIRTIO_F_ORDER_PLATFORM (bit 36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they
can run on identical CPUs in an SMP configuration. In this case
(vq->hw->weak_barriers == 1), a weak form of memory barrier such as
rte_smp_r/wmb, rather than rte_cio_r/wmb, is sufficient and yields
better performance.

For that case, this patch yields even better performance by replacing
the two-way barriers with C11 one-way barriers. Meanwhile, a read
barrier is still required to enforce the ordering between reads of a
descriptor's flags and reads of its content [1]. With C11, a
load-acquire enforces this ordering in place of the rmb barrier.

[1] https://patchwork.dpdk.org/patch/49109/
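To make the pairing concrete, here is a minimal, self-contained sketch
of the store-release/load-acquire idiom the patch relies on, using the
same GCC/Clang __atomic builtins that appear in the diff. The
demo_desc struct and the demo_* names are illustrative stand-ins, not
part of the patch, and this sketch shows only the weak-barriers path
(the real code falls back to rte_cio_wmb() when weak_barriers is 0):

/* Minimal sketch of the store-release/load-acquire pairing on a
 * packed-ring-style descriptor (illustrative, not the DPDK code).
 * Build: gcc -O2 -o demo demo.c
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_F_AVAIL (1 << 7)
#define DEMO_F_USED  (1 << 15)

struct demo_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t id;
	uint16_t flags;
};

/* Driver side: write the descriptor body first, then publish it with a
 * store-release on flags so the body writes cannot be reordered after
 * the flags write.
 */
static void
demo_publish(struct demo_desc *d, uint64_t addr, uint32_t len,
	     bool wrap_counter)
{
	uint16_t flags = wrap_counter ? DEMO_F_AVAIL : DEMO_F_USED;

	d->addr = addr;
	d->len = len;
	__atomic_store_n(&d->flags, flags, __ATOMIC_RELEASE);
}

/* Device side: a load-acquire on flags replaces the old
 * "read flags; rte_smp_rmb(); read body" sequence.
 */
static bool
demo_is_avail(struct demo_desc *d, bool wrap_counter)
{
	uint16_t flags = __atomic_load_n(&d->flags, __ATOMIC_ACQUIRE);

	return wrap_counter == !!(flags & DEMO_F_AVAIL) &&
		wrap_counter != !!(flags & DEMO_F_USED);
}

int
main(void)
{
	struct demo_desc d = { 0 };

	demo_publish(&d, 0x1000, 64, true);
	if (demo_is_avail(&d, true))
		printf("avail: addr=%#" PRIx64 " len=%" PRIu32 "\n",
		       d.addr, d.len);
	return 0;
}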
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
---
 drivers/net/virtio/virtio_rxtx.c                 | 26 ++++++++++++++++++------
 drivers/net/virtio/virtio_user/virtio_user_dev.c |  6 +++++-
 lib/librte_vhost/vhost.h                         |  2 +-
 lib/librte_vhost/virtio_net.c                    | 11 +++++-----
 4 files changed, 31 insertions(+), 14 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 27ead19..2a2153c 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -456,8 +456,14 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
 	vq->vq_desc_head_idx = dxp->next;
 	if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
 		vq->vq_desc_tail_idx = vq->vq_desc_head_idx;
-	virtio_wmb(hw->weak_barriers);
-	start_dp[idx].flags = flags;
+
+	if (hw->weak_barriers)
+		__atomic_store_n(&start_dp[idx].flags, flags,
+				 __ATOMIC_RELEASE);
+	else {
+		rte_cio_wmb();
+		start_dp[idx].flags = flags;
+	}
 	if (++vq->vq_avail_idx >= vq->vq_nentries) {
 		vq->vq_avail_idx -= vq->vq_nentries;
 		vq->vq_packed.cached_flags ^=
@@ -671,8 +677,12 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
 		vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
 	}
 
-	virtio_wmb(vq->hw->weak_barriers);
-	dp->flags = flags;
+	if (vq->hw->weak_barriers)
+		__atomic_store_n(&dp->flags, flags, __ATOMIC_RELEASE);
+	else {
+		rte_cio_wmb();
+		dp->flags = flags;
+	}
 }
 
 static inline void
@@ -763,8 +773,12 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
 	}
 
-	virtio_wmb(vq->hw->weak_barriers);
-	head_dp->flags = head_flags;
+	if (vq->hw->weak_barriers)
+		__atomic_store_n(&head_dp->flags, head_flags, __ATOMIC_RELEASE);
+	else {
+		rte_cio_wmb();
+		head_dp->flags = head_flags;
+	}
 }
 
 static inline void
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index fab87eb..7911c39 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -624,7 +624,7 @@ virtio_user_handle_ctrl_msg(struct virtio_user_dev *dev, struct vring *vring,
 static inline int
 desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
 {
-	uint16_t flags = desc->flags;
+	uint16_t flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);
 
 	return wrap_counter == !!(flags & VRING_PACKED_DESC_F_AVAIL) &&
 		wrap_counter != !!(flags & VRING_PACKED_DESC_F_USED);
@@ -684,6 +684,10 @@ virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
 	struct vring_packed *vring = &dev->packed_vrings[queue_idx];
 	uint16_t n_descs, flags;
 
+	/* Perform a load-acquire barrier in desc_is_avail to
+	 * enforce the ordering between desc flags and desc
+	 * content.
+	 */
 	while (desc_is_avail(&vring->desc[vq->used_idx],
 			     vq->used_wrap_counter)) {
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 884befa..d294ed1 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -344,7 +344,7 @@ vq_is_packed(struct virtio_net *dev)
 static inline bool
 desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
 {
-	uint16_t flags = *((volatile uint16_t *) &desc->flags);
+	uint16_t flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);
 
 	return wrap_counter == !!(flags & VRING_DESC_F_AVAIL) &&
 		wrap_counter != !!(flags & VRING_DESC_F_USED);
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 5b85b83..e7463ff 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -503,14 +503,13 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	if (avail_idx < vq->last_avail_idx)
 		wrap_counter ^= 1;
 
-	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
-		return -1;
-
 	/*
-	 * The ordering between desc flags and desc
-	 * content reads need to be enforced.
+	 * Perform a load-acquire barrier in desc_is_avail to
+	 * enforce the ordering between desc flags and desc
+	 * content.
 	 */
-	rte_smp_rmb();
+	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
+		return -1;
 
 	*desc_count = 0;
 	*len = 0;

From patchwork Tue Aug 27 08:19:39 2019
X-Patchwork-Submitter: Joyce Kong <joyce.kong@arm.com>
X-Patchwork-Id: 58029
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Joyce Kong <joyce.kong@arm.com>
To: dev@dpdk.org
Cc: nd@arm.com, maxime.coquelin@redhat.com, thomas@monjalon.net,
 honnappa.nagarahalli@arm.com, gavin.hu@arm.com
Date: Tue, 27 Aug 2019 16:19:39 +0800
Message-Id: <1566893979-3290-3-git-send-email-joyce.kong@arm.com>
In-Reply-To: <1566893979-3290-1-git-send-email-joyce.kong@arm.com>
References: <1566893979-3290-1-git-send-email-joyce.kong@arm.com>
Subject: [dpdk-dev] [RFC PATCH 2/2] virtio: one way barrier for packed vring
 desc used flags

When VIRTIO_F_ORDER_PLATFORM (bit 36) is not negotiated, the frontend
and backend are assumed to be implemented in software, that is, they
can run on identical CPUs in an SMP configuration. In this case
(vq->hw->weak_barriers == 1), a weak form of memory barrier such as
rte_smp_r/wmb, rather than rte_cio_r/wmb, is sufficient and yields
better performance.

For that case, this patch yields even better performance by replacing
the two-way barriers with C11 one-way barriers, this time on the used
side of the ring.
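The interesting part of this patch is the split between the weak and
non-weak barrier paths. Below is a condensed, self-contained sketch of
the desc_is_used() logic the diff introduces; demo_io_rmb() is an
illustrative stand-in for DPDK's rte_cio_rmb() (approximated here with
an acquire fence), weak_barriers mirrors vq->hw->weak_barriers, and
all demo_* names are hypothetical, not the DPDK source:

/* Condensed sketch of the two barrier paths in desc_is_used() after
 * this patch (illustrative stand-ins, not the DPDK source).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_F_AVAIL (1 << 7)
#define DEMO_F_USED  (1 << 15)

struct demo_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t id;
	uint16_t flags;
};

/* Stand-in for rte_cio_rmb(): keeps the flags read ordered before
 * later reads of the descriptor body.
 */
static inline void
demo_io_rmb(void)
{
	__atomic_thread_fence(__ATOMIC_ACQUIRE);
}

static inline bool
demo_desc_is_used(struct demo_desc *desc, bool used_wrap_counter,
		  bool weak_barriers)
{
	uint16_t used, avail, flags;

	if (weak_barriers) {
		/* Software peer: a one-way load-acquire is enough. */
		flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);
	} else {
		/* Hardware peer: plain read plus an I/O read barrier. */
		flags = desc->flags;
		demo_io_rmb();
	}

	used = !!(flags & DEMO_F_USED);
	avail = !!(flags & DEMO_F_AVAIL);

	return avail == used && used == used_wrap_counter;
}

int
main(void)
{
	struct demo_desc d = { .flags = DEMO_F_AVAIL | DEMO_F_USED };

	/* Both paths should agree that the descriptor is used. */
	printf("weak: %d  strong: %d\n",
	       demo_desc_is_used(&d, true, true),
	       demo_desc_is_used(&d, true, false));
	return 0;
}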
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
---
 drivers/net/virtio/virtio_rxtx.c                 | 12 +++++++++---
 drivers/net/virtio/virtio_user/virtio_user_dev.c |  4 ++--
 drivers/net/virtio/virtqueue.h                   |  7 ++++++-
 lib/librte_vhost/virtio_net.c                    |  5 ++---
 4 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 2a2153c..1d818c8 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -122,9 +122,11 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
 	for (i = 0; i < num; i++) {
 		used_idx = vq->vq_used_cons_idx;
+		/* desc_is_used has a load-acquire or rte_cio_rmb inside
+		 * and wait for used desc in virtqueue.
+		 */
 		if (!desc_is_used(&desc[used_idx], vq))
 			return i;
-		virtio_rmb(vq->hw->weak_barriers);
 		len[i] = desc[used_idx].len;
 		id = desc[used_idx].id;
 		cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie;
@@ -233,8 +235,10 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num)
 	struct vq_desc_extra *dxp;
 
 	used_idx = vq->vq_used_cons_idx;
+	/* desc_is_used has a load-acquire or rte_cio_rmb inside
+	 * and wait for used desc in virtqueue.
+	 */
 	while (num > 0 && desc_is_used(&desc[used_idx], vq)) {
-		virtio_rmb(vq->hw->weak_barriers);
 		id = desc[used_idx].id;
 		do {
 			curr_id = used_idx;
@@ -265,8 +269,10 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num)
 	struct vq_desc_extra *dxp;
 
 	used_idx = vq->vq_used_cons_idx;
+	/* desc_is_used has a load-acquire or rte_cio_rmb inside
+	 * and wait for used desc in virtqueue.
+	 */
 	while (num-- && desc_is_used(&desc[used_idx], vq)) {
-		virtio_rmb(vq->hw->weak_barriers);
 		id = desc[used_idx].id;
 		dxp = &vq->vq_descx[id];
 		vq->vq_used_cons_idx += dxp->ndescs;
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 7911c39..1c575d0 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -698,8 +698,8 @@ virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
 		if (vq->used_wrap_counter)
 			flags |= VRING_PACKED_DESC_F_AVAIL_USED;
 
-		rte_smp_wmb();
-		vring->desc[vq->used_idx].flags = flags;
+		__atomic_store_n(&vring->desc[vq->used_idx].flags, flags,
+				 __ATOMIC_RELEASE);
 
 		vq->used_idx += n_descs;
 		if (vq->used_idx >= dev->queue_size) {
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index c6dd4a3..ee6fcbb 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -286,7 +286,12 @@ desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
 {
 	uint16_t used, avail, flags;
 
-	flags = desc->flags;
+	if (vq->hw->weak_barriers)
+		flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);
+	else {
+		flags = desc->flags;
+		rte_cio_rmb();
+	}
 	used = !!(flags & VRING_PACKED_DESC_F_USED);
 	avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index e7463ff..241d467 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -110,8 +110,6 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
 		used_idx -= vq->size;
 	}
 
-	rte_smp_wmb();
-
 	for (i = 0; i < vq->shadow_used_idx; i++) {
 		uint16_t flags;
 
@@ -147,7 +145,8 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
 		}
 	}
 
-	vq->desc_packed[head_idx].flags = head_flags;
+	__atomic_store_n(&vq->desc_packed[head_idx].flags, head_flags,
+			 __ATOMIC_RELEASE);
 
 	vhost_log_cache_used_vring(dev, vq, head_idx *