From patchwork Wed Apr 14 14:14:04 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 91463
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Xueming Li
Cc: ".Xueming Li", dev@dpdk.org, 谢华伟(此时此刻), jerin.jacob@caviumnetworks.com,
 drc@linux.vnet.ibm.com, stable@dpdk.org, Maxime Coquelin, Chenbo Xia,
 Jerin Jacob, Ruifeng Wang, Bruce Richardson, Konstantin Ananyev,
 Jianfeng Tan, Huawei Xie, Jianbo Liu, Yuanhan Liu
Date: Wed, 14 Apr 2021 22:14:04 +0800
Message-Id: <20210414141404.9486-1-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210414042631.7041-1-xuemingl@nvidia.com>
References: <20210414042631.7041-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v1] net/virtio: fix vectorized Rx queue stuck
List-Id: DPDK patches and discussions

From: ".Xueming Li"

When an Rx queue works in vectorized mode with rxd <= 512, under a high-PPS
traffic rate, testpmd often starts, receives exactly rxd packets, and then
stops receiving.

Testpmd starts with an rxq flush that tries to receive and drop
MAX_PKT_BURST(512) packets. When the Rx burst size is >= the Rx queue size,
that single burst consumes every descriptor in the used queue without
rearming any, so the device cannot receive more packets. Each following Rx
burst then returns immediately because no used descriptors are found, the
rearm logic is skipped, and the Rx vq stays starved.

To avoid Rx vq starvation, this patch always checks the available queue and
rearms it if needed, even when the device reports no used descriptors.
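To make the reordering easier to reason about, below is a minimal standalone
model of the Rx burst control flow described above. It is a sketch, not the
driver code: struct sim_vq, sim_device_deliver(), sim_rearm() and
sim_rx_burst() are hypothetical stand-ins for the virtqueue, the device
filling available descriptors, virtio_rxq_rearm_vec() and the vectorized Rx
burst. Only the ordering of the rearm check relative to the "no used
descriptors" early return is taken from the patch.

/*
 * Minimal standalone model (not the driver code) of the Rx-burst flow.
 * All names here are hypothetical; the real driver uses virtqueue_nused(),
 * virtio_rxq_rearm_vec() and vq->vq_free_cnt.
 */
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_SIZE   512
#define REARM_THRESH  64

struct sim_vq {
	unsigned int avail;    /* descriptors the device may fill */
	unsigned int used;     /* descriptors filled, waiting for the driver */
	unsigned int free_cnt; /* descriptors the driver may rearm */
};

/* Device side: under constant traffic it fills every available descriptor. */
static void sim_device_deliver(struct sim_vq *vq)
{
	vq->used += vq->avail;
	vq->avail = 0;
}

/* Driver side: hand free descriptors back to the device. */
static void sim_rearm(struct sim_vq *vq)
{
	vq->avail += vq->free_cnt;
	vq->free_cnt = 0;
}

/* One Rx burst; rearm_first selects the fixed (true) or old (false) order. */
static unsigned int sim_rx_burst(struct sim_vq *vq, unsigned int nb_pkts,
				 bool rearm_first)
{
	unsigned int nb_rx;

	if (rearm_first && vq->free_cnt >= REARM_THRESH)
		sim_rearm(vq);   /* fixed order: rearm before the used check */

	if (vq->used == 0)
		return 0;        /* old order never gets past this after the flush */

	if (!rearm_first && vq->free_cnt >= REARM_THRESH)
		sim_rearm(vq);   /* old order: rearm only when packets were found */

	nb_rx = vq->used < nb_pkts ? vq->used : nb_pkts;
	vq->used -= nb_rx;
	vq->free_cnt += nb_rx;   /* received mbufs free their descriptors */
	return nb_rx;
}

int main(void)
{
	bool order[2] = { false, true };

	for (int i = 0; i < 2; i++) {
		struct sim_vq vq = { .avail = QUEUE_SIZE };
		unsigned int total = 0;

		/* flush burst of MAX_PKT_BURST(512) >= queue size ... */
		sim_device_deliver(&vq);
		total += sim_rx_burst(&vq, 512, order[i]);
		/* ... then regular bursts under continued traffic */
		for (int b = 0; b < 4; b++) {
			sim_device_deliver(&vq);
			total += sim_rx_burst(&vq, 32, order[i]);
		}
		printf("%s rearm: %u packets received after the flush burst\n",
		       order[i] ? "early" : "late", total - 512);
	}
	return 0;
}

With the old ordering the model receives nothing after the flush burst;
with the rearm hoisted above the early return it keeps receiving, which is
the behavior the patch restores in the three vectorized Rx paths below.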
Fixes: fc3d66212fed ("virtio: add vector Rx")
Cc: 谢华伟(此时此刻)
Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
Cc: jerin.jacob@caviumnetworks.com
Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
Cc: drc@linux.vnet.ibm.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li
Reviewed-by: David Christensen
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
index 62e5100a48..7534974ef4 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
@@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	nb_used = virtqueue_nused(vq);
 
 	rte_compiler_barrier();
@@ -102,12 +108,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
index c8e4b13a02..7fd92d1b0c 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
@@ -84,6 +84,12 @@ virtio_recv_pkts_vec(void *rx_queue,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	/* virtqueue_nused has a load-acquire or rte_io_rmb inside */
 	nb_used = virtqueue_nused(vq);
 
@@ -100,12 +106,6 @@ virtio_recv_pkts_vec(void *rx_queue,
 
 	rte_prefetch_non_temporal(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
index ff4eba33d6..7577f5e86d 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
@@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	nb_used = virtqueue_nused(vq);
 
 	if (unlikely(nb_used == 0))
@@ -100,12 +106,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;