[v1] net/virtio: fix vectorized Rx queue stuck

Message ID 20210414141404.9486-1-xuemingl@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series [v1] net/virtio: fix vectorized Rx queue stuck

Checks

Context Check Description
ci/iol-testing fail Testing issues
ci/intel-Testing success Testing PASS
ci/Intel-compilation success Compilation OK
ci/github-robot success github build: passed
ci/travis-robot success travis build: passed
ci/checkpatch warning coding style issues

Commit Message

Xueming Li April 14, 2021, 2:14 p.m. UTC
From: ".Xueming Li" <xuemingl@nvidia.com>

When the Rx queue works in vectorized mode with rxd <= 512, under a
high-PPS traffic rate, testpmd often starts, receives only rxd packets,
and then stops receiving.

Testpmd starts with an rxq flush that tries to receive and drop up to
MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
size, all descriptors in the used queue are consumed without being
rearmed, so the device cannot receive more packets. Every following Rx
burst returns immediately because no used descriptors are found, the
rearm logic is skipped, and the Rx vq stays starved.

To avoid starving the Rx vq, this patch always checks the available
queue and rearms it if needed, even when the device reports no used
descriptors.

Fixes: fc3d66212fed ("virtio: add vector Rx")
Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
Cc: jerin.jacob@caviumnetworks.com
Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
Cc: drc@linux.vnet.ibm.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
 3 files changed, 18 insertions(+), 18 deletions(-)
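
The fix is an ordering change inside virtio_recv_pkts_vec(): the rearm-and-kick
block is hoisted above the used-descriptor handling, so a fully drained ring is
refilled even when the device has not reported any used descriptors yet. Below
is a minimal sketch of the fixed control flow, not the exact driver code; the
rxvq->vq accessor and the elided dequeue loop are simplified assumptions.

uint16_t
virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	struct virtnet_rx *rxvq = rx_queue;
	struct virtqueue *vq = rxvq->vq;   /* assumed field name; may differ by DPDK version */
	uint16_t nb_used, nb_rx = 0;

	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
		return 0;

	/*
	 * Rearm first: after a burst that drained the whole ring,
	 * vq_free_cnt equals the ring size and the device has no
	 * descriptors left to fill, so waiting for used descriptors
	 * before rearming would leave the queue stuck forever.
	 */
	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
		virtio_rxq_rearm_vec(rxvq);
		if (unlikely(virtqueue_kick_prepare(vq)))
			virtqueue_notify(vq);
	}

	nb_used = virtqueue_nused(vq);
	if (unlikely(nb_used == 0))
		return 0;   /* ring was already rearmed above, so the next burst can progress */

	/* ... vectorized dequeue loop (elided): copies up to nb_used
	 * packets into rx_pkts and updates nb_rx ... */
	return nb_rx;
}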

Comments

David Christensen April 16, 2021, 8:58 p.m. UTC | #1
> When the Rx queue works in vectorized mode with rxd <= 512, under a
> high-PPS traffic rate, testpmd often starts, receives only rxd packets,
> and then stops receiving.
> 
> Testpmd starts with an rxq flush that tries to receive and drop up to
> MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
> size, all descriptors in the used queue are consumed without being
> rearmed, so the device cannot receive more packets. Every following Rx
> burst returns immediately because no used descriptors are found, the
> rearm logic is skipped, and the Rx vq stays starved.
> 
> To avoid starving the Rx vq, this patch always checks the available
> queue and rearms it if needed, even when the device reports no used
> descriptors.
> 
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>   drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
>   drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
>   drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
>   3 files changed, 18 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> index 62e5100a48..7534974ef4 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> @@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>   	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
>   		return 0;
> 
> +	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> +		virtio_rxq_rearm_vec(rxvq);
> +		if (unlikely(virtqueue_kick_prepare(vq)))
> +			virtqueue_notify(vq);
> +	}
> +
>   	nb_used = virtqueue_nused(vq);
> 
>   	rte_compiler_barrier();
> @@ -102,12 +108,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> 
>   	rte_prefetch0(rused);
> 
> -	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> -		virtio_rxq_rearm_vec(rxvq);
> -		if (unlikely(virtqueue_kick_prepare(vq)))
> -			virtqueue_notify(vq);
> -	}
> -
>   	nb_total = nb_used;
>   	ref_rx_pkts = rx_pkts;
>   	for (nb_pkts_received = 0;

Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>
Maxime Coquelin May 3, 2021, 2:53 p.m. UTC | #2
On 4/14/21 4:14 PM, Xueming Li wrote:
> From: ".Xueming Li" <xuemingl@nvidia.com>
> 
> When the Rx queue works in vectorized mode with rxd <= 512, under a
> high-PPS traffic rate, testpmd often starts, receives only rxd packets,
> and then stops receiving.
> 
> Testpmd starts with an rxq flush that tries to receive and drop up to
> MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
> size, all descriptors in the used queue are consumed without being
> rearmed, so the device cannot receive more packets. Every following Rx
> burst returns immediately because no used descriptors are found, the
> rearm logic is skipped, and the Rx vq stays starved.
> 
> To avoid starving the Rx vq, this patch always checks the available
> queue and rearms it if needed, even when the device reports no used
> descriptors.
> 
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
>  3 files changed, 18 insertions(+), 18 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
Maxime Coquelin May 4, 2021, 8:26 a.m. UTC | #3
On 4/14/21 4:14 PM, Xueming Li wrote:
> From: ".Xueming Li" <xuemingl@nvidia.com>
> 
> When the Rx queue works in vectorized mode with rxd <= 512, under a
> high-PPS traffic rate, testpmd often starts, receives only rxd packets,
> and then stops receiving.
> 
> Testpmd starts with an rxq flush that tries to receive and drop up to
> MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
> size, all descriptors in the used queue are consumed without being
> rearmed, so the device cannot receive more packets. Every following Rx
> burst returns immediately because no used descriptors are found, the
> rearm logic is skipped, and the Rx vq stays starved.
> 
> To avoid starving the Rx vq, this patch always checks the available
> queue and rearms it if needed, even when the device reports no used
> descriptors.
> 
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
>  3 files changed, 18 insertions(+), 18 deletions(-)
> 

Applied to dpdk-next-virtio/main.

Thanks,
Maxime

Patch

diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
index 62e5100a48..7534974ef4 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
@@ -85,6 +85,12 @@  virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	nb_used = virtqueue_nused(vq);
 
 	rte_compiler_barrier();
@@ -102,12 +108,6 @@  virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
index c8e4b13a02..7fd92d1b0c 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
@@ -84,6 +84,12 @@  virtio_recv_pkts_vec(void *rx_queue,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	/* virtqueue_nused has a load-acquire or rte_io_rmb inside */
 	nb_used = virtqueue_nused(vq);
 
@@ -100,12 +106,6 @@  virtio_recv_pkts_vec(void *rx_queue,
 
 	rte_prefetch_non_temporal(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
index ff4eba33d6..7577f5e86d 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
@@ -85,6 +85,12 @@  virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	nb_used = virtqueue_nused(vq);
 
 	if (unlikely(nb_used == 0))
@@ -100,12 +106,6 @@  virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;