[dpdk-dev,1/5] net/virtio: fix vector Rx break caused by rxq flushing

Message ID 20171207053059.19487-2-tiwei.bie@intel.com (mailing list archive)
State Superseded, archived
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Tiwei Bie Dec. 7, 2017, 5:30 a.m. UTC
  The vector Rx path will be broken if the backend has
consumed all the descriptors in the avail ring before the
device is started, because in the current implementation
vector Rx returns immediately without refilling the avail
ring when the used ring is empty. So we have to refill the
avail ring after flushing the elements in the used ring for
vector Rx.

Besides, vector Rx makes different assumptions about the
ring layout and mbuf management, so it needs to be handled
differently.

Fixes: d8227497ec5c ("net/virtio: flush Rx queues on start")
Cc: stable@dpdk.org

Reported-by: Antonio Fischetti <antonio.fischetti@intel.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_ethdev.c |  2 +-
 drivers/net/virtio/virtqueue.c     | 31 ++++++++++++++++++++++++-------
 drivers/net/virtio/virtqueue.h     |  2 +-
 3 files changed, 26 insertions(+), 9 deletions(-)
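
To illustrate the failure mode, a minimal sketch of the early-return
path (the function and its name are hypothetical; the real logic lives
in the driver's virtio_rxtx_simple code):

	#include <rte_mbuf.h>
	#include "virtqueue.h"	/* struct virtqueue, VIRTQUEUE_NUSED() */

	/* Hypothetical sketch only. If the backend drained the avail
	 * ring before the device was started, nb_used is 0 on the very
	 * first poll, we return at once, and nothing ever refills the
	 * avail ring, so the queue stays stuck from then on. */
	static uint16_t
	vector_rx_sketch(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
			 uint16_t nb_pkts)
	{
		uint16_t nb_used = VIRTQUEUE_NUSED(vq);

		if (unlikely(nb_used == 0))
			return 0;	/* early return: no refill happens */

		/* ... dequeue used entries, then rearm the avail ring ... */
		return 0;
	}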
  

Comments

Maxime Coquelin Dec. 7, 2017, 9:14 a.m. UTC | #1
On 12/07/2017 06:30 AM, Tiwei Bie wrote:
> The vector Rx path will be broken if the backend has
> consumed all the descriptors in the avail ring before the
> device is started, because in the current implementation
> vector Rx returns immediately without refilling the avail
> ring when the used ring is empty. So we have to refill the
> avail ring after flushing the elements in the used ring for
> vector Rx.
> 
> Besides, vector Rx makes different assumptions about the
> ring layout and mbuf management, so it needs to be handled
> differently.
> 
> Fixes: d8227497ec5c ("net/virtio: flush Rx queues on start")
> Cc: stable@dpdk.org
> 
> Reported-by: Antonio Fischetti <antonio.fischetti@intel.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>   drivers/net/virtio/virtio_ethdev.c |  2 +-
>   drivers/net/virtio/virtqueue.c     | 31 ++++++++++++++++++++++++-------
>   drivers/net/virtio/virtqueue.h     |  2 +-
>   3 files changed, 26 insertions(+), 9 deletions(-)

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
  
Antonio Fischetti Dec. 7, 2017, 9:30 a.m. UTC | #2
Thanks Tiwei for working on this, I'll give it a try soon.

Antonio

> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Thursday, December 7, 2017 9:15 AM
> To: Bie, Tiwei <tiwei.bie@intel.com>; yliu@fridaylinux.org; dev@dpdk.org
> Cc: Fischetti, Antonio <antonio.fischetti@intel.com>; stable@dpdk.org
> Subject: Re: [PATCH 1/5] net/virtio: fix vector Rx break caused by rxq
> flushing
> 
> On 12/07/2017 06:30 AM, Tiwei Bie wrote:
> > The vector Rx path will be broken if the backend has
> > consumed all the descriptors in the avail ring before the
> > device is started, because in the current implementation
> > vector Rx returns immediately without refilling the avail
> > ring when the used ring is empty. So we have to refill the
> > avail ring after flushing the elements in the used ring for
> > vector Rx.
> >
> > Besides, vector Rx makes different assumptions about the
> > ring layout and mbuf management, so it needs to be handled
> > differently.
> >
> > Fixes: d8227497ec5c ("net/virtio: flush Rx queues on start")
> > Cc: stable@dpdk.org
> >
> > Reported-by: Antonio Fischetti <antonio.fischetti@intel.com>
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> >   drivers/net/virtio/virtio_ethdev.c |  2 +-
> >   drivers/net/virtio/virtqueue.c     | 31 ++++++++++++++++++++++++-------
> >   drivers/net/virtio/virtqueue.h     |  2 +-
> >   3 files changed, 26 insertions(+), 9 deletions(-)
> 
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> Thanks,
> Maxime
  
Kevin Traynor Dec. 7, 2017, 4 p.m. UTC | #3
On 12/07/2017 05:30 AM, Tiwei Bie wrote:
> The vector Rx path will be broken if the backend has
> consumed all the descriptors in the avail ring before the
> device is started, because in the current implementation
> vector Rx returns immediately without refilling the avail
> ring when the used ring is empty. So we have to refill the
> avail ring after flushing the elements in the used ring for
> vector Rx.
> 
> Besides, vector Rx makes different assumptions about the
> ring layout and mbuf management, so it needs to be handled
> differently.
> 

Hi Tiwei, does the issue only occur with the vector Rx? How about if the
simple Rx is used because VIRTIO_NET_F_MRG_RXBUF is set?

Kevin.

> Fixes: d8227497ec5c ("net/virtio: flush Rx queues on start")
> Cc: stable@dpdk.org
> 
> Reported-by: Antonio Fischetti <antonio.fischetti@intel.com>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>  drivers/net/virtio/virtio_ethdev.c |  2 +-
>  drivers/net/virtio/virtqueue.c     | 31 ++++++++++++++++++++++++-------
>  drivers/net/virtio/virtqueue.h     |  2 +-
>  3 files changed, 26 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index e0328f61d..64a0cc608 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -1860,7 +1860,7 @@ virtio_dev_start(struct rte_eth_dev *dev)
>  	for (i = 0; i < dev->data->nb_rx_queues; i++) {
>  		rxvq = dev->data->rx_queues[i];
>  		/* Flush the old packets */
> -		virtqueue_flush(rxvq->vq);
> +		virtqueue_rxvq_flush(rxvq->vq);
>  		virtqueue_notify(rxvq->vq);
>  	}
>  
> diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
> index c3a536f8a..696d0e4a4 100644
> --- a/drivers/net/virtio/virtqueue.c
> +++ b/drivers/net/virtio/virtqueue.c
> @@ -37,6 +37,7 @@
>  #include "virtqueue.h"
>  #include "virtio_logs.h"
>  #include "virtio_pci.h"
> +#include "virtio_rxtx_simple.h"
>  
>  /*
>   * Two types of mbuf to be cleaned:
> @@ -62,8 +63,10 @@ virtqueue_detatch_unused(struct virtqueue *vq)
>  
>  /* Flush the elements in the used ring. */
>  void
> -virtqueue_flush(struct virtqueue *vq)
> +virtqueue_rxvq_flush(struct virtqueue *vq)
>  {
> +	struct virtnet_rx *rxq = &vq->rxq;
> +	struct virtio_hw *hw = vq->hw;
>  	struct vring_used_elem *uep;
>  	struct vq_desc_extra *dxp;
>  	uint16_t used_idx, desc_idx;
> @@ -74,13 +77,27 @@ virtqueue_flush(struct virtqueue *vq)
>  	for (i = 0; i < nb_used; i++) {
>  		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
>  		uep = &vq->vq_ring.used->ring[used_idx];
> -		desc_idx = (uint16_t)uep->id;
> -		dxp = &vq->vq_descx[desc_idx];
> -		if (dxp->cookie != NULL) {
> -			rte_pktmbuf_free(dxp->cookie);
> -			dxp->cookie = NULL;
> +		if (hw->use_simple_rx) {
> +			desc_idx = used_idx;
> +			rte_pktmbuf_free(vq->sw_ring[desc_idx]);
> +			vq->vq_free_cnt++;
> +		} else {
> +			desc_idx = (uint16_t)uep->id;
> +			dxp = &vq->vq_descx[desc_idx];
> +			if (dxp->cookie != NULL) {
> +				rte_pktmbuf_free(dxp->cookie);
> +				dxp->cookie = NULL;
> +			}
> +			vq_ring_free_chain(vq, desc_idx);
>  		}
>  		vq->vq_used_cons_idx++;
> -		vq_ring_free_chain(vq, desc_idx);
> +	}
> +
> +	if (hw->use_simple_rx) {
> +		while (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> +			virtio_rxq_rearm_vec(rxq);
> +			if (virtqueue_kick_prepare(vq))
> +				virtqueue_notify(vq);
> +		}
>  	}
>  }
> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> index 2305d91a4..ab466c2db 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -304,7 +304,7 @@ void virtqueue_dump(struct virtqueue *vq);
>  struct rte_mbuf *virtqueue_detatch_unused(struct virtqueue *vq);
>  
>  /* Flush the elements in the used ring. */
> -void virtqueue_flush(struct virtqueue *vq);
> +void virtqueue_rxvq_flush(struct virtqueue *vq);
>  
>  static inline int
>  virtqueue_full(const struct virtqueue *vq)
>
  
Antonio Fischetti Dec. 7, 2017, 6:10 p.m. UTC | #4
Hi Tiwei, 
I've tested this patch on my 2 test cases described 
in the previous threads
 http://www.dpdk.org/ml/archives/dev/2017-November/081839.html
 http://www.dpdk.org/ml/archives/dev/2017-December/082801.html

 1. testpmd on the host and testpmd in the guest
 2. OvS-DPDK on the host and testpmd in the guest

and it works fine. 

I'm using DPDK v17.11 + this patch and I see traffic is now
flowing through the VM.


Tested-by: Antonio Fischetti <antonio.fischetti@intel.com>




> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Fischetti, Antonio
> Sent: Thursday, December 7, 2017 9:30 AM
> To: Maxime Coquelin <maxime.coquelin@redhat.com>; Bie, Tiwei
> <tiwei.bie@intel.com>; yliu@fridaylinux.org; dev@dpdk.org
> Cc: stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/5] net/virtio: fix vector Rx break
> caused by rxq flushing
> 
> Thanks Tiwei for working on this, I'll give it a try soon.
> 
> Antonio
> 
> > -----Original Message-----
> > From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> > Sent: Thursday, December 7, 2017 9:15 AM
> > To: Bie, Tiwei <tiwei.bie@intel.com>; yliu@fridaylinux.org; dev@dpdk.org
> > Cc: Fischetti, Antonio <antonio.fischetti@intel.com>; stable@dpdk.org
> > Subject: Re: [PATCH 1/5] net/virtio: fix vector Rx break caused by rxq
> > flushing
> >
> > On 12/07/2017 06:30 AM, Tiwei Bie wrote:
> > > The vector Rx path will be broken if the backend has
> > > consumed all the descriptors in the avail ring before the
> > > device is started, because in the current implementation
> > > vector Rx returns immediately without refilling the avail
> > > ring when the used ring is empty. So we have to refill the
> > > avail ring after flushing the elements in the used ring for
> > > vector Rx.
> > >
> > > Besides, vector Rx makes different assumptions about the
> > > ring layout and mbuf management, so it needs to be handled
> > > differently.
> > >
> > > Fixes: d8227497ec5c ("net/virtio: flush Rx queues on start")
> > > Cc: stable@dpdk.org
> > >
> > > Reported-by: Antonio Fischetti <antonio.fischetti@intel.com>
> > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > > ---
> > >   drivers/net/virtio/virtio_ethdev.c |  2 +-
> > >   drivers/net/virtio/virtqueue.c     | 31 ++++++++++++++++++++++++-------
> > >   drivers/net/virtio/virtqueue.h     |  2 +-
> > >   3 files changed, 26 insertions(+), 9 deletions(-)
> >
> > Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >
> > Thanks,
> > Maxime
  
Tiwei Bie Dec. 8, 2017, 2:30 a.m. UTC | #5
On Thu, Dec 07, 2017 at 04:00:57PM +0000, Kevin Traynor wrote:
> On 12/07/2017 05:30 AM, Tiwei Bie wrote:
> > The vector Rx path will be broken if the backend has
> > consumed all the descriptors in the avail ring before the
> > device is started, because in the current implementation
> > vector Rx returns immediately without refilling the avail
> > ring when the used ring is empty. So we have to refill the
> > avail ring after flushing the elements in the used ring for
> > vector Rx.
> > 
> > Besides, vector Rx makes different assumptions about the
> > ring layout and mbuf management, so it needs to be handled
> > differently.
> > 
> 
> Hi Tiwei, does the issue only occur with the vector rx? How about if the
> simple rx is used because VIRTIO_NET_F_MRG_RXBUF is set?

Hi Kevin,

Yes, you are right! The issue only occurs with the vector
Rx. The vector Rx (i.e. the simple Rx) won't be used if
VIRTIO_NET_F_MRG_RXBUF is set.

Best regards,
Tiwei Bie
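
For reference, a simplified sketch of how the Rx handler is chosen,
loosely based on set_rxtx_funcs() in virtio_ethdev.c (the exact
conditions may differ between DPDK versions); the simple/vector path
is only taken when hw->use_simple_rx is set, which requires that
VIRTIO_NET_F_MRG_RXBUF was not negotiated:

	if (hw->use_simple_rx) {
		/* simple/vector path: assumes one descriptor per packet
		 * and sw_ring based mbuf management */
		eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
	} else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
		/* mergeable buffers negotiated: the vector path is
		 * never used */
		eth_dev->rx_pkt_burst = virtio_recv_mergeable_pkts;
	} else {
		eth_dev->rx_pkt_burst = virtio_recv_pkts;
	}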
  
Kevin Traynor Dec. 8, 2017, 10:17 a.m. UTC | #6
On 12/08/2017 02:30 AM, Tiwei Bie wrote:
> On Thu, Dec 07, 2017 at 04:00:57PM +0000, Kevin Traynor wrote:
>> On 12/07/2017 05:30 AM, Tiwei Bie wrote:
>>> The vector Rx path will be broken if the backend has
>>> consumed all the descriptors in the avail ring before the
>>> device is started, because in the current implementation
>>> vector Rx returns immediately without refilling the avail
>>> ring when the used ring is empty. So we have to refill the
>>> avail ring after flushing the elements in the used ring for
>>> vector Rx.
>>>
>>> Besides, vector Rx makes different assumptions about the
>>> ring layout and mbuf management, so it needs to be handled
>>> differently.
>>>
>>
>> Hi Tiwei, does the issue only occur with the vector Rx? How about if the
>> simple Rx is used because VIRTIO_NET_F_MRG_RXBUF is set?
> 
> Hi Kevin,
> 
> Yes, you are right! The issue only occurs with the vector
> Rx. The vector Rx (i.e. the simple Rx) won't be used if
> VIRTIO_NET_F_MRG_RXBUF is set.
> 
> Best regards,
> Tiwei Bie
> 

Thanks for clarifying this Tiwei,

Kevin.
  

Patch

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e0328f61d..64a0cc608 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1860,7 +1860,7 @@  virtio_dev_start(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxvq = dev->data->rx_queues[i];
 		/* Flush the old packets */
-		virtqueue_flush(rxvq->vq);
+		virtqueue_rxvq_flush(rxvq->vq);
 		virtqueue_notify(rxvq->vq);
 	}
 
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index c3a536f8a..696d0e4a4 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -37,6 +37,7 @@ 
 #include "virtqueue.h"
 #include "virtio_logs.h"
 #include "virtio_pci.h"
+#include "virtio_rxtx_simple.h"
 
 /*
  * Two types of mbuf to be cleaned:
@@ -62,8 +63,10 @@  virtqueue_detatch_unused(struct virtqueue *vq)
 
 /* Flush the elements in the used ring. */
 void
-virtqueue_flush(struct virtqueue *vq)
+virtqueue_rxvq_flush(struct virtqueue *vq)
 {
+	struct virtnet_rx *rxq = &vq->rxq;
+	struct virtio_hw *hw = vq->hw;
 	struct vring_used_elem *uep;
 	struct vq_desc_extra *dxp;
 	uint16_t used_idx, desc_idx;
@@ -74,13 +77,27 @@  virtqueue_flush(struct virtqueue *vq)
 	for (i = 0; i < nb_used; i++) {
 		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
 		uep = &vq->vq_ring.used->ring[used_idx];
-		desc_idx = (uint16_t)uep->id;
-		dxp = &vq->vq_descx[desc_idx];
-		if (dxp->cookie != NULL) {
-			rte_pktmbuf_free(dxp->cookie);
-			dxp->cookie = NULL;
+		if (hw->use_simple_rx) {
+			desc_idx = used_idx;
+			rte_pktmbuf_free(vq->sw_ring[desc_idx]);
+			vq->vq_free_cnt++;
+		} else {
+			desc_idx = (uint16_t)uep->id;
+			dxp = &vq->vq_descx[desc_idx];
+			if (dxp->cookie != NULL) {
+				rte_pktmbuf_free(dxp->cookie);
+				dxp->cookie = NULL;
+			}
+			vq_ring_free_chain(vq, desc_idx);
 		}
 		vq->vq_used_cons_idx++;
-		vq_ring_free_chain(vq, desc_idx);
+	}
+
+	if (hw->use_simple_rx) {
+		while (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+			virtio_rxq_rearm_vec(rxq);
+			if (virtqueue_kick_prepare(vq))
+				virtqueue_notify(vq);
+		}
 	}
 }
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 2305d91a4..ab466c2db 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -304,7 +304,7 @@  void virtqueue_dump(struct virtqueue *vq);
 struct rte_mbuf *virtqueue_detatch_unused(struct virtqueue *vq);
 
 /* Flush the elements in the used ring. */
-void virtqueue_flush(struct virtqueue *vq);
+void virtqueue_rxvq_flush(struct virtqueue *vq);
 
 static inline int
 virtqueue_full(const struct virtqueue *vq)
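
For context on the rearm loop added above, a rough sketch of what
virtio_rxq_rearm_vec() does (simplified from virtio_rxtx_simple.h;
the per-mbuf re-initialization and the allocation-failure accounting
are omitted, and details may differ): it bulk-allocates
RTE_VIRTIO_VPMD_RX_REARM_THRESH mbufs, parks them in sw_ring, points
the corresponding descriptors at their buffers, and publishes them by
advancing the avail index.

	static inline void
	rearm_vec_sketch(struct virtnet_rx *rxq)
	{
		struct virtqueue *vq = rxq->vq;
		uint16_t desc_idx = vq->vq_avail_idx & (vq->vq_nentries - 1);
		struct rte_mbuf **sw_ring = &vq->sw_ring[desc_idx];
		int i;

		/* refill sw_ring with fresh mbufs in one bulk allocation */
		if (rte_mempool_get_bulk(rxq->mpool, (void **)sw_ring,
				RTE_VIRTIO_VPMD_RX_REARM_THRESH) < 0)
			return;

		/* hand each new buffer to the device: the descriptor
		 * points just below the mbuf data room so the virtio-net
		 * header lands in the headroom */
		for (i = 0; i < RTE_VIRTIO_VPMD_RX_REARM_THRESH; i++) {
			struct vring_desc *desc =
				&vq->vq_ring.desc[desc_idx + i];

			desc->addr = VIRTIO_MBUF_ADDR(sw_ring[i], vq) +
				RTE_PKTMBUF_HEADROOM - vq->hw->vtnet_hdr_size;
			desc->len = sw_ring[i]->buf_len -
				RTE_PKTMBUF_HEADROOM + vq->hw->vtnet_hdr_size;
		}

		vq->vq_avail_idx += RTE_VIRTIO_VPMD_RX_REARM_THRESH;
		vq->vq_free_cnt -= RTE_VIRTIO_VPMD_RX_REARM_THRESH;

		/* publish the new entries to the backend */
		vq_update_avail_idx(vq);
	}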