[v1] vhost: fix mbuf allocation failures

Message ID 20200428095203.64935-1-Sivaprasad.Tummala@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series [v1] vhost: fix mbuf allocation failures

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-nxp-Performance success Performance Testing PASS
ci/travis-robot warning Travis build: failed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-testing success Testing PASS
ci/Intel-compilation fail Compilation issues

Commit Message

Sivaprasad Tummala April 28, 2020, 9:52 a.m. UTC
  vhost buffer allocation is successful for packets that fit
into a linear buffer. If it fails, vhost library is expected
to drop the current buffer descriptor and skip to the next.

The patch fixes the error scenario by skipping to next descriptor.
Note: Drop counters are not currently supported.

Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
---
 lib/librte_vhost/virtio_net.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
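For context, a caller-side sketch (editorial, not part of the patch): rte_vhost_dequeue_burst() is the public API that ends up in virtio_dev_tx_split(), and with this fix the only visible effect of a drop is a smaller return count, since, as the note says, drop counters are not currently supported. The helper name and burst size below are invented for illustration.

#include <rte_mbuf.h>
#include <rte_vhost.h>

#define MAX_PKT_BURST 32	/* arbitrary burst size for the example */

/* Drain one vhost virtqueue; nb_rx excludes any packets the library
 * dropped internally because an mbuf could not be allocated. */
static void
drain_vhost_queue(int vid, uint16_t queue_id, struct rte_mempool *mbuf_pool)
{
	struct rte_mbuf *pkts[MAX_PKT_BURST];
	uint16_t nb_rx, i;

	nb_rx = rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool,
					pkts, MAX_PKT_BURST);
	for (i = 0; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);	/* process/forward instead of freeing */
}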
  

Comments

Maxime Coquelin April 29, 2020, 8:43 a.m. UTC | #1
Hi Sivaprasad,

On 4/28/20 11:52 AM, Sivaprasad Tummala wrote:
> vhost buffer allocation is successful for packets that fit
> into a linear buffer. If it fails, vhost library is expected
> to drop the current buffer descriptor and skip to the next.
> 
> The patch fixes the error scenario by skipping to next descriptor.
> Note: Drop counters are not currently supported.

Fixes tag is missing here, and stable@dpdk.org should be cc'ed if
necessary.
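For reference, the tags being asked for usually take this form in DPDK commit messages (the hash is a placeholder here; it must point at the commit that introduced the issue):

Fixes: <12-char commit hash> ("subject of the offending commit")
Cc: stable@dpdk.org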

> Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
> ---
>  lib/librte_vhost/virtio_net.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 37c47c7dc..b0d3a85c2 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1688,6 +1688,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  {

You only fix split ring path, but not packed ring.

>  	uint16_t i;
>  	uint16_t free_entries;
> +	uint16_t dropped = 0;
>  
>  	if (unlikely(dev->dequeue_zero_copy)) {
>  		struct zcopy_mbuf *zmbuf, *next;
> @@ -1751,8 +1752,19 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  			update_shadow_used_ring_split(vq, head_idx, 0);
>  
>  		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
> -		if (unlikely(pkts[i] == NULL))
> +		if (unlikely(pkts[i] == NULL)) {
> +			/*
> +			 * mbuf allocation fails for jumbo packets with
> +			 * linear buffer flag set. Drop this packet and
> +			 * proceed with the next available descriptor to
> +			 * avoid HOL blocking
> +			 */
> +			VHOST_LOG_DATA(WARNING,
> +				"Failed to allocate memory for mbuf. Packet dropped!\n");

I think we need better logging, otherwise it is going to flood the log
file quite rapidly if the issue happens. Either some rate-limited logging
or warn-once would be better.

The warning message could also be improved, because when using linear
buffers, one would expect the mbuf size to handle a jumbo frame. So it
should differentiate the two cases: the mbuf pool being exhausted and the
mbuf being too small to hold the frame.

> +			dropped += 1;
> +			i++;
>  			break;
> +		}
>  
>  		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
>  				mbuf_pool);
> @@ -1796,7 +1808,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		}
>  	}
>  
> -	return i;
> +	return (i - dropped);
>  }
>  
>  static __rte_always_inline int
>
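On the rate-limited / warn-once logging suggested above, a minimal editorial sketch of the warn-once shape (plain C; fprintf stands in for VHOST_LOG_DATA, and the static flag and function name are assumptions, not what the maintainer prescribed):

#include <stdbool.h>
#include <stdio.h>

/* Sketch of a warn-once log for the allocation failure path. */
static void
warn_mbuf_alloc_failure_once(void)
{
	static bool warned;	/* per-process: log the condition only once */

	if (!warned) {
		warned = true;
		fprintf(stderr,
			"VHOST_DATA: failed to allocate mbuf, packets will be dropped (further warnings suppressed)\n");
	}
	/* A rate-limited variant would instead keep a counter and log only
	 * every Nth failure, or log again after some time has elapsed. */
}

int main(void)
{
	warn_mbuf_alloc_failure_once();
	warn_mbuf_alloc_failure_once();	/* silent on the second failure */
	return 0;
}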
  
Flavio Leitner April 29, 2020, 5:35 p.m. UTC | #2
On Wed, Apr 29, 2020 at 10:43:01AM +0200, Maxime Coquelin wrote:
> Hi Sivaprasad,
> 
> On 4/28/20 11:52 AM, Sivaprasad Tummala wrote:
> > vhost buffer allocation is successful for packets that fit
> > into a linear buffer. If it fails, vhost library is expected
> > to drop the current buffer descriptor and skip to the next.
> > 
> > The patch fixes the error scenario by skipping to next descriptor.
> > Note: Drop counters are not currently supported.

In that case shouldn't we continue to process the ring?

Also, don't we have the same issue with copy_desc_to_mbuf() 
and get_zmbuf()?

fbl

> Fixes tag is missing here, and stable@dpdk.org should be cc'ed if
> necessary.
> 
> > Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
> > ---
> >  lib/librte_vhost/virtio_net.c | 16 ++++++++++++++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 37c47c7dc..b0d3a85c2 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -1688,6 +1688,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  {
> 
> You only fix split ring path, but not packed ring.
> 
> >  	uint16_t i;
> >  	uint16_t free_entries;
> > +	uint16_t dropped = 0;
> >  
> >  	if (unlikely(dev->dequeue_zero_copy)) {
> >  		struct zcopy_mbuf *zmbuf, *next;
> > @@ -1751,8 +1752,19 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  			update_shadow_used_ring_split(vq, head_idx, 0);
> >  
> >  		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
> > -		if (unlikely(pkts[i] == NULL))
> > +		if (unlikely(pkts[i] == NULL)) {
> > +			/*
> > +			 * mbuf allocation fails for jumbo packets with
> > +			 * linear buffer flag set. Drop this packet and
> > +			 * proceed with the next available descriptor to
> > +			 * avoid HOL blocking
> > +			 */
> > +			VHOST_LOG_DATA(WARNING,
> > +				"Failed to allocate memory for mbuf. Packet dropped!\n");
> 
> I think we need better logging, otherwise it is going to flood the log
> file quite rapidly if the issue happens. Either some rate-limited logging
> or warn-once would be better.
> 
> The warning message could also be improved, because when using linear
> buffers, one would expect the mbuf size to handle a jumbo frame. So it
> should differentiate the two cases: the mbuf pool being exhausted and the
> mbuf being too small to hold the frame.
> 
> > +			dropped += 1;
> > +			i++;
> >  			break;
> > +		}
> >  
> >  		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
> >  				mbuf_pool);
> > @@ -1796,7 +1808,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  		}
> >  	}
> >  
> > -	return i;
> > +	return (i - dropped);
> >  }
> >  
> >  static __rte_always_inline int
> > 
>
  
Sivaprasad Tummala April 30, 2020, 7:13 a.m. UTC | #3
Hi Flavio,

Thanks for your comments.

snipped

> > The patch fixes the error scenario by skipping to next descriptor.
> > Note: Drop counters are not currently supported.
>
> In that case shouldn't we continue to process the ring?

Yes, we are updating the loop index and following the required clean-up.

> Also, don't we have the same issue with copy_desc_to_mbuf()

Thank you. Will update in the V2 patch.

> and get_zmbuf()?

This patch is not targeted for zero-copy cases.

> fbl

snipped
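To make the loop-index and drop accounting mentioned in the reply concrete, a small editorial sketch (plain C, independent of DPDK; names are invented). With the patch, a failed allocation still consumes its descriptor -- in the non-zero-copy path it was already added to the shadow used ring -- so the guest gets the descriptor back while the caller only receives the successfully filled mbufs:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the dequeue loop: alloc_ok[] plays the role
 * of virtio_dev_pktmbuf_alloc() succeeding or failing. */
static uint16_t
dequeue_accounting(const int *alloc_ok, uint16_t count)
{
	uint16_t i;
	uint16_t dropped = 0;

	for (i = 0; i < count; i++) {
		/* In the real path the descriptor has already been pushed to
		 * the shadow used ring at this point, so the guest gets it
		 * back even when allocation fails below. */
		if (!alloc_ok[i]) {
			dropped += 1;
			i++;	/* account for the consumed descriptor */
			break;
		}
		/* copy_desc_to_mbuf() would fill pkts[i] here */
	}

	/* i descriptors were consumed; only (i - dropped) mbufs are handed
	 * back to the caller. */
	return i - dropped;
}

int main(void)
{
	const int alloc_ok[4] = { 1, 1, 0, 1 };	/* third allocation fails */

	printf("%u packets returned\n", dequeue_accounting(alloc_ok, 4)); /* prints 2 */
	return 0;
}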
  

Patch

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 37c47c7dc..b0d3a85c2 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1688,6 +1688,7 @@  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	uint16_t i;
 	uint16_t free_entries;
+	uint16_t dropped = 0;
 
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
@@ -1751,8 +1752,19 @@  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-		if (unlikely(pkts[i] == NULL))
+		if (unlikely(pkts[i] == NULL)) {
+			/*
+			 * mbuf allocation fails for jumbo packets with
+			 * linear buffer flag set. Drop this packet and
+			 * proceed with the next available descriptor to
+			 * avoid HOL blocking
+			 */
+			VHOST_LOG_DATA(WARNING,
+				"Failed to allocate memory for mbuf. Packet dropped!\n");
+			dropped += 1;
+			i++;
 			break;
+		}
 
 		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
 				mbuf_pool);
@@ -1796,7 +1808,7 @@  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 	}
 
-	return i;
+	return (i - dropped);
 }
 
 static __rte_always_inline int