[v1,1/2] vdpa/mlx5: workaround FW first completion in start

Message ID 20211015134319.1664761-1-xuemingl@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series [v1,1/2] vdpa/mlx5: workaround FW first completion in start

Checks

Context Check Description
ci/checkpatch success coding style OK

Commit Message

Xueming Li Oct. 15, 2021, 1:43 p.m. UTC
  After a vDPA application restart, qemu restores the VQ with the
used and available indexes, and a new incoming packet triggers the
virtio driver to handle buffers. Under heavy traffic there is no
available buffer for the firmware to receive new packets, so no Rx
interrupt is generated and the driver is stuck waiting endlessly
for an interrupt.

As a firmware workaround, this patch sends a notification after VQ
setup to ask the driver to handle the completed buffers and refill
new buffers.

Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 4 ++++
 1 file changed, 4 insertions(+)
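
The workaround boils down to a single eventfd kick on the virtqueue's
interrupt callfd. Below is a minimal, self-contained sketch of that
mechanism, assuming a stand-alone eventfd in place of the callfd that
the vhost-user front-end (qemu) hands to the driver; the kick_guest()
helper and the demo main() are illustrative and not part of the patch.

/*
 * Minimal sketch of the eventfd notification used by the workaround.
 * The eventfd created here stands in for the interrupt callfd that
 * qemu passes to the vDPA driver over vhost-user.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Illustrative helper: mirrors the hunk added by the patch. */
static void
kick_guest(int callfd)
{
	/* Writing a non-zero value wakes whoever waits on the eventfd,
	 * i.e. it injects the interrupt that asks the virtio driver to
	 * process used buffers and refill the available ring.
	 */
	if (callfd != -1)
		eventfd_write(callfd, (eventfd_t)1);
}

int
main(void)
{
	eventfd_t value = 0;
	int callfd = eventfd(0, EFD_NONBLOCK);

	if (callfd < 0) {
		perror("eventfd");
		return 1;
	}
	kick_guest(callfd);
	/* In the real setup the interrupt handler consumes the event;
	 * here we just read the counter back to show the kick landed.
	 */
	if (eventfd_read(callfd, &value) == 0)
		printf("kick delivered, counter=%llu\n",
		       (unsigned long long)value);
	close(callfd);
	return 0;
}

In the patch itself the same eventfd_write() is issued on
virtq->eqp.cq.callfd right after the virtq is (re)created, so a driver
restored with stale indexes receives one unsolicited interrupt to
restart buffer processing.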
  

Comments

Maxime Coquelin Oct. 15, 2021, 1:57 p.m. UTC | #1
On 10/15/21 15:43, Xueming Li wrote:
> After a vDPA application restart, qemu restores the VQ with the
> used and available indexes, and a new incoming packet triggers the
> virtio driver to handle buffers. Under heavy traffic there is no
> available buffer for the firmware to receive new packets, so no Rx
> interrupt is generated and the driver is stuck waiting endlessly
> for an interrupt.
> 
> As a firmware workaround, this patch sends a notification after VQ
> setup to ask the driver to handle the completed buffers and refill
> new buffers.
> 

As I mentioned in my reply to the v1, I would expect a Fixes tag;
it would make the downstream maintainers' lives easier.

Maybe pointing to the commit introducing the function would help.
This is not ideal, but otherwise the risk is that your patch gets
missed by the stable maintainers.

Thanks!
Maxime

> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> Reviewed-by: Matan Azrad <matan@nvidia.com>
> ---
>   drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> index f530646058f..71470d23d9e 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> @@ -4,6 +4,7 @@
>   #include <string.h>
>   #include <unistd.h>
>   #include <sys/mman.h>
> +#include <sys/eventfd.h>
>   
>   #include <rte_malloc.h>
>   #include <rte_errno.h>
> @@ -367,6 +368,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
>   		goto error;
>   	}
>   	virtq->stopped = false;
> +	/* Initial notification to ask qemu handling completed buffers. */
> +	if (virtq->eqp.cq.callfd != -1)
> +		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
>   	DRV_LOG(DEBUG, "vid %u virtq %u was created successfully.", priv->vid,
>   		index);
>   	return 0;
>
  
Xueming Li Oct. 15, 2021, 2:51 p.m. UTC | #2
On Fri, 2021-10-15 at 15:57 +0200, Maxime Coquelin wrote:
> 
> On 10/15/21 15:43, Xueming Li wrote:
> > After a vDPA application restart, qemu restores the VQ with the
> > used and available indexes, and a new incoming packet triggers
> > the virtio driver to handle buffers. Under heavy traffic there is
> > no available buffer for the firmware to receive new packets, so
> > no Rx interrupt is generated and the driver is stuck waiting
> > endlessly for an interrupt.
> > 
> > As a firmware workaround, this patch sends a notification after
> > VQ setup to ask the driver to handle the completed buffers and
> > refill new buffers.
> > 
> 
> As I mentioned in my reply to the v1, I would expect a Fixes tag;
> it would make the downstream maintainers' lives easier.
> 
> Maybe pointing to the commit introducing the function would help.
> This is not ideal, but otherwise the risk is that your patch gets
> missed by the stable maintainers.

Yes, my bad, a Fixes tag should be helpful to identify which LTS
releases need it, thanks!

> 
> Thanks!
> Maxime
> 
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > Reviewed-by: Matan Azrad <matan@nvidia.com>
> > ---
> >   drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 4 ++++
> >   1 file changed, 4 insertions(+)
> > 
> > diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> > index f530646058f..71470d23d9e 100644
> > --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> > +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
> > @@ -4,6 +4,7 @@
> >   #include <string.h>
> >   #include <unistd.h>
> >   #include <sys/mman.h>
> > +#include <sys/eventfd.h>
> >   
> >   #include <rte_malloc.h>
> >   #include <rte_errno.h>
> > @@ -367,6 +368,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
> >   		goto error;
> >   	}
> >   	virtq->stopped = false;
> > +	/* Initial notification to ask qemu handling completed buffers. */
> > +	if (virtq->eqp.cq.callfd != -1)
> > +		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
> >   	DRV_LOG(DEBUG, "vid %u virtq %u was created successfully.", priv->vid,
> >   		index);
> >   	return 0;
> > 
>
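
For reference, the tag Maxime asks for follows the usual DPDK
convention of naming the commit that introduced the code, alongside
the stable CC already present in the patch. The hash and title below
are placeholders, not the actual introducing commit, which is not
identified in this thread:

Fixes: 123456789abc ("vdpa/mlx5: <title of the commit that added the virtq setup>")
Cc: stable@dpdk.org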
  

Patch

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058f..71470d23d9e 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -4,6 +4,7 @@ 
 #include <string.h>
 #include <unistd.h>
 #include <sys/mman.h>
+#include <sys/eventfd.h>
 
 #include <rte_malloc.h>
 #include <rte_errno.h>
@@ -367,6 +368,9 @@  mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 		goto error;
 	}
 	virtq->stopped = false;
+	/* Initial notification to ask qemu handling completed buffers. */
+	if (virtq->eqp.cq.callfd != -1)
+		eventfd_write(virtq->eqp.cq.callfd, (eventfd_t)1);
 	DRV_LOG(DEBUG, "vid %u virtq %u was created successfully.", priv->vid,
 		index);
 	return 0;