[1/4] vdpa/mlx5: move virtual doorbell alloc to probe

Message ID 1585059877-2369-2-git-send-email-asafp@mellanox.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series vdpa/mlx5: support direct notification

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-intel-Performance success Performance Testing PASS
ci/Intel-compilation success Compilation OK
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-testing success Testing PASS

Commit Message

Asaf Penso March 24, 2020, 2:24 p.m. UTC
  From: Matan Azrad <matan@mellanox.com>

The configure and close operations may be called many times by the
vhost library, depending on the virtio connections in the guest.

VAR is the device memory space for the virtio queue doorbells.
Each VAR page can be shared by more than one queue, while its owner
must synchronize the writes to it.

The mlx5 driver allocates a single VAR page for all its queues.

Therefore, it is better to allocate it at device probe time instead of
creating and destroying it per new connection.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       | 14 +++++++++++++-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  9 ---------
 2 files changed, 13 insertions(+), 10 deletions(-)
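
For context, the resulting lifecycle can be sketched as below. This is an
illustrative outline only, not the driver code: the mlx5_glue->dv_alloc_var()
and mlx5_glue->dv_free_var() calls are the ones used in the diff, while the
example_probe()/example_remove() wrappers and their error handling are
simplified assumptions.

 /* Illustrative sketch: the VAR page now lives for the whole device
  * lifetime (probe -> remove); configure/close in between only
  * mmap()/munmap() the doorbell page.
  */
 static int
 example_probe(struct ibv_context *ctx, struct mlx5_vdpa_priv *priv)
 {
 	/* Allocate the single shared VAR page once, at probe time. */
 	priv->var = mlx5_glue->dv_alloc_var(ctx, 0);
 	if (priv->var == NULL) {
 		DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
 		return -errno;
 	}
 	return 0;
 }

 static void
 example_remove(struct mlx5_vdpa_priv *priv)
 {
 	/* Free the VAR page only on device removal, not per connection. */
 	if (priv->var != NULL) {
 		mlx5_glue->dv_free_var(priv->var);
 		priv->var = NULL;
 	}
 }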
  

Comments

Maxime Coquelin April 15, 2020, 9:44 a.m. UTC | #1
On 3/24/20 3:24 PM, Asaf Penso wrote:
> From: Matan Azrad <matan@mellanox.com>
> 
> The configure and close operations may be called many times by the
> vhost library, depending on the virtio connections in the guest.
> 
> VAR is the device memory space for the virtio queue doorbells.
> Each VAR page can be shared by more than one queue, while its owner
> must synchronize the writes to it.
> 
> The mlx5 driver allocates a single VAR page for all its queues.
> 
> Therefore, it is better to allocate it at device probe time instead of
> creating and destroying it per new connection.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa.c       | 14 +++++++++++++-
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  9 ---------
>  2 files changed, 13 insertions(+), 10 deletions(-)

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
  

Patch

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 97d914a..5542c29 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -447,6 +447,11 @@ 
 	priv->ctx = ctx;
 	priv->dev_addr.pci_addr = pci_dev->addr;
 	priv->dev_addr.type = PCI_ADDR;
+	priv->var = mlx5_glue->dv_alloc_var(ctx, 0);
+	if (!priv->var) {
+		DRV_LOG(ERR, "Failed to allocate VAR %u.\n", errno);
+		goto error;
+	}
 	priv->id = rte_vdpa_register_device(&priv->dev_addr, &mlx5_vdpa_ops);
 	if (priv->id < 0) {
 		DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -461,8 +466,11 @@ 
 	return 0;
 
 error:
-	if (priv)
+	if (priv) {
+		if (priv->var)
+			mlx5_glue->dv_free_var(priv->var);
 		rte_free(priv);
+	}
 	if (ctx)
 		mlx5_glue->close_device(ctx);
 	return -rte_errno;
@@ -499,6 +507,10 @@ 
 	if (found) {
 		if (priv->configured)
 			mlx5_vdpa_dev_close(priv->vid);
+		if (priv->var) {
+			mlx5_glue->dv_free_var(priv->var);
+			priv->var = NULL;
+		}
 		mlx5_glue->close_device(priv->ctx);
 		rte_free(priv);
 	}
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 2312331..6390385 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -105,10 +105,6 @@ 
 		claim_zero(munmap(priv->virtq_db_addr, priv->var->length));
 		priv->virtq_db_addr = NULL;
 	}
-	if (priv->var) {
-		mlx5_glue->dv_free_var(priv->var);
-		priv->var = NULL;
-	}
 	priv->features = 0;
 }
 
@@ -343,11 +339,6 @@ 
 		DRV_LOG(ERR, "Failed to configure negotiated features.");
 		return -1;
 	}
-	priv->var = mlx5_glue->dv_alloc_var(priv->ctx, 0);
-	if (!priv->var) {
-		DRV_LOG(ERR, "Failed to allocate VAR %u.\n", errno);
-		return -1;
-	}
 	/* Always map the entire page. */
 	priv->virtq_db_addr = mmap(NULL, priv->var->length, PROT_READ |
 				   PROT_WRITE, MAP_SHARED, priv->ctx->cmd_fd,