vdpa/mlx5: fix queue enable drain CQ

Message ID 20240125031755.657102-1-yajunw@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series vdpa/mlx5: fix queue enable drain CQ

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/github-robot: build success github build: passed
ci/intel-Functional success Functional PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS

Commit Message

Yajun Wu Jan. 25, 2024, 3:17 a.m. UTC
For the case of `ethtool -L eth0 combined xxx` run inside the VM, the VQ is
disabled and re-enabled without the device being closed. In that case, the
CQ must be drained before the event QP is reused/reset.

Fixes: 24969c7b62 ("vdpa/mlx5: reuse event queues")
Cc: stable@dpdk.org

Signed-off-by: Yajun Wu <yajunw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)
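
To make the failure mode concrete, here is a small self-contained C model
(the types and names `toy_cq`, `drain`, and `drain_and_reset` are invented
for illustration; they are not the driver's `mlx5_vdpa_cq` or event-QP code)
showing why a completion queue reused across a disable/enable cycle has to be
drained and have its indices reset first: otherwise stale entries look like
fresh completions.

    /*
     * Toy model, hypothetical types/names only: a tiny completion ring
     * with consumer/producer indices. Reusing the ring after a
     * disable/enable cycle without draining leaves stale entries that
     * would be treated as fresh completions.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define RING_SZ 4

    struct toy_cq {
    	uint16_t cqes[RING_SZ];  /* completed WQE counters */
    	uint16_t ci;             /* consumer index */
    	uint16_t pi;             /* producer index */
    };

    /* Consume everything currently in the ring. */
    static void drain(struct toy_cq *cq)
    {
    	while (cq->ci != cq->pi)
    		cq->ci++;
    }

    /* Drain, then reset the ring so a reused queue starts clean. */
    static void drain_and_reset(struct toy_cq *cq)
    {
    	drain(cq);
    	memset(cq->cqes, 0xff, sizeof(cq->cqes)); /* like wqe_counter = UINT16_MAX */
    	cq->ci = 0;
    	cq->pi = 0;
    }

    int main(void)
    {
    	struct toy_cq cq = {0};

    	/* Queue in use: two completions posted, none consumed yet. */
    	cq.cqes[cq.pi % RING_SZ] = 1; cq.pi++;
    	cq.cqes[cq.pi % RING_SZ] = 2; cq.pi++;

    	/* VQ disabled and re-enabled without device close. */
    	printf("without drain: %u stale completion(s) left\n",
    	       (unsigned)(cq.pi - cq.ci));

    	drain_and_reset(&cq);
    	printf("with drain+reset: %u completion(s) left\n",
    	       (unsigned)(cq.pi - cq.ci));
    	return 0;
    }

This mirrors, in spirit, what the new mlx5_vdpa_drain_cq_one() helper in the
patch does for the real CQ: complete outstanding CQEs, set the first CQE's
wqe_counter back to UINT16_MAX, zero qp_pi, and re-arm the CQ if needed.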
  

Comments

Maxime Coquelin Feb. 6, 2024, 1:17 p.m. UTC | #1
On 1/25/24 04:17, Yajun Wu wrote:
> For the case of `ethtool -L eth0 combined xxx` run inside the VM, the VQ is
> disabled and re-enabled without the device being closed. In that case, the
> CQ must be drained before the event QP is reused/reset.
> 
> Fixes: 24969c7b62 ("vdpa/mlx5: reuse event queues")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Yajun Wu <yajunw@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> ---
>   drivers/vdpa/mlx5/mlx5_vdpa_event.c | 29 +++++++++++++++++++----------
>   1 file changed, 19 insertions(+), 10 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
  
Maxime Coquelin Feb. 6, 2024, 2:06 p.m. UTC | #2
On 2/6/24 14:17, Maxime Coquelin wrote:
> 
> 
> On 1/25/24 04:17, Yajun Wu wrote:
>> For the case of `ethtool -L eth0 combined xxx` run inside the VM, the VQ is
>> disabled and re-enabled without the device being closed. In that case, the
>> CQ must be drained before the event QP is reused/reset.
>>
>> Fixes: 24969c7b62 ("vdpa/mlx5: reuse event queues")

No need to resend, but the Fixes SHA1 should be 12 chars long, not 10.
As a helper, you can add the below alias to your .gitconfig:

[alias]
	fixline = log -1 --abbrev=12 --format='Fixes: %h (\"%s\")%nCc: %ae'

Then, you can just use it like this:

$ git fixline 24969c7b6224afc48751d94fc0152fca8b6645b1
Fixes: 24969c7b6224 ("vdpa/mlx5: reuse event queues")
Cc: yajunw@nvidia.com

>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Yajun Wu <yajunw@nvidia.com>
>> Acked-by: Matan Azrad <matan@nvidia.com>
>> ---
>>   drivers/vdpa/mlx5/mlx5_vdpa_event.c | 29 +++++++++++++++++++----------
>>   1 file changed, 19 insertions(+), 10 deletions(-)
>>
> 
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> Thanks,
> Maxime
>
  
Maxime Coquelin Feb. 6, 2024, 2:58 p.m. UTC | #3
On 1/25/24 04:17, Yajun Wu wrote:
> For the case of `ethtool -L eth0 combined xxx` run inside the VM, the VQ is
> disabled and re-enabled without the device being closed. In that case, the
> CQ must be drained before the event QP is reused/reset.
> 
> Fixes: 24969c7b62 ("vdpa/mlx5: reuse event queues")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Yajun Wu <yajunw@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> ---
>   drivers/vdpa/mlx5/mlx5_vdpa_event.c | 29 +++++++++++++++++++----------
>   1 file changed, 19 insertions(+), 10 deletions(-)
> 

Applied to next-virtio tree.

Thanks,
Maxime
  

Patch

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 9557c1042e..32430614d5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -244,22 +244,30 @@  mlx5_vdpa_queues_complete(struct mlx5_vdpa_priv *priv)
 	return max;
 }
 
+static void
+mlx5_vdpa_drain_cq_one(struct mlx5_vdpa_priv *priv,
+	struct mlx5_vdpa_virtq *virtq)
+{
+	struct mlx5_vdpa_cq *cq = &virtq->eqp.cq;
+
+	mlx5_vdpa_queue_complete(cq);
+	if (cq->cq_obj.cq) {
+		cq->cq_obj.cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
+		virtq->eqp.qp_pi = 0;
+		if (!cq->armed)
+			mlx5_vdpa_cq_arm(priv, cq);
+	}
+}
+
 void
 mlx5_vdpa_drain_cq(struct mlx5_vdpa_priv *priv)
 {
+	struct mlx5_vdpa_virtq *virtq;
 	unsigned int i;
 
 	for (i = 0; i < priv->caps.max_num_virtio_queues; i++) {
-		struct mlx5_vdpa_cq *cq = &priv->virtqs[i].eqp.cq;
-
-		mlx5_vdpa_queue_complete(cq);
-		if (cq->cq_obj.cq) {
-			cq->cq_obj.cqes[0].wqe_counter =
-				rte_cpu_to_be_16(UINT16_MAX);
-			priv->virtqs[i].eqp.qp_pi = 0;
-			if (!cq->armed)
-				mlx5_vdpa_cq_arm(priv, cq);
-		}
+		virtq = &priv->virtqs[i];
+		mlx5_vdpa_drain_cq_one(priv, virtq);
 	}
 }
 
@@ -632,6 +640,7 @@  mlx5_vdpa_event_qp_prepare(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	if (eqp->cq.cq_obj.cq != NULL && log_desc_n == eqp->cq.log_desc_n) {
 		/* Reuse existing resources. */
 		eqp->cq.callfd = callfd;
+		mlx5_vdpa_drain_cq_one(priv, virtq);
 		/* FW will set event qp to error state in q destroy. */
 		if (reset && !mlx5_vdpa_qps2rst2rts(eqp))
 			rte_write32(rte_cpu_to_be_32(RTE_BIT32(log_desc_n)),
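
For readers tracing the second hunk, the sketch below shows the intended
order of operations when a virtqueue is re-enabled and the existing event
QP/CQ is reused. All names here (event_qp_prepare_stub, drain_cq_one_stub,
the queue toggling in main()) are stand-ins invented for illustration; the
real entry points and signatures are the ones shown in the diff above.

    /*
     * Control-flow sketch only: stub functions with hypothetical names
     * mirroring the order of operations after this patch. The real driver
     * calls mlx5_vdpa_drain_cq_one() from mlx5_vdpa_event_qp_prepare()
     * when it detects that the existing CQ can be reused.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static void drain_cq_one_stub(void)
    {
    	/* Consume stale completions, reset qp_pi, re-arm the CQ. */
    	printf("  drain CQ, reset indices, re-arm\n");
    }

    static void event_qp_prepare_stub(bool reuse_existing_cq)
    {
    	if (reuse_existing_cq) {
    		printf("  reuse existing CQ/QP resources\n");
    		drain_cq_one_stub();      /* new step added by this patch */
    	} else {
    		printf("  allocate fresh CQ/QP resources\n");
    	}
    	printf("  bring the event QP back to a ready state\n");
    }

    int main(void)
    {
    	/* Guest runs `ethtool -L eth0 combined N`: VQs toggle without close. */
    	printf("virtqueue disable\n");
    	printf("virtqueue enable\n");
    	event_qp_prepare_stub(true);
    	return 0;
    }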