From patchwork Thu Sep 10 07:20:34 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matan Azrad
X-Patchwork-Id: 77119
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Matan Azrad
To: Maxime Coquelin
Cc: dev@dpdk.org, stable@dpdk.org
Date: Thu, 10 Sep 2020 07:20:34 +0000
Message-Id: <1599722434-432403-1-git-send-email-matan@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [dpdk-dev] [PATCH] vdpa/mlx5: fix completion queue polling
List-Id: DPDK patches and discussions
Sender: "dev" dev-bounces@dpdk.org

The CQ polling is done in order to notify the guest about new traffic
bursts and to release FW resources for the management of the next
bursts. When the HW is faster than the SW, it may happen that all the
FW resources are busy in SW because of late polling. In this case, due
to the wrong WQE counter masking, the number of completions is
calculated as 0 while the queue is actually full.
Change the WQE counter masking to 16-bit width instead of the CQ size
mask, as defined by the CQE format.

Fixes: c5f714e50b0e ("vdpa/mlx5: optimize completion queue poll")
Cc: stable@dpdk.org

Signed-off-by: Matan Azrad
Acked-by: Xueming Li
Reviewed-by: Maxime Coquelin
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 5a2d4fb..2672935 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -172,7 +172,7 @@
 	cq->callfd = callfd;
 	/* Init CQ to ones to be in HW owner in the start. */
 	cq->cqes[0].op_own = MLX5_CQE_OWNER_MASK;
-	cq->cqes[0].wqe_counter = rte_cpu_to_be_16(cq_size - 1);
+	cq->cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -187,7 +187,6 @@
 	struct mlx5_vdpa_event_qp *eqp =
 				container_of(cq, struct mlx5_vdpa_event_qp, cq);
 	const unsigned int cq_size = 1 << cq->log_desc_n;
-	const unsigned int cq_mask = cq_size - 1;
 	union {
 		struct {
 			uint16_t wqe_counter;
@@ -196,13 +195,13 @@
 		};
 		uint32_t word;
 	} last_word;
-	uint16_t next_wqe_counter = cq->cq_ci & cq_mask;
+	uint16_t next_wqe_counter = cq->cq_ci;
 	uint16_t cur_wqe_counter;
 	uint16_t comp;

 	last_word.word = rte_read32(&cq->cqes[0].wqe_counter);
 	cur_wqe_counter = rte_be_to_cpu_16(last_word.wqe_counter);
-	comp = (cur_wqe_counter + 1u - next_wqe_counter) & cq_mask;
+	comp = cur_wqe_counter + (uint16_t)1 - next_wqe_counter;
 	if (comp) {
 		cq->cq_ci += comp;
 		MLX5_ASSERT(!!(cq->cq_ci & cq_size) ==