From patchwork Fri Feb 10 17:58:36 2023
X-Patchwork-Submitter: "Chautru, Nicolas"
X-Patchwork-Id: 123703
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Nicolas Chautru
To: dev@dpdk.org, maxime.coquelin@redhat.com
Cc: hernan.vargas@intel.com, stable@dpdk.org, Nicolas Chautru
Subject: [PATCH v2 4/9] baseband/acc: prevent to dequeue more than requested
Date: Fri, 10 Feb 2023 17:58:36 +0000
Message-Id: <20230210175841.303450-5-nicolas.chautru@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230210175841.303450-1-nicolas.chautru@intel.com>
References: <20230209221929.265059-2-nicolas.chautru@intel.com>
 <20230210175841.303450-1-nicolas.chautru@intel.com>

Add a guard for the corner case where encoder operations muxed onto a
single descriptor would cause more operations to be dequeued than were
requested.

Fixes: e640f6cdfa84 ("baseband/acc200: add LDPC processing")
Cc: stable@dpdk.org

Signed-off-by: Nicolas Chautru
Reviewed-by: Maxime Coquelin
---
 drivers/baseband/acc/rte_vrb_pmd.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)
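Note for reviewers: the caller-visible contract being enforced is that a
dequeue burst never returns more operations than the caller asked for, even
when several code blocks were muxed onto one descriptor. Below is a minimal
caller-side sketch of that contract; drain_enc_burst() and BURST_SIZE are
hypothetical names, not part of this patch, and only the standard
rte_bbdev_dequeue_enc_ops() API is assumed.

#include <stdint.h>
#include <rte_bbdev.h>
#include <rte_bbdev_op.h>

#define BURST_SIZE 32	/* hypothetical burst size chosen by the application */

/* Dequeue up to BURST_SIZE completed encode ops from one queue.
 * dev_id and queue_id are assumed to be already configured and started;
 * ops[] must hold at least BURST_SIZE entries.
 */
static uint16_t
drain_enc_burst(uint16_t dev_id, uint16_t queue_id,
		struct rte_bbdev_enc_op **ops)
{
	uint16_t nb = rte_bbdev_dequeue_enc_ops(dev_id, queue_id, ops, BURST_SIZE);

	/* With the guard added by this patch, nb <= BURST_SIZE always holds,
	 * even when the last descriptor carries several muxed CBs, so the
	 * caller's ops[] array cannot be overrun.
	 */
	return nb;
}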
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index b1134f244d..a7d0b1e33c 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -2640,7 +2640,8 @@ vrb_enqueue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
 /* Dequeue one encode operations from device in CB mode. */
 static inline int
 vrb_dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
-		uint16_t *dequeued_ops, uint32_t *aq_dequeued, uint16_t *dequeued_descs)
+		uint16_t *dequeued_ops, uint32_t *aq_dequeued, uint16_t *dequeued_descs,
+		uint16_t max_requested_ops)
 {
 	union acc_dma_desc *desc, atom_desc;
 	union acc_dma_rsp_desc rsp;
@@ -2653,6 +2654,9 @@ vrb_dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 	desc = q->ring_addr + desc_idx;
 	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
 
+	if (*dequeued_ops + desc->req.numCBs > max_requested_ops)
+		return -1;
+
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
 		return -1;
@@ -2694,7 +2698,7 @@ vrb_dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 static inline int
 vrb_dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 		uint16_t *dequeued_ops, uint32_t *aq_dequeued,
-		uint16_t *dequeued_descs)
+		uint16_t *dequeued_descs, uint16_t max_requested_ops)
 {
 	union acc_dma_desc *desc, *last_desc, atom_desc;
 	union acc_dma_rsp_desc rsp;
@@ -2705,6 +2709,9 @@ vrb_dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 	desc = acc_desc_tail(q, *dequeued_descs);
 	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
 
+	if (*dequeued_ops + 1 > max_requested_ops)
+		return -1;
+
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
 		return -1;
@@ -2965,25 +2972,23 @@ vrb_dequeue_enc(struct rte_bbdev_queue_data *q_data,
 
 	cbm = op->turbo_enc.code_block_mode;
 
-	for (i = 0; i < num; i++) {
+	for (i = 0; i < avail; i++) {
 		if (cbm == RTE_BBDEV_TRANSPORT_BLOCK)
 			ret = vrb_dequeue_enc_one_op_tb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		else
 			ret = vrb_dequeue_enc_one_op_cb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		if (ret < 0)
 			break;
-		if (dequeued_ops >= num)
-			break;
 	}
 
 	q->aq_dequeued += aq_dequeued;
 	q->sw_ring_tail += dequeued_descs;
 
-	/* Update enqueue stats */
+	/* Update enqueue stats. */
 	q_data->queue_stats.dequeued_count += dequeued_ops;
 
 	return dequeued_ops;
@@ -3009,15 +3014,13 @@ vrb_dequeue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
 		if (cbm == RTE_BBDEV_TRANSPORT_BLOCK)
 			ret = vrb_dequeue_enc_one_op_tb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		else
 			ret = vrb_dequeue_enc_one_op_cb(q, &ops[dequeued_ops],
 					&dequeued_ops, &aq_dequeued,
-					&dequeued_descs);
+					&dequeued_descs, num);
 		if (ret < 0)
 			break;
-		if (dequeued_ops >= num)
-			break;
 	}
 
 	q->aq_dequeued += aq_dequeued;
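
For reference, the bounded-dequeue pattern that the two new guards implement
can be modelled on its own, independently of the PMD internals. The sketch
below is a simplified, hypothetical model: struct fake_desc and num_cbs stand
in for the real hardware descriptor and desc->req.numCBs, and the function is
not driver code.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for one ready descriptor; num_cbs plays the role of
 * desc->req.numCBs (several code blocks muxed onto one descriptor in CB mode,
 * one op per descriptor in TB mode).
 */
struct fake_desc {
	uint16_t num_cbs;
};

/* Simplified model of the guard added in vrb_dequeue_enc_one_op_cb() and
 * vrb_dequeue_enc_one_op_tb(): stop as soon as taking the next descriptor
 * would exceed the number of operations requested by the caller.
 */
static uint16_t
bounded_dequeue(const struct fake_desc *ring, uint16_t nb_ready,
		uint16_t max_requested_ops)
{
	uint16_t dequeued_ops = 0;
	uint16_t i;

	for (i = 0; i < nb_ready; i++) {
		if (dequeued_ops + ring[i].num_cbs > max_requested_ops)
			break;
		dequeued_ops += ring[i].num_cbs;
	}
	return dequeued_ops;
}

int
main(void)
{
	const struct fake_desc ring[] = { {4}, {4}, {4} };

	/* Requesting 10 ops returns only 8: dequeuing the third descriptor
	 * (4 more muxed CBs) would exceed the request.
	 */
	printf("%u\n", bounded_dequeue(ring, 3, 10));
	return 0;
}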