From patchwork Sat Aug 20 02:31:22 2022
X-Patchwork-Submitter: Hernan Vargas
X-Patchwork-Id: 115260
X-Patchwork-Delegate: gakhil@marvell.com
From: Hernan Vargas
To: dev@dpdk.org, gakhil@marvell.com, trix@redhat.com
Cc: nicolas.chautru@intel.com, qi.z.zhang@intel.com, Hernan Vargas
Subject: [PATCH v2 02/37] baseband/acc100: update ring availability calculation
Date: Fri, 19 Aug 2022 19:31:22 -0700
Message-Id: <20220820023157.189047-3-hernan.vargas@intel.com>
In-Reply-To: <20220820023157.189047-1-hernan.vargas@intel.com>
References: <20220820023157.189047-1-hernan.vargas@intel.com>

Refactor the queue availability computation to prevent the application
from dequeuing more than what may have been enqueued.
Signed-off-by: Hernan Vargas
---
 drivers/baseband/acc100/rte_acc100_pmd.c | 39 ++++++++++++++++--------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 7f698ec3d2..0598d33582 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -3465,13 +3465,27 @@ acc100_enqueue_queue_full(struct rte_bbdev_queue_data *q_data)
 	acc100_enqueue_status(q_data, RTE_BBDEV_ENQ_STATUS_QUEUE_FULL);
 }
 
+/* Number of available descriptors in the ring to enqueue */
+static uint32_t
+acc100_ring_avail_enq(struct acc100_queue *q)
+{
+	return (q->sw_ring_depth - 1 + q->sw_ring_tail - q->sw_ring_head) % q->sw_ring_depth;
+}
+
+/* Number of available descriptors in the ring to dequeue */
+static uint32_t
+acc100_ring_avail_deq(struct acc100_queue *q)
+{
+	return (q->sw_ring_depth + q->sw_ring_head - q->sw_ring_tail) % q->sw_ring_depth;
+}
+
 /* Enqueue encode operations for ACC100 device in CB mode. */
 static uint16_t
 acc100_enqueue_enc_cb(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_enc_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	int32_t avail = q->sw_ring_depth + q->sw_ring_tail - q->sw_ring_head;
+	int32_t avail = acc100_ring_avail_enq(q);
 	uint16_t i;
 	union acc100_dma_desc *desc;
 	int ret;
@@ -3531,7 +3545,7 @@ acc100_enqueue_ldpc_enc_cb(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_enc_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	int32_t avail = q->sw_ring_depth + q->sw_ring_tail - q->sw_ring_head;
+	int32_t avail = acc100_ring_avail_enq(q);
 	uint16_t i = 0;
 	union acc100_dma_desc *desc;
 	int ret, desc_idx = 0;
@@ -3588,7 +3602,7 @@ acc100_enqueue_enc_tb(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_enc_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	int32_t avail = q->sw_ring_depth + q->sw_ring_tail - q->sw_ring_head;
+	int32_t avail = acc100_ring_avail_enq(q);
 	uint16_t i, enqueued_cbs = 0;
 	uint8_t cbs_in_tb;
 	int ret;
@@ -3654,7 +3668,7 @@ acc100_enqueue_dec_cb(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_dec_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	int32_t avail = q->sw_ring_depth + q->sw_ring_tail - q->sw_ring_head;
+	int32_t avail = acc100_ring_avail_enq(q);
 	uint16_t i;
 	union acc100_dma_desc *desc;
 	int ret;
@@ -3711,7 +3725,7 @@ acc100_enqueue_ldpc_dec_tb(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_dec_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	int32_t avail = q->sw_ring_depth + q->sw_ring_tail - q->sw_ring_head;
+	int32_t avail = acc100_ring_avail_enq(q);
 	uint16_t i, enqueued_cbs = 0;
 	uint8_t cbs_in_tb;
 	int ret;
@@ -3746,7 +3760,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_dec_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	int32_t avail = q->sw_ring_depth + q->sw_ring_tail - q->sw_ring_head;
+	int32_t avail = acc100_ring_avail_enq(q);
 	uint16_t i;
 	union acc100_dma_desc *desc;
 	int ret;
@@ -3800,7 +3814,7 @@ acc100_enqueue_dec_tb(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_dec_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	int32_t avail = q->sw_ring_depth + q->sw_ring_tail - q->sw_ring_head;
+	int32_t avail = acc100_ring_avail_enq(q);
 	uint16_t i, enqueued_cbs = 0;
 	uint8_t cbs_in_tb;
 	int ret;
@@ -4179,12 +4193,13 @@ acc100_dequeue_enc(struct rte_bbdev_queue_data *q_data,
 {
 	struct acc100_queue *q = q_data->queue_private;
 	uint16_t dequeue_num;
-	uint32_t avail = q->sw_ring_head - q->sw_ring_tail;
+	uint32_t avail = acc100_ring_avail_deq(q);
 	uint32_t aq_dequeued = 0;
 	uint16_t i, dequeued_cbs = 0;
 	struct rte_bbdev_enc_op *op;
 	int ret;
-
+	if (avail == 0)
+		return 0;
 #ifdef RTE_LIBRTE_BBDEV_DEBUG
 	if (unlikely(ops == NULL || q == NULL)) {
 		rte_bbdev_log_debug("Unexpected undefined pointer");
@@ -4224,7 +4239,7 @@ acc100_dequeue_ldpc_enc(struct rte_bbdev_queue_data *q_data,
 		struct rte_bbdev_enc_op **ops, uint16_t num)
 {
 	struct acc100_queue *q = q_data->queue_private;
-	uint32_t avail = q->sw_ring_head - q->sw_ring_tail;
+	uint32_t avail = acc100_ring_avail_deq(q);
 	uint32_t aq_dequeued = 0;
 	uint16_t dequeue_num, i, dequeued_cbs = 0, dequeued_descs = 0;
 	int ret;
@@ -4264,7 +4279,7 @@ acc100_dequeue_dec(struct rte_bbdev_queue_data *q_data,
 {
 	struct acc100_queue *q = q_data->queue_private;
 	uint16_t dequeue_num;
-	uint32_t avail = q->sw_ring_head - q->sw_ring_tail;
+	uint32_t avail = acc100_ring_avail_deq(q);
 	uint32_t aq_dequeued = 0;
 	uint16_t i;
 	uint16_t dequeued_cbs = 0;
@@ -4309,7 +4324,7 @@ acc100_dequeue_ldpc_dec(struct rte_bbdev_queue_data *q_data,
 {
 	struct acc100_queue *q = q_data->queue_private;
 	uint16_t dequeue_num;
-	uint32_t avail = q->sw_ring_head - q->sw_ring_tail;
+	uint32_t avail = acc100_ring_avail_deq(q);
 	uint32_t aq_dequeued = 0;
 	uint16_t i;
 	uint16_t dequeued_cbs = 0;
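
For readers unfamiliar with the circular-ring arithmetic, the standalone sketch
below (not part of the patch) illustrates why the enqueue side keeps one slot
unused and why the dequeue side can never report more entries than were
enqueued. The demo_queue struct and the values in main() are hypothetical
stand-ins, not the driver's actual acc100_queue type.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the ring fields used by the new helpers. */
struct demo_queue {
	uint32_t sw_ring_depth;	/* total descriptors in the ring */
	uint32_t sw_ring_head;	/* advanced by enqueue */
	uint32_t sw_ring_tail;	/* advanced by dequeue */
};

/* Free slots for enqueue: one slot stays unused so that
 * head == tail unambiguously means "empty", never "full". */
static uint32_t
demo_ring_avail_enq(const struct demo_queue *q)
{
	return (q->sw_ring_depth - 1 + q->sw_ring_tail - q->sw_ring_head)
			% q->sw_ring_depth;
}

/* Descriptors pending dequeue: distance from tail to head. */
static uint32_t
demo_ring_avail_deq(const struct demo_queue *q)
{
	return (q->sw_ring_depth + q->sw_ring_head - q->sw_ring_tail)
			% q->sw_ring_depth;
}

int
main(void)
{
	struct demo_queue q = {
		.sw_ring_depth = 8, .sw_ring_head = 6, .sw_ring_tail = 3 };

	/* Prints "enq avail = 4, deq avail = 3": 3 descriptors are pending
	 * dequeue and 4 slots (not 5) remain free for enqueue. */
	printf("enq avail = %" PRIu32 ", deq avail = %" PRIu32 "\n",
			demo_ring_avail_enq(&q), demo_ring_avail_deq(&q));
	return 0;
}

Clamping the dequeue burst to this value, together with the early return when
avail == 0, is what prevents the application from dequeuing descriptors that
were never enqueued, as described in the commit message.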