From patchwork Fri May 22 11:57:36 2020
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 70518
X-Patchwork-Delegate: gakhil@marvell.com
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: roy.fan.zhang@intel.com, Konstantin Ananyev
Date: Fri, 22 May 2020 12:57:36 +0100
Message-Id: <20200522115736.476-1-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 2.18.0
Subject: [dpdk-dev] [PATCH 20.08] crypto/scheduler: use ring peek API

The scheduler PMD uses its own hand-made peek functions that access
rte_ring internals directly. Now that rte_ring provides an API for this
kind of functionality, change the scheduler PMD to use the API provided
by rte_ring.
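
For reference, the ring peek API used below splits a dequeue into two
steps: rte_ring_dequeue_burst_start() exposes up to n objects to the
caller without committing the dequeue, and rte_ring_dequeue_finish()
then commits only the number of objects actually consumed, leaving the
rest in the ring. The sketch below is only an illustration of that
pattern, not code from this patch: the function name, the is_done()
callback and the include set are assumptions for a recent DPDK, and the
peek API requires a single-consumer (or HTS) ring.

#include <rte_ring.h>
#include <rte_ring_peek.h>	/* peek API header; depending on the DPDK
				 * version it may already come via rte_ring.h */

/* Hypothetical helper: drain objects from the ring up to the first one
 * that is not yet ready, shown only to illustrate the start/finish pair. */
static unsigned int
drain_ready(struct rte_ring *r, void **objs, unsigned int nb_max,
		int (*is_done)(void *obj))
{
	unsigned int n, i;

	/* Peek up to nb_max objects; nothing is committed yet. */
	n = rte_ring_dequeue_burst_start(r, objs, nb_max, NULL);
	if (n == 0)
		return 0;

	/* Stop at the first object the caller cannot take yet. */
	for (i = 0; i != n; i++)
		if (!is_done(objs[i]))
			break;

	/* Commit only the i objects actually consumed. */
	rte_ring_dequeue_finish(r, i);
	return i;
}

This is the shape of the converted schedule_dequeue_ordering() and
scheduler_order_drain() in the diff below, which lets the driver drop the
SCHEDULER_GET_RING_OBJ() macro that read order_ring->cons.head and
order_ring->mask directly.
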
Signed-off-by: Konstantin Ananyev
Acked-by: Fan Zhang
---
 .../crypto/scheduler/scheduler_multicore.c   | 28 ++++++++-----------
 .../crypto/scheduler/scheduler_pmd_private.h | 28 +++++++------------
 2 files changed, 21 insertions(+), 35 deletions(-)

diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
index 7808e9a34..2d6790bb3 100644
--- a/drivers/crypto/scheduler/scheduler_multicore.c
+++ b/drivers/crypto/scheduler/scheduler_multicore.c
@@ -110,31 +110,25 @@ static uint16_t
 schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
-	struct rte_ring *order_ring = ((struct scheduler_qp_ctx *)qp)->order_ring;
+	struct rte_ring *order_ring =
+		((struct scheduler_qp_ctx *)qp)->order_ring;
 	struct rte_crypto_op *op;
-	uint32_t nb_objs = rte_ring_count(order_ring);
-	uint32_t nb_ops_to_deq = 0;
-	uint32_t nb_ops_deqd = 0;
-
-	if (nb_objs > nb_ops)
-		nb_objs = nb_ops;
+	uint32_t nb_objs, nb_ops_to_deq;
 
-	while (nb_ops_to_deq < nb_objs) {
-		SCHEDULER_GET_RING_OBJ(order_ring, nb_ops_to_deq, op);
+	nb_objs = rte_ring_dequeue_burst_start(order_ring, (void **)ops,
+			nb_ops, NULL);
+	if (nb_objs == 0)
+		return 0;
 
+	for (nb_ops_to_deq = 0; nb_ops_to_deq != nb_objs; nb_ops_to_deq++) {
+		op = ops[nb_ops_to_deq];
 		if (!(op->status & CRYPTO_OP_STATUS_BIT_COMPLETE))
 			break;
-
 		op->status &= ~CRYPTO_OP_STATUS_BIT_COMPLETE;
-		nb_ops_to_deq++;
-	}
-
-	if (nb_ops_to_deq) {
-		nb_ops_deqd = rte_ring_sc_dequeue_bulk(order_ring,
-				(void **)ops, nb_ops_to_deq, NULL);
 	}
 
-	return nb_ops_deqd;
+	rte_ring_dequeue_finish(order_ring, nb_ops_to_deq);
+	return nb_ops_to_deq;
 }
 
 static int
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index 3ed480c18..e1531d1da 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -80,36 +80,28 @@ scheduler_order_insert(struct rte_ring *order_ring,
 	rte_ring_sp_enqueue_burst(order_ring, (void **)ops, nb_ops, NULL);
 }
 
-#define SCHEDULER_GET_RING_OBJ(order_ring, pos, op) do { \
-	struct rte_crypto_op **ring = (void *)&order_ring[1]; \
-	op = ring[(order_ring->cons.head + pos) & order_ring->mask]; \
-} while (0)
-
 static __rte_always_inline uint16_t
 scheduler_order_drain(struct rte_ring *order_ring,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct rte_crypto_op *op;
-	uint32_t nb_objs = rte_ring_count(order_ring);
-	uint32_t nb_ops_to_deq = 0;
-	uint32_t nb_ops_deqd = 0;
+	uint32_t nb_objs, nb_ops_to_deq;
 
-	if (nb_objs > nb_ops)
-		nb_objs = nb_ops;
+	nb_objs = rte_ring_dequeue_burst_start(order_ring, (void **)ops,
+			nb_ops, NULL);
+	if (nb_objs == 0)
+		return 0;
 
-	while (nb_ops_to_deq < nb_objs) {
-		SCHEDULER_GET_RING_OBJ(order_ring, nb_ops_to_deq, op);
+	for (nb_ops_to_deq = 0; nb_ops_to_deq != nb_objs; nb_ops_to_deq++) {
+		op = ops[nb_ops_to_deq];
 		if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
 			break;
-		nb_ops_to_deq++;
 	}
 
-	if (nb_ops_to_deq)
-		nb_ops_deqd = rte_ring_sc_dequeue_bulk(order_ring,
-				(void **)ops, nb_ops_to_deq, NULL);
-
-	return nb_ops_deqd;
+	rte_ring_dequeue_finish(order_ring, nb_ops_to_deq);
+	return nb_ops_to_deq;
 }
 
+
 /** device specific operations function pointer structure */
 extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;