[20.08] crypto/scheduler: use ring peek API

Message ID 20200522115736.476-1-konstantin.ananyev@intel.com
State Accepted, archived
Delegated to: Akhil Goyal
Series
  • [20.08] crypto/scheduler: use ring peek API

Checks

Context Check Description
ci/Intel-compilation success Compilation OK
ci/travis-robot success Travis build: passed
ci/iol-testing fail Testing issues
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-nxp-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/checkpatch success coding style OK

Commit Message

Konstantin Ananyev May 22, 2020, 11:57 a.m. UTC
The scheduler PMD uses its own hand-made peek functions
that directly access rte_ring internals.
Now that rte_ring provides an API for that type of functionality,
change the scheduler PMD to use the API provided by rte_ring.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 .../crypto/scheduler/scheduler_multicore.c    | 28 ++++++++-----------
 .../crypto/scheduler/scheduler_pmd_private.h  | 28 +++++++------------
 2 files changed, 21 insertions(+), 35 deletions(-)

Comments

Zhang, Roy Fan July 9, 2020, 12:12 p.m. UTC | #1
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Friday, May 22, 2020 12:58 PM
> To: dev@dpdk.org
> Cc: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: [PATCH 20.08] crypto/scheduler: use ring peek API
> 
> The scheduler PMD uses its own hand-made peek functions
> that directly access rte_ring internals.
> Now that rte_ring provides an API for that type of functionality,
> change the scheduler PMD to use the API provided by rte_ring.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Akhil Goyal July 15, 2020, 7:55 p.m. UTC | #2
> > Subject: [PATCH 20.08] crypto/scheduler: use ring peek API
> >
> > The scheduler PMD uses its own hand-made peek functions
> > that directly access rte_ring internals.
> > Now that rte_ring provides an API for that type of functionality,
> > change the scheduler PMD to use the API provided by rte_ring.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>

Applied to dpdk-next-crypto

Thanks.

Patch

diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
index 7808e9a34..2d6790bb3 100644
--- a/drivers/crypto/scheduler/scheduler_multicore.c
+++ b/drivers/crypto/scheduler/scheduler_multicore.c
@@ -110,31 +110,25 @@  static uint16_t
 schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
-	struct rte_ring *order_ring = ((struct scheduler_qp_ctx *)qp)->order_ring;
+	struct rte_ring *order_ring =
+		((struct scheduler_qp_ctx *)qp)->order_ring;
 	struct rte_crypto_op *op;
-	uint32_t nb_objs = rte_ring_count(order_ring);
-	uint32_t nb_ops_to_deq = 0;
-	uint32_t nb_ops_deqd = 0;
-
-	if (nb_objs > nb_ops)
-		nb_objs = nb_ops;
+	uint32_t nb_objs, nb_ops_to_deq;
 
-	while (nb_ops_to_deq < nb_objs) {
-		SCHEDULER_GET_RING_OBJ(order_ring, nb_ops_to_deq, op);
+	nb_objs = rte_ring_dequeue_burst_start(order_ring, (void **)ops,
+		nb_ops, NULL);
+	if (nb_objs == 0)
+		return 0;
 
+	for (nb_ops_to_deq = 0; nb_ops_to_deq != nb_objs; nb_ops_to_deq++) {
+		op = ops[nb_ops_to_deq];
 		if (!(op->status & CRYPTO_OP_STATUS_BIT_COMPLETE))
 			break;
-
 		op->status &= ~CRYPTO_OP_STATUS_BIT_COMPLETE;
-		nb_ops_to_deq++;
-	}
-
-	if (nb_ops_to_deq) {
-		nb_ops_deqd = rte_ring_sc_dequeue_bulk(order_ring,
-				(void **)ops, nb_ops_to_deq, NULL);
 	}
 
-	return nb_ops_deqd;
+	rte_ring_dequeue_finish(order_ring, nb_ops_to_deq);
+	return nb_ops_to_deq;
 }
 
 static int
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index 3ed480c18..e1531d1da 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -80,36 +80,28 @@  scheduler_order_insert(struct rte_ring *order_ring,
 	rte_ring_sp_enqueue_burst(order_ring, (void **)ops, nb_ops, NULL);
 }
 
-#define SCHEDULER_GET_RING_OBJ(order_ring, pos, op) do {            \
-	struct rte_crypto_op **ring = (void *)&order_ring[1];     \
-	op = ring[(order_ring->cons.head + pos) & order_ring->mask]; \
-} while (0)
-
 static __rte_always_inline uint16_t
 scheduler_order_drain(struct rte_ring *order_ring,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct rte_crypto_op *op;
-	uint32_t nb_objs = rte_ring_count(order_ring);
-	uint32_t nb_ops_to_deq = 0;
-	uint32_t nb_ops_deqd = 0;
+	uint32_t nb_objs, nb_ops_to_deq;
 
-	if (nb_objs > nb_ops)
-		nb_objs = nb_ops;
+	nb_objs = rte_ring_dequeue_burst_start(order_ring, (void **)ops,
+		nb_ops, NULL);
+	if (nb_objs == 0)
+		return 0;
 
-	while (nb_ops_to_deq < nb_objs) {
-		SCHEDULER_GET_RING_OBJ(order_ring, nb_ops_to_deq, op);
+	for (nb_ops_to_deq = 0; nb_ops_to_deq != nb_objs; nb_ops_to_deq++) {
+		op = ops[nb_ops_to_deq];
 		if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
 			break;
-		nb_ops_to_deq++;
 	}
 
-	if (nb_ops_to_deq)
-		nb_ops_deqd = rte_ring_sc_dequeue_bulk(order_ring,
-				(void **)ops, nb_ops_to_deq, NULL);
-
-	return nb_ops_deqd;
+	rte_ring_dequeue_finish(order_ring, nb_ops_to_deq);
+	return nb_ops_to_deq;
 }
+
 /** device specific operations function pointer structure */
 extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
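
One constraint worth noting: the start/finish peek calls are only supported
on rings using single-consumer (or HTS) dequeue synchronization, which the
ordering ring here already satisfies; the removed code used
rte_ring_sc_dequeue_bulk(). A sketch of creating a compatible ring follows;
the name, count and flags are illustrative, not the scheduler PMD's actual
ring-creation code:

	#include <rte_lcore.h>
	#include <rte_ring.h>

	/* Example only: RING_F_SC_DEQ makes the ring single-consumer,
	 * as required by the peek API. Count must be a power of two.
	 */
	static struct rte_ring *
	make_order_ring(void)
	{
		return rte_ring_create("example_order_ring", 4096,
				rte_socket_id(),
				RING_F_SP_ENQ | RING_F_SC_DEQ);
	}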