From patchwork Sun Aug 29 12:51:32 2021
X-Patchwork-Submitter: Akhil Goyal <gakhil@marvell.com>
X-Patchwork-Id: 97512
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal <gakhil@marvell.com>
Date: Sun, 29 Aug 2021 18:21:32 +0530
Message-ID: <20210829125139.2173235-2-gakhil@marvell.com>
In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH 1/8] cryptodev: separate out internal structures
List-Id: DPDK patches and discussions <dev@dpdk.org>

A new header file, rte_cryptodev_core.h, is added, and all internal data
structures that need not be exposed directly to applications are moved into
it. These structures are mostly used by drivers, but they must stay in a
public header file because they are accessed by the datapath inline
functions for performance reasons.
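The commit message describes the core constraint: because the burst APIs are `static inline` wrappers compiled into the application, the structures they dereference must be visible to the application, even though they are logically driver-internal. A minimal, self-contained sketch of that pattern follows; all names (`crypto_op`, `cryptodev`, `dequeue_burst`, the toy driver) are invented for illustration and are not the real DPDK definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for rte_crypto_op. */
struct crypto_op { int status; };

typedef uint16_t (*dequeue_burst_fn)(void *qp, struct crypto_op **ops,
		uint16_t nb_ops);

/* The "internal" per-device structure. Applications never touch its
 * fields directly, but its layout must be visible to them because the
 * static inline wrapper below dereferences it in the caller's code. */
struct cryptodev {
	dequeue_burst_fn dequeue_burst; /* filled in by the driver */
	void *queue_pairs[4];           /* opaque per-queue state */
};

/* A toy driver: its queue pair is just a counter of pending ops. */
static uint16_t
toy_dequeue(void *qp, struct crypto_op **ops, uint16_t nb_ops)
{
	uint16_t *pending = qp;
	uint16_t n = nb_ops < *pending ? nb_ops : *pending;
	uint16_t i;

	*pending -= n;
	for (i = 0; i < n; i++)
		ops[i]->status = 0; /* mark "processed successfully" */
	return n;
}

static uint16_t toy_qp = 3; /* three ops waiting to be dequeued */

/* The flat device table the inline wrapper indexes into. */
static struct cryptodev cryptodevs[1] = {
	{ .dequeue_burst = toy_dequeue, .queue_pairs = { &toy_qp } },
};

/* Public fast path: a static inline wrapper compiled into the caller,
 * so each burst costs one indirect call through the device's function
 * pointer and no extra shared-library call overhead. */
static inline uint16_t
dequeue_burst(uint8_t dev_id, uint16_t qp_id,
		struct crypto_op **ops, uint16_t nb_ops)
{
	struct cryptodev *dev = &cryptodevs[dev_id];

	return dev->dequeue_burst(dev->queue_pairs[qp_id], ops, nb_ops);
}
```

The sketch shows why hiding `struct cryptodev` behind an opaque pointer would break the inline fast path: the wrapper needs the struct's layout at the application's compile time. Moving it to a `_core.h` header keeps it technically public while signalling that it is not for direct use.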
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 lib/cryptodev/cryptodev_pmd.h      |   6 -
 lib/cryptodev/meson.build          |   4 +-
 lib/cryptodev/rte_cryptodev.h      | 360 ++++++++++++-----------------
 lib/cryptodev/rte_cryptodev_core.h | 100 ++++++++
 4 files changed, 245 insertions(+), 225 deletions(-)
 create mode 100644 lib/cryptodev/rte_cryptodev_core.h

diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index ec7bb82be8..f775ba6beb 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -96,12 +96,6 @@ __rte_internal
 struct rte_cryptodev *
 rte_cryptodev_pmd_get_named_dev(const char *name);

-/**
- * The pool of rte_cryptodev structures.
- */
-extern struct rte_cryptodev *rte_cryptodevs;
-
-
 /**
  * Definitions of all functions exported by a driver through the
  * the generic structure of type *crypto_dev_ops* supplied in the
diff --git a/lib/cryptodev/meson.build b/lib/cryptodev/meson.build
index 735935df4a..f32cc62a78 100644
--- a/lib/cryptodev/meson.build
+++ b/lib/cryptodev/meson.build
@@ -14,7 +14,9 @@ headers = files(
         'rte_crypto_sym.h',
         'rte_crypto_asym.h',
 )
-
+indirect_headers += files(
+        'rte_cryptodev_core.h',
+)
 driver_sdk_headers += files(
         'cryptodev_pmd.h',
 )
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 33aac44446..3d99dd1cf5 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -861,17 +861,6 @@ rte_cryptodev_callback_unregister(uint8_t dev_id,
 		enum rte_cryptodev_event_type event,
 		rte_cryptodev_cb_fn cb_fn, void *cb_arg);

-typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
-		struct rte_crypto_op **ops, uint16_t nb_ops);
-/**< Dequeue processed packets from queue pair of a device. */
-
-typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
-		struct rte_crypto_op **ops, uint16_t nb_ops);
-/**< Enqueue packets for processing on queue pair of a device. */
-
-
-
-
 struct rte_cryptodev_callback;

 /** Structure to keep track of registered callbacks */
@@ -901,216 +890,9 @@ struct rte_cryptodev_cb_rcu {
 	/**< RCU QSBR variable per queue pair */
 };

-/** The data structure associated with each crypto device. */
-struct rte_cryptodev {
-	dequeue_pkt_burst_t dequeue_burst;
-	/**< Pointer to PMD receive function. */
-	enqueue_pkt_burst_t enqueue_burst;
-	/**< Pointer to PMD transmit function. */
-
-	struct rte_cryptodev_data *data;
-	/**< Pointer to device data */
-	struct rte_cryptodev_ops *dev_ops;
-	/**< Functions exported by PMD */
-	uint64_t feature_flags;
-	/**< Feature flags exposes HW/SW features for the given device */
-	struct rte_device *device;
-	/**< Backing device */
-
-	uint8_t driver_id;
-	/**< Crypto driver identifier*/
-
-	struct rte_cryptodev_cb_list link_intr_cbs;
-	/**< User application callback for interrupts if present */
-
-	void *security_ctx;
-	/**< Context for security ops */
-
-	__extension__
-	uint8_t attached : 1;
-	/**< Flag indicating the device is attached */
-
-	struct rte_cryptodev_cb_rcu *enq_cbs;
-	/**< User application callback for pre enqueue processing */
-
-	struct rte_cryptodev_cb_rcu *deq_cbs;
-	/**< User application callback for post dequeue processing */
-} __rte_cache_aligned;
-
 void *
 rte_cryptodev_get_sec_ctx(uint8_t dev_id);

-/**
- *
- * The data part, with no function pointers, associated with each device.
- *
- * This structure is safe to place in shared memory to be common among
- * different processes in a multi-process configuration.
- */
-struct rte_cryptodev_data {
-	uint8_t dev_id;
-	/**< Device ID for this instance */
-	uint8_t socket_id;
-	/**< Socket ID where memory is allocated */
-	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
-	/**< Unique identifier name */
-
-	__extension__
-	uint8_t dev_started : 1;
-	/**< Device state: STARTED(1)/STOPPED(0) */
-
-	struct rte_mempool *session_pool;
-	/**< Session memory pool */
-	void **queue_pairs;
-	/**< Array of pointers to queue pairs. */
-	uint16_t nb_queue_pairs;
-	/**< Number of device queue pairs. */
-
-	void *dev_private;
-	/**< PMD-specific private data */
-} __rte_cache_aligned;
-
-extern struct rte_cryptodev *rte_cryptodevs;
-
-/**
- *
- * Dequeue a burst of processed crypto operations from a queue on the crypto
- * device. The dequeued operation are stored in *rte_crypto_op* structures
- * whose pointers are supplied in the *ops* array.
- *
- * The rte_cryptodev_dequeue_burst() function returns the number of ops
- * actually dequeued, which is the number of *rte_crypto_op* data structures
- * effectively supplied into the *ops* array.
- *
- * A return value equal to *nb_ops* indicates that the queue contained
- * at least *nb_ops* operations, and this is likely to signify that other
- * processed operations remain in the devices output queue. Applications
- * implementing a "retrieve as many processed operations as possible" policy
- * can check this specific case and keep invoking the
- * rte_cryptodev_dequeue_burst() function until a value less than
- * *nb_ops* is returned.
- *
- * The rte_cryptodev_dequeue_burst() function does not provide any error
- * notification to avoid the corresponding overhead.
- *
- * @param dev_id	The symmetric crypto device identifier
- * @param qp_id		The index of the queue pair from which to
- *			retrieve processed packets. The value must be
- *			in the range [0, nb_queue_pair - 1] previously
- *			supplied to rte_cryptodev_configure().
- * @param ops		The address of an array of pointers to
- *			*rte_crypto_op* structures that must be
- *			large enough to store *nb_ops* pointers in it.
- * @param nb_ops	The maximum number of operations to dequeue.
- *
- * @return
- *   - The number of operations actually dequeued, which is the number
- *   of pointers to *rte_crypto_op* structures effectively supplied to the
- *   *ops* array.
- */
-static inline uint16_t
-rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
-		struct rte_crypto_op **ops, uint16_t nb_ops)
-{
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
-
-	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	nb_ops = (*dev->dequeue_burst)
-			(dev->data->queue_pairs[qp_id], ops, nb_ops);
-#ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->deq_cbs != NULL)) {
-		struct rte_cryptodev_cb_rcu *list;
-		struct rte_cryptodev_cb *cb;
-
-		/* __ATOMIC_RELEASE memory order was used when the
-		 * call back was inserted into the list.
-		 * Since there is a clear dependency between loading
-		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-		 * not required.
-		 */
-		list = &dev->deq_cbs[qp_id];
-		rte_rcu_qsbr_thread_online(list->qsbr, 0);
-		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
-
-		while (cb != NULL) {
-			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
-					cb->arg);
-			cb = cb->next;
-		};
-
-		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
-	}
-#endif
-	return nb_ops;
-}
-
-/**
- * Enqueue a burst of operations for processing on a crypto device.
- *
- * The rte_cryptodev_enqueue_burst() function is invoked to place
- * crypto operations on the queue *qp_id* of the device designated by
- * its *dev_id*.
- *
- * The *nb_ops* parameter is the number of operations to process which are
- * supplied in the *ops* array of *rte_crypto_op* structures.
- *
- * The rte_cryptodev_enqueue_burst() function returns the number of
- * operations it actually enqueued for processing. A return value equal to
- * *nb_ops* means that all packets have been enqueued.
- *
- * @param dev_id	The identifier of the device.
- * @param qp_id		The index of the queue pair which packets are
- *			to be enqueued for processing. The value
- *			must be in the range [0, nb_queue_pairs - 1]
- *			previously supplied to
- *			*rte_cryptodev_configure*.
- * @param ops		The address of an array of *nb_ops* pointers
- *			to *rte_crypto_op* structures which contain
- *			the crypto operations to be processed.
- * @param nb_ops	The number of operations to process.
- *
- * @return
- * The number of operations actually enqueued on the crypto device. The return
- * value can be less than the value of the *nb_ops* parameter when the
- * crypto devices queue is full or if invalid parameters are specified in
- * a *rte_crypto_op*.
- */
-static inline uint16_t
-rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
-		struct rte_crypto_op **ops, uint16_t nb_ops)
-{
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
-
-#ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->enq_cbs != NULL)) {
-		struct rte_cryptodev_cb_rcu *list;
-		struct rte_cryptodev_cb *cb;
-
-		/* __ATOMIC_RELEASE memory order was used when the
-		 * call back was inserted into the list.
-		 * Since there is a clear dependency between loading
-		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-		 * not required.
-		 */
-		list = &dev->enq_cbs[qp_id];
-		rte_rcu_qsbr_thread_online(list->qsbr, 0);
-		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
-
-		while (cb != NULL) {
-			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
-					cb->arg);
-			cb = cb->next;
-		};
-
-		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
-	}
-#endif
-
-	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	return (*dev->enqueue_burst)(
-			dev->data->queue_pairs[qp_id], ops, nb_ops);
-}
-
-
 /** Cryptodev symmetric crypto session
  * Each session is derived from a fixed xform chain. Therefore each session
  * has a fixed algo, key, op-type, digest_len etc.
@@ -1997,6 +1779,148 @@ int rte_cryptodev_remove_deq_callback(uint8_t dev_id,
 				      uint16_t qp_id,
 				      struct rte_cryptodev_cb *cb);

+#include <rte_cryptodev_core.h>
+/**
+ *
+ * Dequeue a burst of processed crypto operations from a queue on the crypto
+ * device. The dequeued operation are stored in *rte_crypto_op* structures
+ * whose pointers are supplied in the *ops* array.
+ *
+ * The rte_cryptodev_dequeue_burst() function returns the number of ops
+ * actually dequeued, which is the number of *rte_crypto_op* data structures
+ * effectively supplied into the *ops* array.
+ *
+ * A return value equal to *nb_ops* indicates that the queue contained
+ * at least *nb_ops* operations, and this is likely to signify that other
+ * processed operations remain in the devices output queue. Applications
+ * implementing a "retrieve as many processed operations as possible" policy
+ * can check this specific case and keep invoking the
+ * rte_cryptodev_dequeue_burst() function until a value less than
+ * *nb_ops* is returned.
+ *
+ * The rte_cryptodev_dequeue_burst() function does not provide any error
+ * notification to avoid the corresponding overhead.
+ *
+ * @param dev_id	The symmetric crypto device identifier
+ * @param qp_id		The index of the queue pair from which to
+ *			retrieve processed packets. The value must be
+ *			in the range [0, nb_queue_pair - 1] previously
+ *			supplied to rte_cryptodev_configure().
+ * @param ops		The address of an array of pointers to
+ *			*rte_crypto_op* structures that must be
+ *			large enough to store *nb_ops* pointers in it.
+ * @param nb_ops	The maximum number of operations to dequeue.
+ *
+ * @return
+ *   - The number of operations actually dequeued, which is the number
+ *   of pointers to *rte_crypto_op* structures effectively supplied to the
+ *   *ops* array.
+ */
+static inline uint16_t
+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
+	nb_ops = (*dev->dequeue_burst)
+			(dev->data->queue_pairs[qp_id], ops, nb_ops);
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->deq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->deq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+	return nb_ops;
+}
+
+/**
+ * Enqueue a burst of operations for processing on a crypto device.
+ *
+ * The rte_cryptodev_enqueue_burst() function is invoked to place
+ * crypto operations on the queue *qp_id* of the device designated by
+ * its *dev_id*.
+ *
+ * The *nb_ops* parameter is the number of operations to process which are
+ * supplied in the *ops* array of *rte_crypto_op* structures.
+ *
+ * The rte_cryptodev_enqueue_burst() function returns the number of
+ * operations it actually enqueued for processing. A return value equal to
+ * *nb_ops* means that all packets have been enqueued.
+ *
+ * @param dev_id	The identifier of the device.
+ * @param qp_id		The index of the queue pair which packets are
+ *			to be enqueued for processing. The value
+ *			must be in the range [0, nb_queue_pairs - 1]
+ *			previously supplied to
+ *			*rte_cryptodev_configure*.
+ * @param ops		The address of an array of *nb_ops* pointers
+ *			to *rte_crypto_op* structures which contain
+ *			the crypto operations to be processed.
+ * @param nb_ops	The number of operations to process.
+ *
+ * @return
+ * The number of operations actually enqueued on the crypto device. The return
+ * value can be less than the value of the *nb_ops* parameter when the
+ * crypto devices queue is full or if invalid parameters are specified in
+ * a *rte_crypto_op*.
+ */
+static inline uint16_t
+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->enq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->enq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+
+	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
+	return (*dev->enqueue_burst)(
+			dev->data->queue_pairs[qp_id], ops, nb_ops);
+}
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
new file mode 100644
index 0000000000..1633e55889
--- /dev/null
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef _RTE_CRYPTODEV_CORE_H_
+#define _RTE_CRYPTODEV_CORE_H_
+
+/**
+ * @file
+ *
+ * RTE Crypto Device internal header.
+ *
+ * This header contains internal data types. But they are still part of the
+ * public API because they are used by inline functions in the published API.
+ *
+ * Applications should not use these directly.
+ *
+ */
+
+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+/**< Dequeue processed packets from queue pair of a device. */
+
+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
+		struct rte_crypto_op **ops, uint16_t nb_ops);
+/**< Enqueue packets for processing on queue pair of a device. */
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	__extension__
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+
+/** @internal The data structure associated with each crypto device. */
+struct rte_cryptodev {
+	dequeue_pkt_burst_t dequeue_burst;
+	/**< Pointer to PMD receive function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/**< Pointer to PMD transmit function. */
+
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	uint64_t feature_flags;
+	/**< Feature flags exposes HW/SW features for the given device */
+	struct rte_device *device;
+	/**< Backing device */
+
+	uint8_t driver_id;
+	/**< Crypto driver identifier*/
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	void *security_ctx;
+	/**< Context for security ops */
+
+	__extension__
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+
+	struct rte_cryptodev_cb_rcu *enq_cbs;
+	/**< User application callback for pre enqueue processing */
+
+	struct rte_cryptodev_cb_rcu *deq_cbs;
+	/**< User application callback for post dequeue processing */
+} __rte_cache_aligned;
+
+/**
+ * The pool of rte_cryptodev structures.
+ */
+extern struct rte_cryptodev *rte_cryptodevs;
+
+#endif /* _RTE_CRYPTODEV_CORE_H_ */

From patchwork Sun Aug 29 12:51:33 2021
X-Patchwork-Submitter: Akhil Goyal <gakhil@marvell.com>
X-Patchwork-Id: 97513
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal <gakhil@marvell.com>
Date: Sun, 29 Aug 2021 18:21:33 +0530
Message-ID: <20210829125139.2173235-3-gakhil@marvell.com>
In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH 2/8] cryptodev: move inline APIs into separate
 structure
List-Id: DPDK patches and discussions <dev@dpdk.org>

Move the fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array. The intention is to make
rte_cryptodev and related structures private, to avoid future API/ABI
breakages.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 lib/cryptodev/cryptodev_pmd.c      | 33 ++++++++++++++++++++++++++++++
 lib/cryptodev/cryptodev_pmd.h      |  9 ++++++++
 lib/cryptodev/rte_cryptodev.c      |  3 +++
 lib/cryptodev/rte_cryptodev_core.h | 19 +++++++++++++++++
 lib/cryptodev/version.map          |  4 ++++
 5 files changed, 68 insertions(+)

diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 71e34140cd..46772dc355 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -158,3 +158,36 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)

 	return 0;
 }
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused uint8_t dev_id,
+			   __rte_unused uint8_t qp_id,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto enqueue burst requested for unconfigured crypto device");
+	return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused uint8_t dev_id,
+			   __rte_unused uint8_t qp_id,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto dequeue burst requested for unconfigured crypto device");
+	return 0;
+}
+
+void
+rte_cryptodev_api_reset(struct rte_cryptodev_api *api)
+{
+	static const struct rte_cryptodev_api dummy = {
+		.enqueue_burst = dummy_crypto_enqueue_burst,
+		.dequeue_burst = dummy_crypto_dequeue_burst,
+	};
+
+	*api = dummy;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index f775ba6beb..eeaea13a23 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -520,6 +520,15 @@ RTE_INIT(init_ ##driver_id)\
 	driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
 }

+/**
+ * Reset crypto device fastpath APIs to dummy values.
+ *
+ * @param api The *api* pointer to reset.
+ */
+__rte_internal
+void
+rte_cryptodev_api_reset(struct rte_cryptodev_api *api);
+
 static inline void *
 get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
 		uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 9fa3aff1d3..26f8390668 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -54,6 +54,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
 	.nb_devs		= 0
 };

+/* Public fastpath APIs. */
+struct rte_cryptodev_api *rte_cryptodev_api;
+
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;

diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..ec38f70e0c 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,25 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 		struct rte_crypto_op **ops, uint16_t nb_ops);
 /**< Enqueue packets for processing on queue pair of a device. */

+typedef uint16_t (*rte_crypto_dequeue_burst_t)(uint8_t dev_id, uint8_t qp_id,
+					       struct rte_crypto_op **ops,
+					       uint16_t nb_ops);
+/**< @internal Dequeue processed packets from queue pair of a device. */
+typedef uint16_t (*rte_crypto_enqueue_burst_t)(uint8_t dev_id, uint8_t qp_id,
+					       struct rte_crypto_op **ops,
+					       uint16_t nb_ops);
+/**< @internal Enqueue packets for processing on queue pair of a device. */
+
+struct rte_cryptodev_api {
+	rte_crypto_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	rte_crypto_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function.
*/ + uintptr_t reserved[6]; +} __rte_cache_aligned; + +extern struct rte_cryptodev_api *rte_cryptodev_api; + /** * @internal * The data part, with no function pointers, associated with each device. diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map index 2fdf70002d..050089ae55 100644 --- a/lib/cryptodev/version.map +++ b/lib/cryptodev/version.map @@ -57,6 +57,9 @@ DPDK_22 { rte_cryptodev_sym_session_init; rte_cryptodevs; + #added in 21.11 + rte_cryptodev_api; + local: *; }; @@ -114,6 +117,7 @@ INTERNAL { global: rte_cryptodev_allocate_driver; + rte_cryptodev_api_reset; rte_cryptodev_pmd_allocate; rte_cryptodev_pmd_callback_process; rte_cryptodev_pmd_create; From patchwork Sun Aug 29 12:51:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Akhil Goyal X-Patchwork-Id: 97514 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 639ADA0C46; Sun, 29 Aug 2021 14:52:17 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4CF7A410F9; Sun, 29 Aug 2021 14:52:15 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id C678B4068B for ; Sun, 29 Aug 2021 14:52:13 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 17T9b6dc028559; Sun, 29 Aug 2021 05:52:08 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=+CnoW+r/s8ed+MGAoYnFwkwpKvARQ3JBW7vNpWO+5lU=; 
b=SFW0BHbGlAE1Pz+GUlrdEpv9ThlmIYqOQIIqInLaybv+NROTqqQ4vcGsawohVJl0C7aB gBaLSmZMzfdC9RieHvP2A1A1zbgbkvzWZ4poDVi23vAHLBXNz1AWGTgLCOYMczj/lZlo 4kvWakvfJqGIveKdP4m3NWZx4K/7sU8p+Ojg9x1/pOZ4gitbs16w8gM7yGiym9N9wN/r AKHCNWXnrLAqNzghLzRD19Mu5IAOW3eiLCtgLjTPtLY5ft07Hwk6n/l5OfZvJyeRkPYv iOguHeih3XDL/98rx72A1zk3Nj6T4r4qUmf+B3MxMx/IdA6JiVvaA1RYM22nhXij0vwq kg== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com with ESMTP id 3aqmnmtmk7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 29 Aug 2021 05:52:07 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Sun, 29 Aug 2021 05:52:05 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Sun, 29 Aug 2021 05:52:05 -0700 Received: from localhost.localdomain (unknown [10.28.36.185]) by maili.marvell.com (Postfix) with ESMTP id 4054D3F7070; Sun, 29 Aug 2021 05:52:00 -0700 (PDT) From: Akhil Goyal To: CC: , , , , , , , , , , , , , , , , , , Akhil Goyal Date: Sun, 29 Aug 2021 18:21:34 +0530 Message-ID: <20210829125139.2173235-4-gakhil@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com> References: <20210829125139.2173235-1-gakhil@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: F65vZDartnZGB4vVPN3w7uW3Gv7aWGTx X-Proofpoint-GUID: F65vZDartnZGB4vVPN3w7uW3Gv7aWGTx X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475 definitions=2021-08-29_04,2021-08-27_01,2020-04-07_01 Subject: [dpdk-dev] [PATCH 3/8] cryptodev: add helper functions for new datapath interface X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
Add helper functions and macros to help drivers transition to the
new datapath interface.

Signed-off-by: Akhil Goyal
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 lib/cryptodev/cryptodev_pmd.h | 246 ++++++++++++++++++++++++++++++++++
 lib/cryptodev/rte_cryptodev.c |  40 +++++-
 lib/cryptodev/version.map    |   4 +
 3 files changed, 289 insertions(+), 1 deletion(-)

diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index eeaea13a23..d40e5cee94 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -70,6 +70,13 @@ struct cryptodev_driver {
 	const struct rte_driver *driver;
 	uint8_t id;
 };
 
+/**
+ * @internal
+ * The pool of *rte_cryptodev* structures. The size of the pool
+ * is configured at compile-time in the file.
+ */
+extern struct rte_cryptodev rte_crypto_devices[];
+
 /**
  * Get the rte_cryptodev structure device pointer for the device. Assumes a
@@ -529,6 +536,245 @@ __rte_internal
 void
 rte_cryptodev_api_reset(struct rte_cryptodev_api *api);
 
+/**
+ * @internal
+ * Helper routine for cryptodev_dequeue_burst.
+ * Should be called as the first thing on entrance to the PMD's
+ * rte_cryptodev_dequeue_burst implementation.
+ * Does necessary checks and returns a pointer to the device queue pair.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ * @param qp_id
+ *   The index of the queue pair from which processed crypto ops will
+ *   be dequeued.
+ *
+ * @return
+ *  Pointer to device queue pair on success or NULL otherwise.
+ */
+__rte_internal
+static inline void *
+_rte_cryptodev_dequeue_prolog(uint8_t dev_id, uint8_t qp_id)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return dev->data->queue_pairs[qp_id];
+}
+
+/**
+ * @internal
+ * Helper routine for the crypto driver dequeue API.
+ * Should be called at exit from the PMD's rte_cryptodev_dequeue_burst
+ * implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, tracing, etc.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ * @param qp_id
+ *   The index of the queue pair from which to retrieve input crypto_ops.
+ * @param ops
+ *   The address of an array of pointers to *rte_crypto_op* structures that
+ *   have been retrieved from the device.
+ * @param nb_ops
+ *   The number of ops that were retrieved from the device.
+ *
+ * @return
+ *  The number of crypto ops effectively supplied to the *ops* array.
+ */
+__rte_internal
+static inline uint16_t
+_rte_cryptodev_dequeue_epilog(uint16_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+#ifdef RTE_CRYPTO_CALLBACKS
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	if (unlikely(dev->deq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->deq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+
+	return nb_ops;
+}
+
+#define _RTE_CRYPTO_DEQ_FUNC(fn)	_rte_crypto_deq_##fn
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD dequeue functions.
+ */
+#define _RTE_CRYPTO_DEQ_PROTO(fn) \
+	uint16_t _RTE_CRYPTO_DEQ_FUNC(fn)(uint8_t dev_id, uint8_t qp_id, \
+			struct rte_crypto_op **ops, uint16_t nb_ops)
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD dequeue functions.
+ */
+#define _RTE_CRYPTO_DEQ_DEF(fn) \
+_RTE_CRYPTO_DEQ_PROTO(fn) \
+{ \
+	void *qp = _rte_cryptodev_dequeue_prolog(dev_id, qp_id); \
+	if (qp == NULL) \
+		return 0; \
+	nb_ops = fn(qp, ops, nb_ops); \
+	return _rte_cryptodev_dequeue_epilog(dev_id, qp_id, ops, nb_ops); \
+}
+
+/**
+ * @internal
+ * Helper routine for cryptodev_enqueue_burst.
+ * Should be called as the first thing on entrance to the PMD's
+ * rte_cryptodev_enqueue_burst implementation.
+ * Does necessary checks and returns a pointer to the device queue pair.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ * @param qp_id
+ *   The index of the queue pair in which packets will be enqueued.
+ * @param ops
+ *   The address of an array of pointers to *rte_crypto_op* structures that
+ *   will be enqueued to the device.
+ * @param nb_ops
+ *   The number of ops that will be sent to the device.
+ *
+ * @return
+ *  Pointer to device queue pair on success or NULL otherwise.
+ */
+__rte_internal
+static inline void *
+_rte_cryptodev_enqueue_prolog(uint8_t dev_id, uint8_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->enq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * call back was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->enq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		};
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+	return dev->data->queue_pairs[qp_id];
+}
+
+#define _RTE_CRYPTO_ENQ_FUNC(fn)	_rte_crypto_enq_##fn
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD enqueue functions.
+ */
+#define _RTE_CRYPTO_ENQ_PROTO(fn) \
+	uint16_t _RTE_CRYPTO_ENQ_FUNC(fn)(uint8_t dev_id, uint8_t qp_id, \
+			struct rte_crypto_op **ops, uint16_t nb_ops)
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD enqueue functions.
+ */
+#define _RTE_CRYPTO_ENQ_DEF(fn) \
+_RTE_CRYPTO_ENQ_PROTO(fn) \
+{ \
+	void *qp = _rte_cryptodev_enqueue_prolog(dev_id, qp_id, ops, nb_ops); \
+	if (qp == NULL) \
+		return 0; \
+	return fn(qp, ops, nb_ops); \
+}
+
+/**
+ * @internal
+ * Helper routine to get the enqueue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ *
+ * @return
+ *  The function if valid, else NULL.
+ */
+__rte_internal
+rte_crypto_enqueue_burst_t
+rte_crypto_get_enq_burst_fn(uint8_t dev_id);
+
+/**
+ * @internal
+ * Helper routine to get the dequeue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ *
+ * @return
+ *  The function if valid, else NULL.
+ */
+__rte_internal
+rte_crypto_dequeue_burst_t
+rte_crypto_get_deq_burst_fn(uint8_t dev_id);
+
+/**
+ * @internal
+ * Helper routine to set the enqueue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ *
+ * @return
+ *  0 on success.
+ *  -EINVAL if dev_id or fn are invalid.
+ */
+__rte_internal
+int
+rte_crypto_set_enq_burst_fn(uint8_t dev_id, rte_crypto_enqueue_burst_t fn);
+
+/**
+ * @internal
+ * Helper routine to set the dequeue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ *
+ * @return
+ *  0 on success.
+ *  -EINVAL if dev_id or fn are invalid.
+ */
+__rte_internal
+int
+rte_crypto_set_deq_burst_fn(uint8_t dev_id, rte_crypto_dequeue_burst_t fn);
+
+
 static inline void *
 get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
 		uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 26f8390668..4ab82d21d0 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -44,7 +44,7 @@
 static uint8_t nb_drivers;
 
-static struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
 
 struct rte_cryptodev *rte_cryptodevs = rte_crypto_devices;
 
@@ -1270,6 +1270,44 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 			socket_id);
 }
 
+rte_crypto_enqueue_burst_t
+rte_crypto_get_enq_burst_fn(uint8_t dev_id)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_cryptodev_api[dev_id].enqueue_burst;
+}
+
+rte_crypto_dequeue_burst_t
+rte_crypto_get_deq_burst_fn(uint8_t dev_id)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_cryptodev_api[dev_id].dequeue_burst;
+}
+
+int
+rte_crypto_set_enq_burst_fn(uint8_t dev_id, rte_crypto_enqueue_burst_t fn)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS || fn == NULL)
+		return -EINVAL;
+	rte_cryptodev_api[dev_id].enqueue_burst = fn;
+	return 0;
+}
+
+int
+rte_crypto_set_deq_burst_fn(uint8_t dev_id, rte_crypto_dequeue_burst_t fn)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS || fn == NULL)
+		return -EINVAL;
+	rte_cryptodev_api[dev_id].dequeue_burst = fn;
+	return 0;
+}
+
 struct rte_cryptodev_cb *
 rte_cryptodev_add_enq_callback(uint8_t dev_id, uint16_t qp_id,
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 050089ae55..b64384cc05 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -116,6 +116,10 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	rte_crypto_get_deq_burst_fn;
+	rte_crypto_get_enq_burst_fn;
+	rte_crypto_set_deq_burst_fn;
+	rte_crypto_set_enq_burst_fn;
 	rte_cryptodev_allocate_driver;
 	rte_cryptodev_api_reset;
 	rte_cryptodev_pmd_allocate;

From patchwork Sun Aug 29 12:51:35 2021
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 97515
From: Akhil Goyal
Date: Sun, 29 Aug 2021 18:21:35 +0530
Message-ID: <20210829125139.2173235-5-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH 4/8] cryptodev: use new API for datapath functions

The datapath inline APIs (rte_cryptodev_enqueue_burst/
rte_cryptodev_dequeue_burst) are updated to use the new
rte_cryptodev_api->enqueue_burst/rte_cryptodev_api->dequeue_burst
APIs based on the dev_id.

Signed-off-by: Akhil Goyal
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 lib/cryptodev/rte_cryptodev.h | 62 +++--------------------------------
 1 file changed, 5 insertions(+), 57 deletions(-)

diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 3d99dd1cf5..49919a43a4 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1820,36 +1820,10 @@ static inline uint16_t
 rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
-
 	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	nb_ops = (*dev->dequeue_burst)
-			(dev->data->queue_pairs[qp_id], ops, nb_ops);
-#ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->deq_cbs != NULL)) {
-		struct rte_cryptodev_cb_rcu *list;
-		struct rte_cryptodev_cb *cb;
-
-		/* __ATOMIC_RELEASE memory order was used when the
-		 * call back was inserted into the list.
-		 * Since there is a clear dependency between loading
-		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-		 * not required.
-		 */
-		list = &dev->deq_cbs[qp_id];
-		rte_rcu_qsbr_thread_online(list->qsbr, 0);
-		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
-
-		while (cb != NULL) {
-			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
-					cb->arg);
-			cb = cb->next;
-		};
-
-		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
-	}
-#endif
-	return nb_ops;
+	return rte_cryptodev_api[dev_id].dequeue_burst(
+			dev_id, qp_id, ops, nb_ops);
 }
 
 /**
@@ -1887,36 +1861,10 @@ static inline uint16_t
 rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
-
-#ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->enq_cbs != NULL)) {
-		struct rte_cryptodev_cb_rcu *list;
-		struct rte_cryptodev_cb *cb;
-
-		/* __ATOMIC_RELEASE memory order was used when the
-		 * call back was inserted into the list.
-		 * Since there is a clear dependency between loading
-		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
-		 * not required.
-		 */
-		list = &dev->enq_cbs[qp_id];
-		rte_rcu_qsbr_thread_online(list->qsbr, 0);
-		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
-
-		while (cb != NULL) {
-			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
-					cb->arg);
-			cb = cb->next;
-		};
-
-		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
-	}
-#endif
-
 	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	return (*dev->enqueue_burst)(
-			dev->data->queue_pairs[qp_id], ops, nb_ops);
+
+	return rte_cryptodev_api[dev_id].enqueue_burst(
+			dev_id, qp_id, ops, nb_ops);
 }

From patchwork Sun Aug 29 12:51:36 2021
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 97516
From: Akhil Goyal
Date: Sun, 29 Aug 2021 18:21:36 +0530
Message-ID: <20210829125139.2173235-6-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH 5/8] drivers/crypto: use new framework for datapath

All crypto drivers are updated to use the new API for all enqueue
and dequeue paths.

Signed-off-by: Akhil Goyal
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c       | 10 ++++++++--
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c     | 11 +++++++++--
 drivers/crypto/armv8/rte_armv8_pmd.c           | 11 +++++++++--
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c           | 11 +++++++++--
 drivers/crypto/caam_jr/caam_jr.c               | 11 +++++++++--
 drivers/crypto/ccp/ccp_dev.c                   |  1 +
 drivers/crypto/ccp/rte_ccp_pmd.c               | 11 +++++++++--
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c      |  8 ++++++--
 drivers/crypto/cnxk/cn10k_cryptodev_ops.h      |  3 +++
 drivers/crypto/cnxk/cn10k_ipsec.c              |  1 +
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c       |  9 +++++++--
 drivers/crypto/cnxk/cn9k_cryptodev_ops.h       |  3 +++
 .../crypto/cnxk/cnxk_cryptodev_capabilities.c  |  1 +
 drivers/crypto/cnxk/cnxk_cryptodev_sec.c       |  1 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c    | 12 ++++++++++--
 drivers/crypto/dpaa_sec/dpaa_sec.c             | 11 +++++++++--
 drivers/crypto/kasumi/rte_kasumi_pmd.c         | 11 +++++++++--
 drivers/crypto/mlx5/mlx5_crypto.c              | 11 +++++++++--
 drivers/crypto/mvsam/rte_mrvl_pmd.c            | 11 +++++++++--
 drivers/crypto/nitrox/nitrox_sym.c             | 11 +++++++++--
 drivers/crypto/nitrox/nitrox_sym_reqmgr.c      |  1 +
 drivers/crypto/null/null_crypto_pmd.c          | 11 +++++++++--
 .../crypto/octeontx/otx_cryptodev_hw_access.c  |  1 +
 drivers/crypto/octeontx/otx_cryptodev_ops.c    | 16 ++++++++++++----
 drivers/crypto/octeontx/otx_cryptodev_ops.h    |  5 +++++
 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c |  1 +
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c  | 11 +++++++++--
 drivers/crypto/openssl/rte_openssl_pmd.c       | 11 +++++++++--
 drivers/crypto/qat/qat_asym_pmd.c              | 11 +++++++++--
 drivers/crypto/qat/qat_sym_pmd.c               | 10 ++++++++--
 drivers/crypto/snow3g/rte_snow3g_pmd.c         | 11 +++++++++--
 drivers/crypto/virtio/virtio_cryptodev.c       | 10 ++++++----
 drivers/crypto/virtio/virtio_cryptodev.h       |  2 ++
 drivers/crypto/virtio/virtio_rxtx.c            |  2 ++
 drivers/crypto/zuc/rte_zuc_pmd.c               | 11 +++++++++--
 35 files changed, 223 insertions(+), 50 deletions(-)
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 330aad8157..35c89318fe 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -14,6 +14,8 @@
 #include "aesni_gcm_pmd_private.h"
 
 static uint8_t cryptodev_driver_id;
+_RTE_CRYPTO_ENQ_PROTO(aesni_gcm_pmd_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(aesni_gcm_pmd_dequeue_burst);
 
 /* setup session handlers */
 static void
@@ -758,6 +760,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
 
 	return i;
 }
+_RTE_CRYPTO_DEQ_DEF(aesni_gcm_pmd_dequeue_burst)
 
 static uint16_t
 aesni_gcm_pmd_enqueue_burst(void *queue_pair,
@@ -773,6 +776,7 @@ aesni_gcm_pmd_enqueue_burst(void *queue_pair,
 
 	return nb_enqueued;
 }
+_RTE_CRYPTO_ENQ_DEF(aesni_gcm_pmd_enqueue_burst)
 
 static int aesni_gcm_remove(struct rte_vdev_device *vdev);
 
@@ -807,8 +811,10 @@ aesni_gcm_create(const char *name,
 	dev->dev_ops = rte_aesni_gcm_pmd_ops;
 
 	/* register rx/tx burst functions for data path */
-	dev->dequeue_burst = aesni_gcm_pmd_dequeue_burst;
-	dev->enqueue_burst = aesni_gcm_pmd_enqueue_burst;
+	rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(aesni_gcm_pmd_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(aesni_gcm_pmd_dequeue_burst));
 
 	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 60963a8208..bd7a928583 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -30,6 +30,8 @@ static RTE_DEFINE_PER_LCORE(MB_MGR *, sync_mb_mgr);
 typedef void (*hash_one_block_t)(const void *data, void *digest);
 typedef void (*aes_keyexp_t)(const void *key, void *enc_exp_keys, void *dec_exp_keys);
+_RTE_CRYPTO_ENQ_PROTO(aesni_mb_pmd_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(aesni_mb_pmd_dequeue_burst);
 
 /**
  * Calculate the authentication pre-computes
  *
@@ -1005,6 +1007,7 @@ aesni_mb_pmd_enqueue_burst(void *__qp, struct rte_crypto_op **ops,
 
 	return nb_enqueued;
 }
+_RTE_CRYPTO_ENQ_DEF(aesni_mb_pmd_enqueue_burst)
 
 /** Get multi buffer session */
 static inline struct aesni_mb_session *
@@ -1872,6 +1875,8 @@ aesni_mb_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 
 	return processed_jobs;
 }
+_RTE_CRYPTO_DEQ_DEF(aesni_mb_pmd_dequeue_burst)
+
 
 static MB_MGR *
 alloc_init_mb_mgr(enum aesni_mb_vector_mode vector_mode)
@@ -2097,8 +2102,10 @@ cryptodev_aesni_mb_create(const char *name,
 	dev->dev_ops = rte_aesni_mb_pmd_ops;
 
 	/* register rx/tx burst functions for data path */
-	dev->dequeue_burst = aesni_mb_pmd_dequeue_burst;
-	dev->enqueue_burst = aesni_mb_pmd_enqueue_burst;
+	rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(aesni_mb_pmd_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(aesni_mb_pmd_dequeue_burst));
 
 	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 36a1a9bb4f..6a283df3b7 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -18,6 +18,9 @@
 
 static uint8_t cryptodev_driver_id;
 
+_RTE_CRYPTO_ENQ_PROTO(armv8_crypto_pmd_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(armv8_crypto_pmd_dequeue_burst);
+
 static int cryptodev_armv8_crypto_uninit(struct rte_vdev_device *vdev);
 
 /**
@@ -731,6 +734,7 @@ armv8_crypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	qp->stats.enqueue_err_count++;
 	return retval;
 }
+_RTE_CRYPTO_ENQ_DEF(armv8_crypto_pmd_enqueue_burst)
 
 /** Dequeue burst */
 static uint16_t
@@ -747,6 +751,7 @@ armv8_crypto_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 
 	return nb_dequeued;
 }
+_RTE_CRYPTO_DEQ_DEF(armv8_crypto_pmd_dequeue_burst)
 
 /** Create ARMv8 crypto device */
 static int
@@ -789,8 +794,10 @@ cryptodev_armv8_crypto_create(const char *name,
 	dev->dev_ops = rte_armv8_crypto_pmd_ops;
 
 	/* register rx/tx burst functions for data path */
-	dev->dequeue_burst = armv8_crypto_pmd_dequeue_burst;
-	dev->enqueue_burst = armv8_crypto_pmd_enqueue_burst;
+	rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(armv8_crypto_pmd_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(armv8_crypto_pmd_dequeue_burst));
 
 	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index d1dd22823e..f2f6f53e71 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -19,6 +19,9 @@
 
 uint8_t cryptodev_bcmfs_driver_id;
 
+_RTE_CRYPTO_ENQ_PROTO(bcmfs_sym_pmd_enqueue_op_burst);
+_RTE_CRYPTO_DEQ_PROTO(bcmfs_sym_pmd_dequeue_op_burst);
+
 static int bcmfs_sym_qp_release(struct rte_cryptodev *dev,
 				uint16_t queue_pair_id);
 
@@ -298,6 +301,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
 
 	return enq;
 }
+_RTE_CRYPTO_ENQ_DEF(bcmfs_sym_pmd_enqueue_op_burst)
 
 static void bcmfs_sym_set_request_status(struct rte_crypto_op *op,
 					 struct bcmfs_sym_request *out)
@@ -339,6 +343,7 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
 
 	return pkts;
 }
+_RTE_CRYPTO_DEQ_DEF(bcmfs_sym_pmd_dequeue_op_burst)
 
 /*
  * An rte_driver is needed in the registration of both the
@@ -380,8 +385,10 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
 	cryptodev->driver_id = cryptodev_bcmfs_driver_id;
 	cryptodev->dev_ops = &crypto_bcmfs_ops;
 
-	cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst;
-	cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst;
+	rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(bcmfs_sym_pmd_enqueue_op_burst));
+	rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(bcmfs_sym_pmd_dequeue_op_burst));
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 258750afe7..ffc88de1af 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -36,6 +36,9 @@
 #define CRYPTODEV_NAME_CAAM_JR_PMD	crypto_caam_jr
 static uint8_t cryptodev_driver_id;
 
+_RTE_CRYPTO_ENQ_PROTO(caam_jr_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(caam_jr_dequeue_burst);
+
 /* Lists the states possible for the SEC user space driver. */
 enum sec_driver_state_e {
 	SEC_DRIVER_STATE_IDLE,		/* Driver not initialized */
@@ -697,6 +700,7 @@ caam_jr_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 
 	return num_rx;
 }
+_RTE_CRYPTO_DEQ_DEF(caam_jr_dequeue_burst)
 
 /**
  * packet looks like:
@@ -1485,6 +1489,7 @@ caam_jr_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 
 	return num_tx;
 }
+_RTE_CRYPTO_ENQ_DEF(caam_jr_enqueue_burst)
 
 /* Release queue pair */
 static int
@@ -2333,8 +2338,10 @@ caam_jr_dev_init(const char *name,
 	dev->dev_ops = &caam_jr_ops;
 
 	/* register rx/tx burst functions for data path */
-	dev->dequeue_burst = caam_jr_dequeue_burst;
-	dev->enqueue_burst = caam_jr_enqueue_burst;
+	rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(caam_jr_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(caam_jr_dequeue_burst));
 
 	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 0eb1b0328e..60533cb5fc 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -12,6 +12,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index a54d81de46..67b880f2ca 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -34,6 +34,9 @@ struct ccp_pmd_init_params {
 #define CCP_CRYPTODEV_PARAM_MAX_NB_QP	("max_nb_queue_pairs")
 #define CCP_CRYPTODEV_PARAM_AUTH_OPT	("ccp_auth_opt")
 
+_RTE_CRYPTO_ENQ_PROTO(ccp_pmd_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(ccp_pmd_dequeue_burst);
+
 const char *ccp_pmd_valid_params[] = {
 	CCP_CRYPTODEV_PARAM_NAME,
 	CCP_CRYPTODEV_PARAM_SOCKET_ID,
@@ -140,6 +143,7 @@ ccp_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	qp->qp_stats.enqueued_count += enq_cnt;
 	return enq_cnt;
 }
+_RTE_CRYPTO_ENQ_DEF(ccp_pmd_enqueue_burst)
 
 static uint16_t
 ccp_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
@@ -176,6 +180,7 @@ ccp_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 
 	return nb_dequeued;
 }
+_RTE_CRYPTO_DEQ_DEF(ccp_pmd_dequeue_burst)
 
 /*
  * The set of PCI devices this driver supports
@@ -257,8 +262,10 @@ cryptodev_ccp_create(const char *name,
 	/* register rx/tx burst functions for data path */
 	dev->dev_ops = ccp_pmd_ops;
-	dev->enqueue_burst = ccp_pmd_enqueue_burst;
-	dev->dequeue_burst = ccp_pmd_dequeue_burst;
+	rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(ccp_pmd_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(ccp_pmd_dequeue_burst));
 
 	dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index d9b53128bc..4081bd778c 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -256,6 +256,7 @@ cn10k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 	return count + i;
 }
+_RTE_CRYPTO_ENQ_DEF(cn10k_cpt_enqueue_burst)
 
 static inline void
 cn10k_cpt_sec_post_process(struct rte_crypto_op *cop,
@@ -414,12 +415,15 @@ cn10k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 	return i;
 }
+_RTE_CRYPTO_DEQ_DEF(cn10k_cpt_dequeue_burst)
 
 void
 cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev)
 {
-	dev->enqueue_burst = cn10k_cpt_enqueue_burst;
-	dev->dequeue_burst = cn10k_cpt_dequeue_burst;
+	rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(cn10k_cpt_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(cn10k_cpt_dequeue_burst));
 
 	rte_mb();
 }
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
index 198e9ea5bd..05b30e6d0b 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.h
@@ -10,6 +10,9 @@
 
 extern struct rte_cryptodev_ops cn10k_cpt_ops;
 
+_RTE_CRYPTO_ENQ_PROTO(cn10k_cpt_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(cn10k_cpt_dequeue_burst);
+
 void cn10k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
 
 #endif /* _CN10K_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index 1d567bf188..c51d5d55c6 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 #include "cnxk_cryptodev.h"
 #include "cnxk_ipsec.h"
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 97fbf780fe..6a15974d6f 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -175,6 +175,7 @@ cn9k_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 	return count;
 }
+_RTE_CRYPTO_ENQ_DEF(cn9k_cpt_enqueue_burst)
 
 static inline void
 cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
@@ -299,11 +300,15 @@ cn9k_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
 
 	return i;
 }
+_RTE_CRYPTO_DEQ_DEF(cn9k_cpt_dequeue_burst)
+
 
 void
 cn9k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev)
 {
-	dev->enqueue_burst = cn9k_cpt_enqueue_burst;
-	dev->dequeue_burst = cn9k_cpt_dequeue_burst;
+	rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(cn9k_cpt_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(cn9k_cpt_dequeue_burst));
 
 	rte_mb();
 }
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
index d042d18474..4e6bfe6971 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.h
@@ -9,6 +9,9 @@
 
 extern struct rte_cryptodev_ops cn9k_cpt_ops;
 
+_RTE_CRYPTO_ENQ_PROTO(cn9k_cpt_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(cn9k_cpt_dequeue_burst);
+
 void cn9k_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
 
 #endif /* _CN9K_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
index ab37f9c43b..7db388b3d6 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_capabilities.c
@@ -4,6 +4,7 @@
 
 #include
 #include
+#include
 
 #include "roc_api.h"
 
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_sec.c b/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
index 8d04d4b575..293d1a18fe 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_sec.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 #include "cnxk_cryptodev_capabilities.h"
 #include "cnxk_cryptodev_sec.h"
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index bf69c61916..26c00b2c3d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -59,6 +59,9 @@
 
 static uint8_t cryptodev_driver_id;
 
+_RTE_CRYPTO_ENQ_PROTO(dpaa2_sec_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(dpaa2_sec_dequeue_burst);
+
 #ifdef RTE_LIB_SECURITY
 static inline int
 build_proto_compound_sg_fd(dpaa2_sec_session *sess,
@@ -1524,6 +1527,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	dpaa2_qp->tx_vq.err_pkts += nb_ops;
 	return num_tx;
 }
+_RTE_CRYPTO_ENQ_DEF(dpaa2_sec_enqueue_burst)
 
 #ifdef RTE_LIB_SECURITY
 static inline struct rte_crypto_op *
@@ -1727,6 +1731,8 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 
 	return num_rx;
 }
+_RTE_CRYPTO_DEQ_DEF(dpaa2_sec_dequeue_burst)
+
 
 /** Release queue pair */
 static int
 dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
@@ -3881,8 +3887,10 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 	cryptodev->driver_id = cryptodev_driver_id;
 	cryptodev->dev_ops = &crypto_ops;
 
-	cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
-	cryptodev->dequeue_burst = dpaa2_sec_dequeue_burst;
+	rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id,
+		_RTE_CRYPTO_ENQ_FUNC(dpaa2_sec_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id,
+		_RTE_CRYPTO_DEQ_FUNC(dpaa2_sec_dequeue_burst));
 
 	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 3d53746ef1..6e998c589b 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -47,6 +47,9 @@
 
 static uint8_t cryptodev_driver_id;
 
+_RTE_CRYPTO_ENQ_PROTO(dpaa_sec_enqueue_burst);
+_RTE_CRYPTO_DEQ_PROTO(dpaa_sec_dequeue_burst);
+
 static int
 dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
 
@@ -1916,6 +1919,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 
 	return num_tx;
 }
+_RTE_CRYPTO_ENQ_DEF(dpaa_sec_enqueue_burst)
 
 static uint16_t
 dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1940,6 +1944,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 
 	return num_rx;
 }
+_RTE_CRYPTO_DEQ_DEF(dpaa_sec_dequeue_burst)
 
 /** Release queue pair */
 static int
@@ -3365,8 +3370,10 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
 	cryptodev->driver_id = cryptodev_driver_id;
 	cryptodev->dev_ops = &crypto_ops;
 
-	cryptodev->enqueue_burst = dpaa_sec_enqueue_burst;
-
cryptodev->dequeue_burst = dpaa_sec_dequeue_burst; + rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(dpaa_sec_enqueue_burst)); + rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(dpaa_sec_dequeue_burst)); cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_HW_ACCELERATED | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c index d6f927417a..9cf79b323e 100644 --- a/drivers/crypto/kasumi/rte_kasumi_pmd.c +++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c @@ -19,6 +19,9 @@ static uint8_t cryptodev_driver_id; +_RTE_CRYPTO_ENQ_PROTO(kasumi_pmd_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(kasumi_pmd_dequeue_burst); + /** Get xform chain order. */ static enum kasumi_operation kasumi_get_mode(const struct rte_crypto_sym_xform *xform) @@ -508,6 +511,7 @@ kasumi_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops; return enqueued_ops; } +_RTE_CRYPTO_ENQ_DEF(kasumi_pmd_enqueue_burst) static uint16_t kasumi_pmd_dequeue_burst(void *queue_pair, @@ -523,6 +527,7 @@ kasumi_pmd_dequeue_burst(void *queue_pair, return nb_dequeued; } +_RTE_CRYPTO_DEQ_DEF(kasumi_pmd_dequeue_burst) static int cryptodev_kasumi_remove(struct rte_vdev_device *vdev); @@ -545,8 +550,10 @@ cryptodev_kasumi_create(const char *name, dev->dev_ops = rte_kasumi_pmd_ops; /* Register RX/TX burst functions for data path. 
*/ - dev->dequeue_burst = kasumi_pmd_dequeue_burst; - dev->enqueue_burst = kasumi_pmd_enqueue_burst; + rte_crypto_set_enq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(kasumi_pmd_enqueue_burst)); + rte_crypto_set_deq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(kasumi_pmd_dequeue_burst)); dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index b3d5200ca3..ec054d0863 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -39,6 +39,9 @@ int mlx5_crypto_logtype; uint8_t mlx5_crypto_driver_id; +_RTE_CRYPTO_ENQ_PROTO(mlx5_crypto_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(mlx5_crypto_dequeue_burst); + const struct rte_cryptodev_capabilities mlx5_crypto_caps[] = { { /* AES XTS */ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, @@ -523,6 +526,7 @@ mlx5_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, rte_wmb(); return nb_ops; } +_RTE_CRYPTO_ENQ_DEF(mlx5_crypto_enqueue_burst) static __rte_noinline void mlx5_crypto_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op) @@ -576,6 +580,7 @@ mlx5_crypto_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, } return i; } +_RTE_CRYPTO_DEQ_DEF(mlx5_crypto_dequeue_burst) static void mlx5_crypto_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp) @@ -1041,8 +1046,10 @@ mlx5_crypto_dev_probe(struct rte_device *dev) DRV_LOG(INFO, "Crypto device %s was created successfully.", ibv->name); crypto_dev->dev_ops = &mlx5_crypto_ops; - crypto_dev->dequeue_burst = mlx5_crypto_dequeue_burst; - crypto_dev->enqueue_burst = mlx5_crypto_enqueue_burst; + rte_crypto_set_enq_burst_fn(crypto_dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(mlx5_crypto_enqueue_burst)); + rte_crypto_set_deq_burst_fn(crypto_dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(mlx5_crypto_dequeue_burst)); crypto_dev->feature_flags = MLX5_CRYPTO_FEATURE_FLAGS; crypto_dev->driver_id 
= mlx5_crypto_driver_id; priv = crypto_dev->data->dev_private; diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd.c b/drivers/crypto/mvsam/rte_mrvl_pmd.c index a72642a772..7c8f3fdde4 100644 --- a/drivers/crypto/mvsam/rte_mrvl_pmd.c +++ b/drivers/crypto/mvsam/rte_mrvl_pmd.c @@ -22,6 +22,9 @@ static uint8_t cryptodev_driver_id; +_RTE_CRYPTO_ENQ_PROTO(mrvl_crypto_pmd_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(mrvl_crypto_pmd_dequeue_burst); + struct mrvl_pmd_init_params { struct rte_cryptodev_pmd_init_params common; uint32_t max_nb_sessions; @@ -981,6 +984,7 @@ mrvl_crypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, qp->stats.enqueued_count += to_enq_sec + to_enq_crp; return consumed; } +_RTE_CRYPTO_ENQ_DEF(mrvl_crypto_pmd_enqueue_burst) /** * Dequeue burst. @@ -1046,6 +1050,7 @@ mrvl_crypto_pmd_dequeue_burst(void *queue_pair, qp->stats.dequeued_count += nb_ops; return nb_ops; } +_RTE_CRYPTO_DEQ_DEF(mrvl_crypto_pmd_dequeue_burst) /** * Create a new crypto device. @@ -1077,8 +1082,10 @@ cryptodev_mrvl_crypto_create(const char *name, dev->dev_ops = rte_mrvl_crypto_pmd_ops; /* Register rx/tx burst functions for data path. 
*/ - dev->enqueue_burst = mrvl_crypto_pmd_enqueue_burst; - dev->dequeue_burst = mrvl_crypto_pmd_dequeue_burst; + rte_crypto_set_enq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(mrvl_crypto_pmd_enqueue_burst)); + rte_crypto_set_deq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(mrvl_crypto_pmd_dequeue_burst)); dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c index f8b7edcd69..291b1e5983 100644 --- a/drivers/crypto/nitrox/nitrox_sym.c +++ b/drivers/crypto/nitrox/nitrox_sym.c @@ -68,6 +68,9 @@ static const struct rte_driver nitrox_rte_sym_drv = { .alias = nitrox_sym_drv_name }; +_RTE_CRYPTO_ENQ_PROTO(nitrox_sym_dev_enq_burst); +_RTE_CRYPTO_DEQ_PROTO(nitrox_sym_dev_deq_burst); + static int nitrox_sym_dev_qp_release(struct rte_cryptodev *cdev, uint16_t qp_id); @@ -677,6 +680,7 @@ nitrox_sym_dev_enq_burst(void *queue_pair, struct rte_crypto_op **ops, return cnt; } +_RTE_CRYPTO_ENQ_DEF(nitrox_sym_dev_enq_burst) static int nitrox_deq_single_op(struct nitrox_qp *qp, struct rte_crypto_op **op_ptr) @@ -726,6 +730,7 @@ nitrox_sym_dev_deq_burst(void *queue_pair, struct rte_crypto_op **ops, return cnt; } +_RTE_CRYPTO_DEQ_DEF(nitrox_sym_dev_deq_burst) static struct rte_cryptodev_ops nitrox_cryptodev_ops = { .dev_configure = nitrox_sym_dev_config, @@ -769,8 +774,10 @@ nitrox_sym_pmd_create(struct nitrox_device *ndev) ndev->rte_sym_dev.name = cdev->data->name; cdev->driver_id = nitrox_sym_drv_id; cdev->dev_ops = &nitrox_cryptodev_ops; - cdev->enqueue_burst = nitrox_sym_dev_enq_burst; - cdev->dequeue_burst = nitrox_sym_dev_deq_burst; + rte_crypto_set_enq_burst_fn(cdev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(nitrox_sym_dev_enq_burst)); + rte_crypto_set_deq_burst_fn(cdev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(nitrox_sym_dev_deq_burst)); cdev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_HW_ACCELERATED |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c index fe3ca25a0c..7a6a7fadfe 100644 --- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c +++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c @@ -6,6 +6,7 @@ #include #include #include +#include #include "nitrox_sym_reqmgr.h" #include "nitrox_logs.h" diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c index f9935d52cc..9a4c0fa66e 100644 --- a/drivers/crypto/null/null_crypto_pmd.c +++ b/drivers/crypto/null/null_crypto_pmd.c @@ -11,6 +11,9 @@ static uint8_t cryptodev_driver_id; +_RTE_CRYPTO_ENQ_PROTO(null_crypto_pmd_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(null_crypto_pmd_dequeue_burst); + /** verify and set session parameters */ int null_crypto_set_session_parameters( @@ -137,6 +140,7 @@ null_crypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, qp->qp_stats.enqueue_err_count++; return i; } +_RTE_CRYPTO_ENQ_DEF(null_crypto_pmd_enqueue_burst) /** Dequeue burst */ static uint16_t @@ -153,6 +157,7 @@ null_crypto_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, return nb_dequeued; } +_RTE_CRYPTO_DEQ_DEF(null_crypto_pmd_dequeue_burst) /** Create crypto device */ static int @@ -172,8 +177,10 @@ cryptodev_null_create(const char *name, dev->dev_ops = null_crypto_pmd_ops; /* register rx/tx burst functions for data path */ - dev->dequeue_burst = null_crypto_pmd_dequeue_burst; - dev->enqueue_burst = null_crypto_pmd_enqueue_burst; + rte_crypto_set_enq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(null_crypto_pmd_enqueue_burst)); + rte_crypto_set_deq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(null_crypto_pmd_dequeue_burst)); dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c index ab335c6a62..8b5f32821a 100644 
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c +++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c @@ -12,6 +12,7 @@ #include #include #include +#include #include "otx_cryptodev_hw_access.h" #include "otx_cryptodev_mbox.h" diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c index 9b5bde53f8..9d607f35ad 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c @@ -687,12 +687,14 @@ otx_cpt_enqueue_asym(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) { return otx_cpt_pkt_enqueue(qptr, ops, nb_ops, OP_TYPE_ASYM); } +_RTE_CRYPTO_ENQ_DEF(otx_cpt_enqueue_asym) static uint16_t otx_cpt_enqueue_sym(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) { return otx_cpt_pkt_enqueue(qptr, ops, nb_ops, OP_TYPE_SYM); } +_RTE_CRYPTO_ENQ_DEF(otx_cpt_enqueue_sym) static __rte_always_inline void submit_request_to_sso(struct ssows *ws, uintptr_t req, @@ -1019,12 +1021,14 @@ otx_cpt_dequeue_asym(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) { return otx_cpt_pkt_dequeue(qptr, ops, nb_ops, OP_TYPE_ASYM); } +_RTE_CRYPTO_DEQ_DEF(otx_cpt_dequeue_asym) static uint16_t otx_cpt_dequeue_sym(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) { return otx_cpt_pkt_dequeue(qptr, ops, nb_ops, OP_TYPE_SYM); } +_RTE_CRYPTO_DEQ_DEF(otx_cpt_dequeue_sym) uintptr_t __rte_hot otx_crypto_adapter_dequeue(uintptr_t get_work1) @@ -1151,11 +1155,15 @@ otx_cpt_dev_create(struct rte_cryptodev *c_dev) c_dev->dev_ops = &cptvf_ops; if (c_dev->feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) { - c_dev->enqueue_burst = otx_cpt_enqueue_sym; - c_dev->dequeue_burst = otx_cpt_dequeue_sym; + rte_crypto_set_enq_burst_fn(c_dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(otx_cpt_enqueue_sym)); + rte_crypto_set_deq_burst_fn(c_dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(otx_cpt_dequeue_sym)); } else { - c_dev->enqueue_burst = otx_cpt_enqueue_asym; - c_dev->dequeue_burst = otx_cpt_dequeue_asym; 
+ rte_crypto_set_enq_burst_fn(c_dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(otx_cpt_enqueue_asym)); + rte_crypto_set_deq_burst_fn(c_dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(otx_cpt_dequeue_asym)); } /* Save dev private data */ diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.h b/drivers/crypto/octeontx/otx_cryptodev_ops.h index f234f16970..72ab287b5b 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_ops.h +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.h @@ -14,6 +14,11 @@ int otx_cpt_dev_create(struct rte_cryptodev *c_dev); +_RTE_CRYPTO_ENQ_PROTO(otx_cpt_enqueue_sym); +_RTE_CRYPTO_DEQ_PROTO(otx_cpt_dequeue_sym); +_RTE_CRYPTO_ENQ_PROTO(otx_cpt_enqueue_asym); +_RTE_CRYPTO_DEQ_PROTO(otx_cpt_dequeue_asym); + __rte_internal uint16_t __rte_hot otx_crypto_adapter_enqueue(void *port, struct rte_crypto_op *op); diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c index 812515fc1b..263e09879d 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c @@ -3,6 +3,7 @@ */ #include #include +#include #include "otx2_cryptodev.h" #include "otx2_cryptodev_hw_access.h" diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index 723804347f..a646c8f3ef 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -33,6 +33,9 @@ static uint64_t otx2_fpm_iova[CPT_EC_ID_PMAX]; /* Forward declarations */ +_RTE_CRYPTO_ENQ_PROTO(otx2_cpt_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(otx2_cpt_dequeue_burst); + static int otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id); @@ -826,6 +829,7 @@ otx2_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) return count; } +_RTE_CRYPTO_ENQ_DEF(otx2_cpt_enqueue_burst) static __rte_always_inline void otx2_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req, @@ -1096,12 +1100,15 @@ 
otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops) return nb_completed; } +_RTE_CRYPTO_DEQ_DEF(otx2_cpt_dequeue_burst) void otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev) { - dev->enqueue_burst = otx2_cpt_enqueue_burst; - dev->dequeue_burst = otx2_cpt_dequeue_burst; + rte_crypto_set_enq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(otx2_cpt_enqueue_burst)); + rte_crypto_set_deq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(otx2_cpt_dequeue_burst)); rte_mb(); } diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c index f149366c2a..9401072760 100644 --- a/drivers/crypto/openssl/rte_openssl_pmd.c +++ b/drivers/crypto/openssl/rte_openssl_pmd.c @@ -20,6 +20,9 @@ static uint8_t cryptodev_driver_id; +_RTE_CRYPTO_ENQ_PROTO(openssl_pmd_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(openssl_pmd_dequeue_burst); + #if (OPENSSL_VERSION_NUMBER < 0x10100000L) static HMAC_CTX *HMAC_CTX_new(void) { @@ -2159,6 +2162,7 @@ openssl_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, qp->stats.enqueue_err_count++; return i; } +_RTE_CRYPTO_ENQ_DEF(openssl_pmd_enqueue_burst) /** Dequeue burst */ static uint16_t @@ -2175,6 +2179,7 @@ openssl_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, return nb_dequeued; } +_RTE_CRYPTO_DEQ_DEF(openssl_pmd_dequeue_burst) /** Create OPENSSL crypto device */ static int @@ -2195,8 +2200,10 @@ cryptodev_openssl_create(const char *name, dev->dev_ops = rte_openssl_pmd_ops; /* register rx/tx burst functions for data path */ - dev->dequeue_burst = openssl_pmd_dequeue_burst; - dev->enqueue_burst = openssl_pmd_enqueue_burst; + rte_crypto_set_enq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(openssl_pmd_enqueue_burst)); + rte_crypto_set_deq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(openssl_pmd_dequeue_burst)); dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | diff --git 
a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c index e91bb0d317..cbd7768f2c 100644 --- a/drivers/crypto/qat/qat_asym_pmd.c +++ b/drivers/crypto/qat/qat_asym_pmd.c @@ -13,6 +13,9 @@ uint8_t qat_asym_driver_id; +_RTE_CRYPTO_ENQ_PROTO(qat_asym_pmd_enqueue_op_burst); +_RTE_CRYPTO_DEQ_PROTO(qat_asym_pmd_dequeue_op_burst); + static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = { QAT_BASE_GEN1_ASYM_CAPABILITIES, RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() @@ -214,12 +217,14 @@ uint16_t qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, { return qat_enqueue_op_burst(qp, (void **)ops, nb_ops); } +_RTE_CRYPTO_ENQ_DEF(qat_asym_pmd_enqueue_op_burst) uint16_t qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { return qat_dequeue_op_burst(qp, (void **)ops, nb_ops); } +_RTE_CRYPTO_DEQ_DEF(qat_asym_pmd_dequeue_op_burst) /* An rte_driver is needed in the registration of both the device and the driver * with cryptodev. 
@@ -292,8 +297,10 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, cryptodev->driver_id = qat_asym_driver_id; cryptodev->dev_ops = &crypto_qat_ops; - cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst; - cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst; + rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(qat_asym_pmd_enqueue_op_burst)); + rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(qat_asym_pmd_dequeue_op_burst)); cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_HW_ACCELERATED | diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index efda921c05..b3ea51a246 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -20,6 +20,8 @@ #define MIXED_CRYPTO_MIN_FW_VER 0x04090000 uint8_t qat_sym_driver_id; +_RTE_CRYPTO_ENQ_PROTO(qat_sym_pmd_enqueue_op_burst); +_RTE_CRYPTO_DEQ_PROTO(qat_sym_pmd_dequeue_op_burst); static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = { QAT_BASE_GEN1_SYM_CAPABILITIES, @@ -319,6 +321,7 @@ qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, { return qat_enqueue_op_burst(qp, (void **)ops, nb_ops); } +_RTE_CRYPTO_ENQ_DEF(qat_sym_pmd_enqueue_op_burst) static uint16_t qat_sym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, @@ -326,6 +329,7 @@ qat_sym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, { return qat_dequeue_op_burst(qp, (void **)ops, nb_ops); } +_RTE_CRYPTO_DEQ_DEF(qat_sym_pmd_dequeue_op_burst) /* An rte_driver is needed in the registration of both the device and the driver * with cryptodev. 
@@ -399,8 +403,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, cryptodev->driver_id = qat_sym_driver_id; cryptodev->dev_ops = &crypto_qat_ops; - cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst; - cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst; + rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(qat_sym_pmd_enqueue_op_burst)); + rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(qat_sym_pmd_dequeue_op_burst)); cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_HW_ACCELERATED | diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c index 8284ac0b66..9df3d66df2 100644 --- a/drivers/crypto/snow3g/rte_snow3g_pmd.c +++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c @@ -18,6 +18,9 @@ static uint8_t cryptodev_driver_id; +_RTE_CRYPTO_ENQ_PROTO(snow3g_pmd_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(snow3g_pmd_dequeue_burst); + /** Get xform chain order. */ static enum snow3g_operation snow3g_get_mode(const struct rte_crypto_sym_xform *xform) @@ -520,6 +523,7 @@ snow3g_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops; return enqueued_ops; } +_RTE_CRYPTO_ENQ_DEF(snow3g_pmd_enqueue_burst) static uint16_t snow3g_pmd_dequeue_burst(void *queue_pair, @@ -535,6 +539,7 @@ snow3g_pmd_dequeue_burst(void *queue_pair, return nb_dequeued; } +_RTE_CRYPTO_DEQ_DEF(snow3g_pmd_dequeue_burst) static int cryptodev_snow3g_remove(struct rte_vdev_device *vdev); @@ -557,8 +562,10 @@ cryptodev_snow3g_create(const char *name, dev->dev_ops = rte_snow3g_pmd_ops; /* Register RX/TX burst functions for data path. 
*/ - dev->dequeue_burst = snow3g_pmd_dequeue_burst; - dev->enqueue_burst = snow3g_pmd_enqueue_burst; + rte_crypto_set_enq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(snow3g_pmd_enqueue_burst)); + rte_crypto_set_deq_burst_fn(dev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(snow3g_pmd_dequeue_burst)); dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c index 8faa39df4a..a76d014a35 100644 --- a/drivers/crypto/virtio/virtio_cryptodev.c +++ b/drivers/crypto/virtio/virtio_cryptodev.c @@ -731,8 +731,10 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev, cryptodev->driver_id = cryptodev_virtio_driver_id; cryptodev->dev_ops = &virtio_crypto_dev_ops; - cryptodev->enqueue_burst = virtio_crypto_pkt_tx_burst; - cryptodev->dequeue_burst = virtio_crypto_pkt_rx_burst; + rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_ENQ_FUNC(virtio_crypto_pkt_tx_burst)); + rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id, + _RTE_CRYPTO_DEQ_FUNC(virtio_crypto_pkt_rx_burst)); cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | @@ -773,8 +775,8 @@ virtio_crypto_dev_uninit(struct rte_cryptodev *cryptodev) } cryptodev->dev_ops = NULL; - cryptodev->enqueue_burst = NULL; - cryptodev->dequeue_burst = NULL; + rte_crypto_set_enq_burst_fn(cryptodev->data->dev_id, NULL); + rte_crypto_set_deq_burst_fn(cryptodev->data->dev_id, NULL); /* release control queue */ virtio_crypto_queue_release(hw->cvq); diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h index 215bce7863..2ca8c35434 100644 --- a/drivers/crypto/virtio/virtio_cryptodev.h +++ b/drivers/crypto/virtio/virtio_cryptodev.h @@ -63,4 +63,6 @@ uint16_t virtio_crypto_pkt_rx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts, uint16_t nb_pkts); 
+_RTE_CRYPTO_ENQ_PROTO(virtio_crypto_pkt_tx_burst); +_RTE_CRYPTO_DEQ_PROTO(virtio_crypto_pkt_rx_burst); #endif /* _VIRTIO_CRYPTODEV_H_ */ diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c index a65524a306..c96bb541a2 100644 --- a/drivers/crypto/virtio/virtio_rxtx.c +++ b/drivers/crypto/virtio/virtio_rxtx.c @@ -454,6 +454,7 @@ virtio_crypto_pkt_rx_burst(void *tx_queue, struct rte_crypto_op **rx_pkts, return nb_rx; } +_RTE_CRYPTO_DEQ_DEF(virtio_crypto_pkt_rx_burst) uint16_t virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts, @@ -525,3 +526,4 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts, return nb_tx; } +_RTE_CRYPTO_ENQ_DEF(virtio_crypto_pkt_tx_burst) diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c index d4b343a7af..19d1670dad 100644 --- a/drivers/crypto/zuc/rte_zuc_pmd.c +++ b/drivers/crypto/zuc/rte_zuc_pmd.c @@ -16,6 +16,9 @@ static uint8_t cryptodev_driver_id; +_RTE_CRYPTO_ENQ_PROTO(zuc_pmd_enqueue_burst); +_RTE_CRYPTO_DEQ_PROTO(zuc_pmd_dequeue_burst); + /** Get xform chain order. */ static enum zuc_operation zuc_get_mode(const struct rte_crypto_sym_xform *xform) @@ -444,6 +447,7 @@ zuc_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, qp->qp_stats.enqueue_err_count += nb_ops - enqueued_ops; return enqueued_ops; } +_RTE_CRYPTO_ENQ_DEF(zuc_pmd_enqueue_burst) static uint16_t zuc_pmd_dequeue_burst(void *queue_pair, @@ -459,6 +463,7 @@ zuc_pmd_dequeue_burst(void *queue_pair, return nb_dequeued; } +_RTE_CRYPTO_DEQ_DEF(zuc_pmd_dequeue_burst) static int cryptodev_zuc_remove(struct rte_vdev_device *vdev); @@ -505,8 +510,10 @@ cryptodev_zuc_create(const char *name, dev->dev_ops = rte_zuc_pmd_ops; /* Register RX/TX burst functions for data path. 
 */
-	dev->dequeue_burst = zuc_pmd_dequeue_burst;
-	dev->enqueue_burst = zuc_pmd_enqueue_burst;
+	rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(zuc_pmd_enqueue_burst));
+	rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(zuc_pmd_dequeue_burst));

 	internals = dev->data->dev_private;
 	internals->mb_mgr = mb_mgr;

From patchwork Sun Aug 29 12:51:37 2021
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 97517
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal
Date: Sun, 29 Aug 2021 18:21:37 +0530
Message-ID: <20210829125139.2173235-7-gakhil@marvell.com>
In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH 6/8] crypto/scheduler: rename enq-deq functions

The scheduler PMD has four variants which all use the same names for their
enqueue and dequeue functions. With the new framework of datapath APIs this
causes multiple definitions of the same function. Hence the function names
are updated to indicate the variant each belongs to.
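To see why the rename is needed, consider a minimal, hypothetical sketch (this is illustrative code, not the actual DPDK macros): if a wrapper macro emits a non-static function whose symbol is derived from the wrapped function's name, two scheduler variants that both call their handler `schedule_enqueue` would emit the same global symbol and fail at link time. Per-variant prefixes (`fo_`, `mc_`, `psd_`, `rr_`) keep the symbols unique:

```c
#include <assert.h>

/*
 * Hypothetical stand-in for a wrapper-emitting macro: it defines a
 * NON-static wrapper whose symbol name is derived from the wrapped
 * function's name. Two translation units both wrapping a function named
 * "schedule_enqueue" would both define schedule_enqueue_wrapper and
 * collide at link time; variant prefixes avoid the clash.
 */
#define ENQ_WRAPPER_DEF(fn) \
	unsigned short fn##_wrapper(void *qp, void **ops, unsigned short n) \
	{ return fn(qp, ops, n); }

/* failover variant: pretend every op was enqueued */
static unsigned short
schedule_fo_enqueue(void *qp, void **ops, unsigned short nb_ops)
{
	(void)qp; (void)ops;
	return nb_ops;
}
ENQ_WRAPPER_DEF(schedule_fo_enqueue) /* emits schedule_fo_enqueue_wrapper */

/* multicore variant: pretend one op did not fit */
static unsigned short
schedule_mc_enqueue(void *qp, void **ops, unsigned short nb_ops)
{
	(void)qp; (void)ops;
	return nb_ops ? (unsigned short)(nb_ops - 1) : 0;
}
ENQ_WRAPPER_DEF(schedule_mc_enqueue) /* emits schedule_mc_enqueue_wrapper */
```

Because the wrapped functions stay `static`, only the generated wrapper symbols are global, so the rename is all that is needed to keep the four variants linkable in one library.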
Signed-off-by: Akhil Goyal Tested-by: Rebecca Troy Acked-by: Fan Zhang --- drivers/crypto/scheduler/scheduler_failover.c | 20 +++++++++---------- .../crypto/scheduler/scheduler_multicore.c | 18 ++++++++--------- .../scheduler/scheduler_pkt_size_distr.c | 20 +++++++++---------- .../crypto/scheduler/scheduler_roundrobin.c | 20 +++++++++---------- 4 files changed, 39 insertions(+), 39 deletions(-) diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c index 844312dd1b..88cc8f05f7 100644 --- a/drivers/crypto/scheduler/scheduler_failover.c +++ b/drivers/crypto/scheduler/scheduler_failover.c @@ -37,7 +37,7 @@ failover_worker_enqueue(struct scheduler_worker *worker, } static uint16_t -schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) +schedule_fo_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { struct fo_scheduler_qp_ctx *qp_ctx = ((struct scheduler_qp_ctx *)qp)->private_qp_ctx; @@ -60,14 +60,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) static uint16_t -schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops, +schedule_fo_enqueue_ordering(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { struct rte_ring *order_ring = ((struct scheduler_qp_ctx *)qp)->order_ring; uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring, nb_ops); - uint16_t nb_ops_enqd = schedule_enqueue(qp, ops, + uint16_t nb_ops_enqd = schedule_fo_enqueue(qp, ops, nb_ops_to_enq); scheduler_order_insert(order_ring, ops, nb_ops_enqd); @@ -76,7 +76,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops, } static uint16_t -schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) +schedule_fo_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) { struct fo_scheduler_qp_ctx *qp_ctx = ((struct scheduler_qp_ctx *)qp)->private_qp_ctx; @@ -108,13 +108,13 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) } static 
uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_fo_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;

-	schedule_dequeue(qp, ops, nb_ops);
+	schedule_fo_dequeue(qp, ops, nb_ops);

 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }

@@ -145,11 +145,11 @@ scheduler_start(struct rte_cryptodev *dev)
 	}

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = schedule_enqueue_ordering;
-		dev->dequeue_burst = schedule_dequeue_ordering;
+		dev->enqueue_burst = schedule_fo_enqueue_ordering;
+		dev->dequeue_burst = schedule_fo_dequeue_ordering;
 	} else {
-		dev->enqueue_burst = schedule_enqueue;
-		dev->dequeue_burst = schedule_dequeue;
+		dev->enqueue_burst = schedule_fo_enqueue;
+		dev->dequeue_burst = schedule_fo_dequeue;
 	}

 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
index 1e2e8dbf9f..bf97343e52 100644
--- a/drivers/crypto/scheduler/scheduler_multicore.c
+++ b/drivers/crypto/scheduler/scheduler_multicore.c
@@ -36,7 +36,7 @@ struct mc_scheduler_qp_ctx {
 };

 static uint16_t
-schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_mc_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct mc_scheduler_qp_ctx *mc_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -64,14 +64,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }

 static uint16_t
-schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_mc_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 	uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,
 			nb_ops);
-	uint16_t nb_ops_enqd = schedule_enqueue(qp, ops,
+	uint16_t nb_ops_enqd = schedule_mc_enqueue(qp, ops,
 			nb_ops_to_enq);
 scheduler_order_insert(order_ring, ops, nb_ops_enqd);
@@ -81,7 +81,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,

 static uint16_t
-schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_mc_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct mc_scheduler_qp_ctx *mc_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -107,7 +107,7 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }

 static uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_mc_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
@@ -253,11 +253,11 @@ scheduler_start(struct rte_cryptodev *dev)
 				sched_ctx->wc_pool[i]);

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_enqueue_ordering;
-		dev->dequeue_burst = &schedule_dequeue_ordering;
+		dev->enqueue_burst = &schedule_mc_enqueue_ordering;
+		dev->dequeue_burst = &schedule_mc_dequeue_ordering;
 	} else {
-		dev->enqueue_burst = &schedule_enqueue;
-		dev->dequeue_burst = &schedule_dequeue;
+		dev->enqueue_burst = &schedule_mc_enqueue;
+		dev->dequeue_burst = &schedule_mc_dequeue;
 	}

 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
index 57e330a744..b025ab9736 100644
--- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
@@ -34,7 +34,7 @@ struct psd_schedule_op {
 };

 static uint16_t
-schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_dist_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct scheduler_qp_ctx *qp_ctx = qp;
 	struct psd_scheduler_qp_ctx *psd_qp_ctx = qp_ctx->private_qp_ctx;
@@ -171,14 +171,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }

 static uint16_t
-schedule_enqueue_ordering(void *qp,
struct rte_crypto_op **ops,
+schedule_dist_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 	uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,
 			nb_ops);
-	uint16_t nb_ops_enqd = schedule_enqueue(qp, ops,
+	uint16_t nb_ops_enqd = schedule_dist_enqueue(qp, ops,
 			nb_ops_to_enq);

 	scheduler_order_insert(order_ring, ops, nb_ops_enqd);
@@ -187,7 +187,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 }

 static uint16_t
-schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_dist_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct psd_scheduler_qp_ctx *qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -224,13 +224,13 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }

 static uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_dist_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;

-	schedule_dequeue(qp, ops, nb_ops);
+	schedule_dist_dequeue(qp, ops, nb_ops);

 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }

@@ -281,11 +281,11 @@ scheduler_start(struct rte_cryptodev *dev)
 	}

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_enqueue_ordering;
-		dev->dequeue_burst = &schedule_dequeue_ordering;
+		dev->enqueue_burst = &schedule_dist_enqueue_ordering;
+		dev->dequeue_burst = &schedule_dist_dequeue_ordering;
 	} else {
-		dev->enqueue_burst = &schedule_enqueue;
-		dev->dequeue_burst = &schedule_dequeue;
+		dev->enqueue_burst = &schedule_dist_enqueue;
+		dev->dequeue_burst = &schedule_dist_dequeue;
 	}

 	return 0;
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
index bc4a632106..95e34401ce 100644
--- a/drivers/crypto/scheduler/scheduler_roundrobin.c
+++
b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -17,7 +17,7 @@ struct rr_scheduler_qp_ctx {
 };

 static uint16_t
-schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_rr_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct rr_scheduler_qp_ctx *rr_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -43,14 +43,14 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }

 static uint16_t
-schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_rr_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;
 	uint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,
 			nb_ops);
-	uint16_t nb_ops_enqd = schedule_enqueue(qp, ops,
+	uint16_t nb_ops_enqd = schedule_rr_enqueue(qp, ops,
 			nb_ops_to_enq);

 	scheduler_order_insert(order_ring, ops, nb_ops_enqd);
@@ -60,7 +60,7 @@ schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,

 static uint16_t
-schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
+schedule_rr_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct rr_scheduler_qp_ctx *rr_qp_ctx =
 			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
@@ -98,13 +98,13 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 }

 static uint16_t
-schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
+schedule_rr_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
 {
 	struct rte_ring *order_ring =
 			((struct scheduler_qp_ctx *)qp)->order_ring;

-	schedule_dequeue(qp, ops, nb_ops);
+	schedule_rr_dequeue(qp, ops, nb_ops);

 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }

@@ -130,11 +130,11 @@ scheduler_start(struct rte_cryptodev *dev)
 	uint16_t i;

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_enqueue_ordering;
-		dev->dequeue_burst = &schedule_dequeue_ordering;
+		dev->enqueue_burst =
&schedule_rr_enqueue_ordering;
+		dev->dequeue_burst = &schedule_rr_dequeue_ordering;
 	} else {
-		dev->enqueue_burst = &schedule_enqueue;
-		dev->dequeue_burst = &schedule_dequeue;
+		dev->enqueue_burst = &schedule_rr_enqueue;
+		dev->dequeue_burst = &schedule_rr_dequeue;
 	}

 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {

From patchwork Sun Aug 29 12:51:38 2021
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 97518
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal
Date: Sun, 29 Aug 2021 18:21:38 +0530
Message-ID: <20210829125139.2173235-8-gakhil@marvell.com>
In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH 7/8] crypto/scheduler: update for new datapath framework

PMD is updated to use the new API for all enqueue and dequeue paths.
Signed-off-by: Akhil Goyal
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 drivers/crypto/scheduler/scheduler_failover.c | 23 +++++++++++++++----
 .../crypto/scheduler/scheduler_multicore.c    | 22 ++++++++++++++----
 .../scheduler/scheduler_pkt_size_distr.c      | 22 ++++++++++++++----
 .../crypto/scheduler/scheduler_roundrobin.c   | 22 ++++++++++++++----
 4 files changed, 72 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
index 88cc8f05f7..0ccebfa6d1 100644
--- a/drivers/crypto/scheduler/scheduler_failover.c
+++ b/drivers/crypto/scheduler/scheduler_failover.c
@@ -3,6 +3,7 @@
  */

 #include
+#include
 #include

 #include "rte_cryptodev_scheduler_operations.h"
@@ -13,6 +14,11 @@
 #define NB_FAILOVER_WORKERS	2
 #define WORKER_SWITCH_MASK	(0x01)

+_RTE_CRYPTO_ENQ_PROTO(schedule_fo_enqueue);
+_RTE_CRYPTO_DEQ_PROTO(schedule_fo_dequeue);
+_RTE_CRYPTO_ENQ_PROTO(schedule_fo_enqueue_ordering);
+_RTE_CRYPTO_DEQ_PROTO(schedule_fo_dequeue_ordering);
+
 struct fo_scheduler_qp_ctx {
 	struct scheduler_worker primary_worker;
 	struct scheduler_worker secondary_worker;
@@ -57,7 +63,7 @@ schedule_fo_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return enqueued_ops;
 }
-
+_RTE_CRYPTO_ENQ_DEF(schedule_fo_enqueue)

 static uint16_t
 schedule_fo_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -74,6 +80,7 @@ schedule_fo_enqueue_ordering(void *qp, struct rte_crypto_op **ops,

 	return nb_ops_enqd;
 }
+_RTE_CRYPTO_ENQ_DEF(schedule_fo_enqueue_ordering)

 static uint16_t
 schedule_fo_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
@@ -106,6 +113,7 @@ schedule_fo_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return nb_deq_ops + nb_deq_ops2;
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_fo_dequeue)

 static uint16_t
 schedule_fo_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -118,6 +126,7 @@ schedule_fo_dequeue_ordering(void *qp, struct rte_crypto_op **ops,

 	return
 scheduler_order_drain(order_ring, ops, nb_ops);
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_fo_dequeue_ordering)

 static int
 worker_attach(__rte_unused struct rte_cryptodev *dev,
@@ -145,11 +154,15 @@ scheduler_start(struct rte_cryptodev *dev)
 	}

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = schedule_fo_enqueue_ordering;
-		dev->dequeue_burst = schedule_fo_dequeue_ordering;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_fo_enqueue_ordering));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(schedule_fo_dequeue_ordering));
 	} else {
-		dev->enqueue_burst = schedule_fo_enqueue;
-		dev->dequeue_burst = schedule_fo_dequeue;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_fo_enqueue));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(schedule_fo_dequeue));
 	}

 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
index bf97343e52..4c145dae88 100644
--- a/drivers/crypto/scheduler/scheduler_multicore.c
+++ b/drivers/crypto/scheduler/scheduler_multicore.c
@@ -4,6 +4,7 @@

 #include
 #include
+#include
 #include

 #include "rte_cryptodev_scheduler_operations.h"
@@ -16,6 +17,11 @@

 #define CRYPTO_OP_STATUS_BIT_COMPLETE	0x80

+_RTE_CRYPTO_ENQ_PROTO(schedule_mc_enqueue);
+_RTE_CRYPTO_DEQ_PROTO(schedule_mc_dequeue);
+_RTE_CRYPTO_ENQ_PROTO(schedule_mc_enqueue_ordering);
+_RTE_CRYPTO_DEQ_PROTO(schedule_mc_dequeue_ordering);
+
 /** multi-core scheduler context */
 struct mc_scheduler_ctx {
 	uint32_t num_workers; /**< Number of workers polling */
@@ -62,6 +68,7 @@ schedule_mc_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return processed_ops;
 }
+_RTE_CRYPTO_ENQ_DEF(schedule_mc_enqueue)

 static uint16_t
 schedule_mc_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -78,6 +85,7 @@ schedule_mc_enqueue_ordering(void *qp, struct rte_crypto_op **ops,

 	return
 nb_ops_enqd;
 }
+_RTE_CRYPTO_ENQ_DEF(schedule_mc_enqueue_ordering)

 static uint16_t
@@ -105,6 +113,7 @@ schedule_mc_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return processed_ops;
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_mc_dequeue)

 static uint16_t
 schedule_mc_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -130,6 +139,7 @@ schedule_mc_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
 	rte_ring_dequeue_finish(order_ring, nb_ops_to_deq);
 	return nb_ops_to_deq;
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_mc_dequeue_ordering)

 static int
 worker_attach(__rte_unused struct rte_cryptodev *dev,
@@ -253,11 +263,15 @@ scheduler_start(struct rte_cryptodev *dev)
 				sched_ctx->wc_pool[i]);

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_mc_enqueue_ordering;
-		dev->dequeue_burst = &schedule_mc_dequeue_ordering;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_mc_enqueue_ordering));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(schedule_mc_dequeue_ordering));
 	} else {
-		dev->enqueue_burst = &schedule_mc_enqueue;
-		dev->dequeue_burst = &schedule_mc_dequeue;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_mc_enqueue));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(schedule_mc_dequeue));
 	}

 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
index b025ab9736..811f30ca0d 100644
--- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
@@ -3,6 +3,7 @@
  */

 #include
+#include
 #include

 #include "rte_cryptodev_scheduler_operations.h"
@@ -14,6 +15,11 @@
 #define SECONDARY_WORKER_IDX	1
 #define NB_PKT_SIZE_WORKERS	2

+_RTE_CRYPTO_ENQ_PROTO(schedule_dist_enqueue);
+_RTE_CRYPTO_DEQ_PROTO(schedule_dist_dequeue);
+_RTE_CRYPTO_ENQ_PROTO(schedule_dist_enqueue_ordering);
+_RTE_CRYPTO_DEQ_PROTO(schedule_dist_dequeue_ordering);
+
 /** pkt size based scheduler context */
 struct psd_scheduler_ctx {
 	uint32_t threshold;
@@ -169,6 +175,7 @@ schedule_dist_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return processed_ops_pri + processed_ops_sec;
 }
+_RTE_CRYPTO_ENQ_DEF(schedule_dist_enqueue)

 static uint16_t
 schedule_dist_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -185,6 +192,7 @@ schedule_dist_enqueue_ordering(void *qp, struct rte_crypto_op **ops,

 	return nb_ops_enqd;
 }
+_RTE_CRYPTO_ENQ_DEF(schedule_dist_enqueue_ordering)

 static uint16_t
 schedule_dist_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
@@ -222,6 +230,7 @@ schedule_dist_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return nb_deq_ops_pri + nb_deq_ops_sec;
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_dist_dequeue)

 static uint16_t
 schedule_dist_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -234,6 +243,7 @@ schedule_dist_dequeue_ordering(void *qp, struct rte_crypto_op **ops,

 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_dist_dequeue_ordering)

 static int
 worker_attach(__rte_unused struct rte_cryptodev *dev,
@@ -281,11 +291,15 @@ scheduler_start(struct rte_cryptodev *dev)
 	}

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_dist_enqueue_ordering;
-		dev->dequeue_burst = &schedule_dist_dequeue_ordering;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_dist_enqueue_ordering));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(schedule_dist_dequeue_ordering));
 	} else {
-		dev->enqueue_burst = &schedule_dist_enqueue;
-		dev->dequeue_burst = &schedule_dist_dequeue;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_dist_enqueue));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(schedule_dist_dequeue));
 	}

 	return 0;
diff --git
 a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
index 95e34401ce..139e227cfe 100644
--- a/drivers/crypto/scheduler/scheduler_roundrobin.c
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -3,11 +3,17 @@
  */

 #include
+#include
 #include

 #include "rte_cryptodev_scheduler_operations.h"
 #include "scheduler_pmd_private.h"

+_RTE_CRYPTO_ENQ_PROTO(schedule_rr_enqueue);
+_RTE_CRYPTO_DEQ_PROTO(schedule_rr_dequeue);
+_RTE_CRYPTO_ENQ_PROTO(schedule_rr_enqueue_ordering);
+_RTE_CRYPTO_DEQ_PROTO(schedule_rr_dequeue_ordering);
+
 struct rr_scheduler_qp_ctx {
 	struct scheduler_worker workers[RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
 	uint32_t nb_workers;
@@ -41,6 +47,7 @@ schedule_rr_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return processed_ops;
 }
+_RTE_CRYPTO_ENQ_DEF(schedule_rr_enqueue)

 static uint16_t
 schedule_rr_enqueue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -57,6 +64,7 @@ schedule_rr_enqueue_ordering(void *qp, struct rte_crypto_op **ops,

 	return nb_ops_enqd;
 }
+_RTE_CRYPTO_ENQ_DEF(schedule_rr_enqueue_ordering)

 static uint16_t
@@ -96,6 +104,7 @@ schedule_rr_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)

 	return nb_deq_ops;
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_rr_dequeue)

 static uint16_t
 schedule_rr_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
@@ -108,6 +117,7 @@ schedule_rr_dequeue_ordering(void *qp, struct rte_crypto_op **ops,

 	return scheduler_order_drain(order_ring, ops, nb_ops);
 }
+_RTE_CRYPTO_DEQ_DEF(schedule_rr_dequeue_ordering)

 static int
 worker_attach(__rte_unused struct rte_cryptodev *dev,
@@ -130,11 +140,15 @@ scheduler_start(struct rte_cryptodev *dev)
 	uint16_t i;

 	if (sched_ctx->reordering_enabled) {
-		dev->enqueue_burst = &schedule_rr_enqueue_ordering;
-		dev->dequeue_burst = &schedule_rr_dequeue_ordering;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_rr_enqueue_ordering));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+
			_RTE_CRYPTO_DEQ_FUNC(schedule_rr_dequeue_ordering));
 	} else {
-		dev->enqueue_burst = &schedule_rr_enqueue;
-		dev->dequeue_burst = &schedule_rr_dequeue;
+		rte_crypto_set_enq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_ENQ_FUNC(schedule_rr_enqueue));
+		rte_crypto_set_deq_burst_fn(dev->data->dev_id,
+			_RTE_CRYPTO_DEQ_FUNC(schedule_rr_dequeue));
 	}

 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {

From patchwork Sun Aug 29 12:51:39 2021
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 97519
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal
Date: Sun, 29 Aug 2021 18:21:39 +0530
Message-ID: <20210829125139.2173235-9-gakhil@marvell.com>
In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
Subject: [dpdk-dev] [PATCH 8/8] cryptodev: move device specific structures

The device specific structures - rte_cryptodev and rte_cryptodev_data -
are moved to cryptodev_pmd.h to hide them from the applications.
Signed-off-by: Akhil Goyal
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 lib/cryptodev/cryptodev_pmd.h      | 62 ++++++++++++++++++++++++
 lib/cryptodev/rte_cryptodev_core.h | 76 ------------------------------
 2 files changed, 62 insertions(+), 76 deletions(-)

diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index d40e5cee94..00c159c2db 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -56,6 +56,68 @@ struct rte_cryptodev_pmd_init_params {
 	unsigned int max_nb_queue_pairs;
 };

+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_cryptodev_data {
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t socket_id;
+	/**< Socket ID where memory is allocated */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+
+	__extension__
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	struct rte_mempool *session_pool;
+	/**< Session memory pool */
+	void **queue_pairs;
+	/**< Array of pointers to queue pairs. */
+	uint16_t nb_queue_pairs;
+	/**< Number of device queue pairs. */
+
+	void *dev_private;
+	/**< PMD-specific private data */
+} __rte_cache_aligned;
+
+
+/** @internal The data structure associated with each crypto device.
 */
+struct rte_cryptodev {
+	struct rte_cryptodev_data *data;
+	/**< Pointer to device data */
+	struct rte_cryptodev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	uint64_t feature_flags;
+	/**< Feature flags exposes HW/SW features for the given device */
+	struct rte_device *device;
+	/**< Backing device */
+
+	uint8_t driver_id;
+	/**< Crypto driver identifier*/
+
+	struct rte_cryptodev_cb_list link_intr_cbs;
+	/**< User application callback for interrupts if present */
+
+	void *security_ctx;
+	/**< Context for security ops */
+
+	__extension__
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+
+	struct rte_cryptodev_cb_rcu *enq_cbs;
+	/**< User application callback for pre enqueue processing */
+
+	struct rte_cryptodev_cb_rcu *deq_cbs;
+	/**< User application callback for post dequeue processing */
+} __rte_cache_aligned;
+
 /** Global structure used for maintaining state of allocated crypto devices */
 struct rte_cryptodev_global {
 	struct rte_cryptodev *devs;	/**< Device information array */
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index ec38f70e0c..88506e8a7b 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -16,15 +16,6 @@
 * Applications should not use these directly.
 *
 */
-
-typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
-		struct rte_crypto_op **ops, uint16_t nb_ops);
-/**< Dequeue processed packets from queue pair of a device. */
-
-typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
-		struct rte_crypto_op **ops, uint16_t nb_ops);
-/**< Enqueue packets for processing on queue pair of a device. */
-
 typedef uint16_t (*rte_crypto_dequeue_burst_t)(uint8_t dev_id, uint8_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops);
@@ -44,73 +35,6 @@ struct rte_cryptodev_api {

 extern struct rte_cryptodev_api *rte_cryptodev_api;

-/**
- * @internal
- * The data part, with no function pointers, associated with each device.
- *
- * This structure is safe to place in shared memory to be common among
- * different processes in a multi-process configuration.
- */
-struct rte_cryptodev_data {
-	uint8_t dev_id;
-	/**< Device ID for this instance */
-	uint8_t socket_id;
-	/**< Socket ID where memory is allocated */
-	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
-	/**< Unique identifier name */
-
-	__extension__
-	uint8_t dev_started : 1;
-	/**< Device state: STARTED(1)/STOPPED(0) */
-
-	struct rte_mempool *session_pool;
-	/**< Session memory pool */
-	void **queue_pairs;
-	/**< Array of pointers to queue pairs. */
-	uint16_t nb_queue_pairs;
-	/**< Number of device queue pairs. */
-
-	void *dev_private;
-	/**< PMD-specific private data */
-} __rte_cache_aligned;
-
-
-/** @internal The data structure associated with each crypto device. */
-struct rte_cryptodev {
-	dequeue_pkt_burst_t dequeue_burst;
-	/**< Pointer to PMD receive function. */
-	enqueue_pkt_burst_t enqueue_burst;
-	/**< Pointer to PMD transmit function. */
-
-	struct rte_cryptodev_data *data;
-	/**< Pointer to device data */
-	struct rte_cryptodev_ops *dev_ops;
-	/**< Functions exported by PMD */
-	uint64_t feature_flags;
-	/**< Feature flags exposes HW/SW features for the given device */
-	struct rte_device *device;
-	/**< Backing device */
-
-	uint8_t driver_id;
-	/**< Crypto driver identifier*/
-
-	struct rte_cryptodev_cb_list link_intr_cbs;
-	/**< User application callback for interrupts if present */
-
-	void *security_ctx;
-	/**< Context for security ops */
-
-	__extension__
-	uint8_t attached : 1;
-	/**< Flag indicating the device is attached */
-
-	struct rte_cryptodev_cb_rcu *enq_cbs;
-	/**< User application callback for pre enqueue processing */
-
-	struct rte_cryptodev_cb_rcu *deq_cbs;
-	/**< User application callback for post dequeue processing */
-} __rte_cache_aligned;
-
 /**
  * The pool of rte_cryptodev structures.
  */