From patchwork Sun Aug 29 12:51:34 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Akhil Goyal
X-Patchwork-Id: 97514
X-Patchwork-Delegate: gakhil@marvell.com
From: Akhil Goyal
Date: Sun, 29 Aug 2021 18:21:34 +0530
Message-ID: <20210829125139.2173235-4-gakhil@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210829125139.2173235-1-gakhil@marvell.com>
References: <20210829125139.2173235-1-gakhil@marvell.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH 3/8] cryptodev: add helper functions for new datapath interface
List-Id: DPDK patches and discussions

Add helper functions and macros to help drivers transition to the new
datapath interface.

Signed-off-by: Akhil Goyal
Tested-by: Rebecca Troy
Acked-by: Fan Zhang
---
 lib/cryptodev/cryptodev_pmd.h | 246 ++++++++++++++++++++++++++++++++++
 lib/cryptodev/rte_cryptodev.c |  40 +++++-
 lib/cryptodev/version.map     |   4 +
 3 files changed, 289 insertions(+), 1 deletion(-)

diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index eeaea13a23..d40e5cee94 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -70,6 +70,13 @@ struct cryptodev_driver {
 	const struct rte_driver *driver;
 	uint8_t id;
 };
+/**
+ * @internal
+ * The pool of *rte_cryptodev* structures.
+ * The size of the pool
+ * is configured at compile-time in the file.
+ */
+extern struct rte_cryptodev rte_crypto_devices[];
+
 /**
  * Get the rte_cryptodev structure device pointer for the device. Assumes a
@@ -529,6 +536,245 @@ __rte_internal
 void
 rte_cryptodev_api_reset(struct rte_cryptodev_api *api);
+/**
+ * @internal
+ * Helper routine for cryptodev_dequeue_burst.
+ * Should be called as first thing on entrance to the PMD's
+ * rte_cryptodev_dequeue_burst implementation.
+ * Does necessary checks and returns pointer to the device queue pair.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ * @param qp_id
+ *   The index of the queue pair from which processed crypto ops will
+ *   be dequeued.
+ *
+ * @return
+ *   Pointer to device queue pair on success or NULL otherwise.
+ */
+__rte_internal
+static inline void *
+_rte_cryptodev_dequeue_prolog(uint8_t dev_id, uint8_t qp_id)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	return dev->data->queue_pairs[qp_id];
+}
+
+/**
+ * @internal
+ * Helper routine for crypto driver dequeue API.
+ * Should be called at exit from PMD's rte_cryptodev_dequeue_burst
+ * implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, tracing, etc.
+ *
+ * @param dev_id
+ *   The device identifier of the Crypto device.
+ * @param qp_id
+ *   The index of the queue pair from which to retrieve input crypto_ops.
+ * @param ops
+ *   The address of an array of pointers to *rte_crypto_op* structures that
+ *   have been retrieved from the device.
+ * @param nb_ops
+ *   The number of ops that were retrieved from the device.
+ *
+ * @return
+ *   The number of crypto ops effectively supplied to the *ops* array.
+ */
+__rte_internal
+static inline uint16_t
+_rte_cryptodev_dequeue_epilog(uint16_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+#ifdef RTE_CRYPTO_CALLBACKS
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+	if (unlikely(dev->deq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * callback was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->deq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		}
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+
+	return nb_ops;
+}
+#define _RTE_CRYPTO_DEQ_FUNC(fn)	_rte_crypto_deq_##fn
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD dequeue functions.
+ */
+#define _RTE_CRYPTO_DEQ_PROTO(fn) \
+	uint16_t _RTE_CRYPTO_DEQ_FUNC(fn)(uint8_t dev_id, uint8_t qp_id, \
+			struct rte_crypto_op **ops, uint16_t nb_ops)
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD dequeue functions.
+ */
+#define _RTE_CRYPTO_DEQ_DEF(fn) \
+_RTE_CRYPTO_DEQ_PROTO(fn) \
+{ \
+	void *qp = _rte_cryptodev_dequeue_prolog(dev_id, qp_id); \
+	if (qp == NULL) \
+		return 0; \
+	nb_ops = fn(qp, ops, nb_ops); \
+	return _rte_cryptodev_dequeue_epilog(dev_id, qp_id, ops, nb_ops); \
+}
+
+/**
+ * @internal
+ * Helper routine for cryptodev_enqueue_burst.
+ * Should be called as first thing on entrance to the PMD's
+ * rte_cryptodev_enqueue_burst implementation.
+ * Does necessary checks and returns pointer to cryptodev queue pair.
+ *
+ * @param dev_id
+ *   The device identifier of the crypto device.
+ * @param qp_id
+ *   The index of the queue pair in which packets will be enqueued.
+ * @param ops
+ *   The address of an array of pointers to *rte_crypto_op* structures that
+ *   will be enqueued to the device.
+ * @param nb_ops
+ *   The number of ops that will be sent to the device.
+ *
+ * @return
+ *   Pointer to device queue pair on success or NULL otherwise.
+ */
+__rte_internal
+static inline void *
+_rte_cryptodev_enqueue_prolog(uint8_t dev_id, uint8_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops)
+{
+	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->enq_cbs != NULL)) {
+		struct rte_cryptodev_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * callback was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		list = &dev->enq_cbs[qp_id];
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
+
+		while (cb != NULL) {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		}
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+	return dev->data->queue_pairs[qp_id];
+}
+
+#define _RTE_CRYPTO_ENQ_FUNC(fn)	_rte_crypto_enq_##fn
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD enqueue functions.
+ */
+#define _RTE_CRYPTO_ENQ_PROTO(fn) \
+	uint16_t _RTE_CRYPTO_ENQ_FUNC(fn)(uint8_t dev_id, uint8_t qp_id, \
+			struct rte_crypto_op **ops, uint16_t nb_ops)
+
+/**
+ * @internal
+ * Helper macro to create new API wrappers for existing PMD enqueue functions.
+ */
+#define _RTE_CRYPTO_ENQ_DEF(fn) \
+_RTE_CRYPTO_ENQ_PROTO(fn) \
+{ \
+	void *qp = _rte_cryptodev_enqueue_prolog(dev_id, qp_id, ops, nb_ops); \
+	if (qp == NULL) \
+		return 0; \
+	return fn(qp, ops, nb_ops); \
+}
+
+/**
+ * @internal
+ * Helper routine to get enqueue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the Crypto device.
+ *
+ * @return
+ *   The function if valid else NULL
+ */
+__rte_internal
+rte_crypto_enqueue_burst_t
+rte_crypto_get_enq_burst_fn(uint8_t dev_id);
+
+/**
+ * @internal
+ * Helper routine to get dequeue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the Crypto device.
+ *
+ * @return
+ *   The function if valid else NULL
+ */
+__rte_internal
+rte_crypto_dequeue_burst_t
+rte_crypto_get_deq_burst_fn(uint8_t dev_id);
+
+/**
+ * @internal
+ * Helper routine to set enqueue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the Crypto device.
+ *
+ * @return
+ *   0 Success.
+ *   -EINVAL Failure if dev_id or fn are invalid.
+ */
+__rte_internal
+int
+rte_crypto_set_enq_burst_fn(uint8_t dev_id, rte_crypto_enqueue_burst_t fn);
+
+/**
+ * @internal
+ * Helper routine to set dequeue burst function of a given device.
+ *
+ * @param dev_id
+ *   The device identifier of the Crypto device.
+ *
+ * @return
+ *   0 Success.
+ *   -EINVAL Failure if dev_id or fn are invalid.
+ */
+__rte_internal
+int
+rte_crypto_set_deq_burst_fn(uint8_t dev_id, rte_crypto_dequeue_burst_t fn);
+
+
 static inline void *
 get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
 		uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 26f8390668..4ab82d21d0 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -44,7 +44,7 @@
 
 static uint8_t nb_drivers;
 
-static struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
 
 struct rte_cryptodev *rte_cryptodevs = rte_crypto_devices;
 
@@ -1270,6 +1270,44 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 			socket_id);
 }
 
+rte_crypto_enqueue_burst_t
+rte_crypto_get_enq_burst_fn(uint8_t dev_id)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_cryptodev_api[dev_id].enqueue_burst;
+}
+
+rte_crypto_dequeue_burst_t
+rte_crypto_get_deq_burst_fn(uint8_t dev_id)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	return rte_cryptodev_api[dev_id].dequeue_burst;
+}
+
+int
+rte_crypto_set_enq_burst_fn(uint8_t dev_id, rte_crypto_enqueue_burst_t fn)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS || fn == NULL)
+		return -EINVAL;
+	rte_cryptodev_api[dev_id].enqueue_burst = fn;
+	return 0;
+}
+
+int
+rte_crypto_set_deq_burst_fn(uint8_t dev_id, rte_crypto_dequeue_burst_t fn)
+{
+	if (dev_id >= RTE_CRYPTO_MAX_DEVS || fn == NULL)
+		return -EINVAL;
+	rte_cryptodev_api[dev_id].dequeue_burst = fn;
+	return 0;
+}
+
 struct rte_cryptodev_cb *
 rte_cryptodev_add_enq_callback(uint8_t dev_id,
 			       uint16_t qp_id,
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 050089ae55..b64384cc05 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -116,6 +116,10 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	rte_crypto_get_deq_burst_fn;
+	rte_crypto_get_enq_burst_fn;
+	rte_crypto_set_deq_burst_fn;
+	rte_crypto_set_enq_burst_fn;
 	rte_cryptodev_allocate_driver;
 	rte_cryptodev_api_reset;
 	rte_cryptodev_pmd_allocate;