From patchwork Mon Oct 19 02:57:21 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Gujjar, Abhinandan S"
X-Patchwork-Id: 81593
X-Patchwork-Delegate: gakhil@marvell.com
From: Abhinandan Gujjar
To: dev@dpdk.org, declan.doherty@intel.com, akhil.goyal@nxp.com,
 Honnappa.Nagarahalli@arm.com, konstantin.ananyev@intel.com
Cc: narender.vangati@intel.com, jerinj@marvell.com, abhinandan.gujjar@intel.com
Date: Mon, 19 Oct 2020 08:27:21 +0530
Message-Id: <1603076242-41883-2-git-send-email-abhinandan.gujjar@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To:
 <1603076242-41883-1-git-send-email-abhinandan.gujjar@intel.com>
References: <1603076242-41883-1-git-send-email-abhinandan.gujjar@intel.com>
Subject: [dpdk-dev] [v3 1/2] cryptodev: support enqueue callback functions
List-Id: DPDK patches and discussions

This patch adds APIs to add/remove callback functions. The callback
function will be called for each burst of crypto ops received on a given
crypto device queue pair.

Signed-off-by: Abhinandan Gujjar
---
 config/rte_config.h                            |   1 +
 lib/librte_cryptodev/meson.build               |   2 +-
 lib/librte_cryptodev/rte_cryptodev.c           | 201 +++++++++++++++++++++++++
 lib/librte_cryptodev/rte_cryptodev.h           | 153 ++++++++++++++++++-
 lib/librte_cryptodev/rte_cryptodev_version.map |   2 +
 5 files changed, 357 insertions(+), 2 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 03d90d7..e999d93 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -61,6 +61,7 @@
 /* cryptodev defines */
 #define RTE_CRYPTO_MAX_DEVS 64
 #define RTE_CRYPTODEV_NAME_LEN 64
+#define RTE_CRYPTO_CALLBACKS 1
 
 /* compressdev defines */
 #define RTE_COMPRESS_MAX_DEVS 64
diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
index c4c6b3b..8c5493f 100644
--- a/lib/librte_cryptodev/meson.build
+++ b/lib/librte_cryptodev/meson.build
@@ -9,4 +9,4 @@ headers = files('rte_cryptodev.h',
 	'rte_crypto.h',
 	'rte_crypto_sym.h',
 	'rte_crypto_asym.h')
-deps += ['kvargs', 'mbuf']
+deps += ['kvargs', 'mbuf', 'rcu']
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 3d95ac6..5ba774a 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -448,6 +448,10 @@ struct rte_cryptodev_sym_session_pool_private_data {
 	return 0;
 }
 
+#ifdef RTE_CRYPTO_CALLBACKS
+/* spinlock for crypto device enq callbacks */
+static rte_spinlock_t rte_cryptodev_enq_cb_lock = RTE_SPINLOCK_INITIALIZER;
+#endif
 
 const char *
 rte_cryptodev_get_feature_name(uint64_t flag)
@@ -1136,6 +1140,203 @@ struct rte_cryptodev *
 			socket_id);
 }
 
+#ifdef RTE_CRYPTO_CALLBACKS
+
+struct rte_cryptodev_cb *
+rte_cryptodev_add_enq_callback(uint8_t dev_id,
+			       uint16_t qp_id,
+			       rte_cryptodev_callback_fn cb_fn,
+			       void *cb_arg)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_cb *cb, *tail;
+	struct rte_cryptodev_enq_cb_rcu *list;
+	struct rte_rcu_qsbr *qsbr;
+	size_t size;
+
+	/* Max thread set to 1, as one DP thread accessing a queue-pair */
+	const uint32_t max_threads = 1;
+
+	if (!cb_fn)
+		return NULL;
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return NULL;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (qp_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+		return NULL;
+	}
+
+	rte_spinlock_lock(&rte_cryptodev_enq_cb_lock);
+	if (dev->enq_cbs == NULL) {
+		dev->enq_cbs = rte_zmalloc(NULL, sizeof(cb) *
+				dev->data->nb_queue_pairs, 0);
+		if (dev->enq_cbs == NULL) {
+			CDEV_LOG_ERR("Failed to allocate memory for callbacks");
+			rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+			rte_errno = ENOMEM;
+			return NULL;
+		}
+
+		list = rte_zmalloc(NULL, sizeof(*list), 0);
+		if (list == NULL) {
+			CDEV_LOG_ERR("Failed to allocate memory for list on "
+				"dev=%d, queue_pair_id=%d", dev_id, qp_id);
+			rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+			rte_errno = ENOMEM;
+			rte_free(dev->enq_cbs);
+			dev->enq_cbs = NULL;
+			return NULL;
+		}
+
+		/* Create RCU QSBR variable */
+		size = rte_rcu_qsbr_get_memsize(max_threads);
+		qsbr = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+		if (qsbr == NULL) {
+			CDEV_LOG_ERR("Failed to allocate memory for RCU on "
+				"dev=%d, queue_pair_id=%d", dev_id, qp_id);
+			rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+			rte_errno = ENOMEM;
+			rte_free(list);
+			rte_free(dev->enq_cbs);
+			dev->enq_cbs = NULL;
+			return NULL;
+		}
+
+		if (rte_rcu_qsbr_init(qsbr, max_threads)) {
+			CDEV_LOG_ERR("Failed to initialize RCU QSBR on "
+				"dev=%d, queue_pair_id=%d", dev_id, qp_id);
+			rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+			rte_free(qsbr);
+			rte_free(list);
+			rte_free(dev->enq_cbs);
+			dev->enq_cbs = NULL;
+			return NULL;
+		}
+
+		dev->enq_cbs[qp_id] = list;
+		list->qsbr = qsbr;
+	}
+
+	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
+	if (cb == NULL) {
+		CDEV_LOG_ERR("Failed to allocate memory for callback on "
+			"dev=%d, queue_pair_id=%d", dev_id, qp_id);
+		rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	cb->fn = cb_fn;
+	cb->arg = cb_arg;
+
+	/* Add the callbacks in fifo order. */
+	list = dev->enq_cbs[qp_id];
+	tail = list->next;
+	if (tail) {
+		while (tail->next)
+			tail = tail->next;
+		tail->next = cb;
+	} else
+		list->next = cb;
+
+	rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+
+	return cb;
+}
+
+int
+rte_cryptodev_remove_enq_callback(uint8_t dev_id,
+				  uint16_t qp_id,
+				  struct rte_cryptodev_cb *cb)
+{
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_cb **prev_cb, *curr_cb;
+	struct rte_cryptodev_enq_cb_rcu *list;
+	uint16_t qp;
+	int free_mem;
+	int ret;
+
+	free_mem = 1;
+	ret = -EINVAL;
+
+	if (!cb) {
+		CDEV_LOG_ERR("cb is NULL");
+		return ret;
+	}
+
+	if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+		CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
+		return ret;
+	}
+
+	dev = &rte_crypto_devices[dev_id];
+	if (qp_id >= dev->data->nb_queue_pairs) {
+		CDEV_LOG_ERR("Invalid queue_pair_id=%d", qp_id);
+		return ret;
+	}
+
+	list = dev->enq_cbs[qp_id];
+	if (list == NULL) {
+		CDEV_LOG_ERR("Callback list is NULL");
+		return ret;
+	}
+
+	if (list->qsbr == NULL) {
+		CDEV_LOG_ERR("RCU QSBR variable is NULL");
+		return ret;
+	}
+
+	rte_spinlock_lock(&rte_cryptodev_enq_cb_lock);
+	if (dev->enq_cbs == NULL) {
+		rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+		return ret;
+	}
+
+	prev_cb = &list->next;
+	for (; *prev_cb != NULL; prev_cb = &curr_cb->next) {
+		curr_cb = *prev_cb;
+		if (curr_cb == cb) {
+			/* Remove the user cb from the callback list. */
+			*prev_cb = curr_cb->next;
+			ret = 0;
+			break;
+		}
+	}
+
+	if (!ret) {
+		/* Call sync with invalid thread id as this is part of
+		 * control plane API
+		 */
+		rte_rcu_qsbr_synchronize(list->qsbr, RTE_QSBR_THRID_INVALID);
+		rte_free(cb);
+	}
+
+	if (list->next == NULL) {
+		rte_free(list->qsbr);
+		rte_free(list);
+		dev->enq_cbs[qp_id] = NULL;
+	}
+
+	for (qp = 0; qp < dev->data->nb_queue_pairs; qp++)
+		if (dev->enq_cbs[qp] != NULL) {
+			free_mem = 0;
+			break;
+		}
+
+	if (free_mem) {
+		rte_free(dev->enq_cbs);
+		dev->enq_cbs = NULL;
+	}
+
+	rte_spinlock_unlock(&rte_cryptodev_enq_cb_lock);
+
+	return ret;
+}
+#endif
 
 int
 rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 0935fd5..669746d 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -23,6 +23,7 @@
 #include "rte_dev.h"
 #include
 #include
+#include <rte_rcu_qsbr.h>
 
 #include "rte_cryptodev_trace_fp.h"
 
@@ -522,6 +523,34 @@ struct rte_cryptodev_qp_conf {
 	/**< The mempool for creating sess private data in sessionless mode */
 };
 
+#ifdef RTE_CRYPTO_CALLBACKS
+/**
+ * Function type used for pre processing crypto ops when enqueue burst is
+ * called.
+ *
+ * The callback function is called on enqueue burst immediately
+ * before the crypto ops are put onto the hardware queue for processing.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair in which ops are
+ *				to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				*rte_cryptodev_configure*.
+ * @param	ops		The address of an array of *nb_ops* pointers
+ *				to *rte_crypto_op* structures which contain
+ *				the crypto operations to be processed.
+ * @param	nb_ops		The number of operations to process.
+ * @param	user_param	The arbitrary user parameter passed in by the
+ *				application when the callback was originally
+ *				registered.
+ * @return			The number of ops to be enqueued to the
+ *				crypto device.
+ */
+typedef uint16_t (*rte_cryptodev_callback_fn)(uint16_t dev_id, uint16_t qp_id,
+		struct rte_crypto_op **ops, uint16_t nb_ops, void *user_param);
+#endif
+
 /**
  * Typedef for application callback function to be registered by application
  * software for notification of device events
@@ -822,7 +851,6 @@ struct rte_cryptodev_config {
 		enum rte_cryptodev_event_type event,
 		rte_cryptodev_cb_fn cb_fn, void *cb_arg);
-
 typedef uint16_t (*dequeue_pkt_burst_t)(void *qp,
 		struct rte_crypto_op **ops, uint16_t nb_ops);
 /**< Dequeue processed packets from queue pair of a device. */
@@ -839,6 +867,33 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 
 /** Structure to keep track of registered callbacks */
 TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
+#ifdef RTE_CRYPTO_CALLBACKS
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue pair on enqueue.
+ */
+struct rte_cryptodev_cb {
+	struct rte_cryptodev_cb *next;
+	/**< Pointer to next callback */
+	rte_cryptodev_callback_fn fn;
+	/**< Pointer to callback function */
+	void *arg;
+	/**< Pointer to argument */
+};
+
+/**
+ * @internal
+ * Structure used to hold information about the RCU for a queue pair.
+ */
+struct rte_cryptodev_enq_cb_rcu {
+	struct rte_cryptodev_cb *next;
+	/**< Pointer to next callback */
+	struct rte_rcu_qsbr *qsbr;
+	/**< RCU QSBR variable per queue pair */
+};
+#endif
+
 /** The data structure associated with each crypto device.
  */
 struct rte_cryptodev {
 	dequeue_pkt_burst_t dequeue_burst;
@@ -867,6 +922,11 @@ struct rte_cryptodev {
 	__extension__
 	uint8_t attached : 1;
 	/**< Flag indicating the device is attached */
+
+#ifdef RTE_CRYPTO_CALLBACKS
+	struct rte_cryptodev_enq_cb_rcu **enq_cbs;
+	/**< User application callback for pre enqueue processing */
+#endif
 } __rte_cache_aligned;
 
 void *
@@ -989,6 +1049,25 @@ struct rte_cryptodev_data {
 {
 	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
 
+#ifdef RTE_CRYPTO_CALLBACKS
+	if (unlikely(dev->enq_cbs != NULL && dev->enq_cbs[qp_id] != NULL)) {
+		struct rte_cryptodev_enq_cb_rcu *list;
+		struct rte_cryptodev_cb *cb;
+
+		list = dev->enq_cbs[qp_id];
+		cb = list->next;
+		rte_rcu_qsbr_thread_online(list->qsbr, 0);
+
+		do {
+			nb_ops = cb->fn(dev_id, qp_id, ops, nb_ops,
+					cb->arg);
+			cb = cb->next;
+		} while (cb != NULL);
+
+		rte_rcu_qsbr_thread_offline(list->qsbr, 0);
+	}
+#endif
+
 	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
 	return (*dev->enqueue_burst)(
 			dev->data->queue_pairs[qp_id], ops, nb_ops);
@@ -1730,6 +1809,78 @@ struct rte_crypto_raw_dp_ctx {
 rte_cryptodev_raw_dequeue_done(struct rte_crypto_raw_dp_ctx *ctx,
 		uint32_t n);
 
+#ifdef RTE_CRYPTO_CALLBACKS
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a user callback for a given crypto device and queue pair which will be
+ * called on crypto ops enqueue.
+ *
+ * This API configures a function to be called for each burst of crypto ops
+ * received on a given crypto device queue pair. The return value is a pointer
+ * that can be used later to remove the callback using
+ * rte_cryptodev_remove_enq_callback().
+ *
+ * Multiple functions are called in the order that they are added.
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair in which ops are
+ *				to be enqueued for processing.
+ *				The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				*rte_cryptodev_configure*.
+ * @param	cb_fn		The callback function
+ * @param	cb_arg		A generic pointer parameter which will be passed
+ *				to each invocation of the callback function on
+ *				this crypto device and queue pair.
+ *
+ * @return
+ *   NULL on error.
+ *   On success, a pointer value which can later be used to remove the
+ *   callback.
+ */
+__rte_experimental
+struct rte_cryptodev_cb *
+rte_cryptodev_add_enq_callback(uint8_t dev_id,
+			       uint16_t qp_id,
+			       rte_cryptodev_callback_fn cb_fn,
+			       void *cb_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Remove a user callback function for a given crypto device and queue pair.
+ *
+ * This function is used to remove callbacks that were added to a crypto
+ * device queue pair using rte_cryptodev_add_enq_callback().
+ *
+ * @param	dev_id		The identifier of the device.
+ * @param	qp_id		The index of the queue pair in which ops are
+ *				to be enqueued for processing. The value
+ *				must be in the range [0, nb_queue_pairs - 1]
+ *				previously supplied to
+ *				*rte_cryptodev_configure*.
+ * @param	cb		Pointer to user supplied callback created via
+ *				rte_cryptodev_add_enq_callback().
+ *
+ * @return
+ *   - 0: Success. Callback was removed.
+ *   - -EINVAL: The dev_id or the qp_id is out of range, or the callback
+ *     is NULL or not found for the crypto device queue pair.
+ */
+__rte_experimental
+int rte_cryptodev_remove_enq_callback(uint8_t dev_id,
+				      uint16_t qp_id,
+				      struct rte_cryptodev_cb *cb);
+
+#endif
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 7e4360f..5d8d6b0 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -101,6 +101,7 @@ EXPERIMENTAL {
 	rte_cryptodev_get_qp_status;
 
 	# added in 20.11
+	rte_cryptodev_add_enq_callback;
 	rte_cryptodev_configure_raw_dp_ctx;
 	rte_cryptodev_get_raw_dp_ctx_size;
 	rte_cryptodev_raw_dequeue;
@@ -109,4 +110,5 @@ EXPERIMENTAL {
 	rte_cryptodev_raw_enqueue;
 	rte_cryptodev_raw_enqueue_burst;
 	rte_cryptodev_raw_enqueue_done;
+	rte_cryptodev_remove_enq_callback;
 };