From patchwork Fri Aug 28 12:58:12 2020
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 76139
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, roy.fan.zhang@intel.com, Piotr Bronowski
Date: Fri, 28 Aug 2020 13:58:12 +0100
Message-Id: <20200828125815.21614-2-roy.fan.zhang@intel.com>
In-Reply-To: <20200828125815.21614-1-roy.fan.zhang@intel.com>
References: <20200818162833.20219-1-roy.fan.zhang@intel.com> <20200828125815.21614-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v7 1/4] cryptodev: add crypto data-path service APIs

This patch adds data-path service APIs for enqueue and dequeue operations to cryptodev. The APIs support flexible user-defined enqueue and dequeue behaviors and operation modes.
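A minimal sketch of the intended single-operation call flow, assembled from the APIs declared in this patch and the unit test added later in this series. The function name, the 256-byte context buffer and the busy-poll loop are illustrative assumptions, not part of the API:

/* Sketch: one AEAD operation through the data-path service. */
#include <rte_cryptodev.h>

static int
dp_service_single_op(uint8_t dev_id, uint16_t qp_id,
		struct rte_cryptodev_sym_session *sess,
		struct rte_crypto_vec *data, uint16_t n_data_vecs,
		union rte_crypto_sym_ofs ofs,
		struct rte_crypto_data *iv, struct rte_crypto_data *digest,
		struct rte_crypto_data *aad, void *user_cookie)
{
	/* Buffer must hold at least rte_cryptodev_get_dp_service_ctx_data_size()
	 * bytes; 256 is an application-chosen assumption here.
	 */
	uint8_t ctx_buf[256];
	struct rte_crypto_dp_service_ctx *ctx = (void *)ctx_buf;
	union rte_cryptodev_session_ctx sess_ctx;
	void *out_cookie = NULL;
	int ctx_size, ret;

	ctx_size = rte_cryptodev_get_dp_service_ctx_data_size(dev_id);
	if (ctx_size < 0 || ctx_size > (int)sizeof(ctx_buf))
		return -1;

	/* Bind the service context to one queue pair, session and service type. */
	sess_ctx.crypto_sess = sess;
	ret = rte_cryptodev_dp_configure_service(dev_id, qp_id,
			RTE_CRYPTO_DP_SYM_AEAD, RTE_CRYPTO_OP_WITH_SESSION,
			sess_ctx, ctx, 0);
	if (ret < 0)
		return ret;

	/* Stage one job; the device only starts on the "submit done" call. */
	ret = rte_cryptodev_dp_submit_single_job(ctx, data, n_data_vecs, ofs,
			iv, digest, aad, user_cookie);
	if (ret < 0)
		return ret;
	rte_cryptodev_dp_submit_done(ctx, 1);

	/* Poll until the job is dequeued: 1 = success, 0 = failed, -1 = not ready. */
	do {
		ret = rte_cryptodev_dp_sym_dequeue_single_job(ctx, &out_cookie);
	} while (ret == -1);
	rte_cryptodev_dp_dequeue_done(ctx, 1);

	return (ret == 1 && out_cookie == user_cookie) ? 0 : -1;
}

The burst-oriented variants, rte_cryptodev_dp_sym_submit_vec() and rte_cryptodev_dp_sym_dequeue(), follow the same configure/submit/submit-done/dequeue/dequeue-done pattern.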
Signed-off-by: Fan Zhang Signed-off-by: Piotr Bronowski Signed-off-by: Fan Zhang Signed-off-by: Piotr Bronowski --- lib/librte_cryptodev/rte_crypto.h | 9 + lib/librte_cryptodev/rte_crypto_sym.h | 44 ++- lib/librte_cryptodev/rte_cryptodev.c | 45 +++ lib/librte_cryptodev/rte_cryptodev.h | 335 +++++++++++++++++- lib/librte_cryptodev/rte_cryptodev_pmd.h | 47 ++- .../rte_cryptodev_version.map | 10 + 6 files changed, 481 insertions(+), 9 deletions(-) diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h index fd5ef3a87..f009be9af 100644 --- a/lib/librte_cryptodev/rte_crypto.h +++ b/lib/librte_cryptodev/rte_crypto.h @@ -438,6 +438,15 @@ rte_crypto_op_attach_asym_session(struct rte_crypto_op *op, return 0; } +/** Crypto data-path service types */ +enum rte_crypto_dp_service { + RTE_CRYPTO_DP_SYM_CIPHER_ONLY = 0, + RTE_CRYPTO_DP_SYM_AUTH_ONLY, + RTE_CRYPTO_DP_SYM_CHAIN, + RTE_CRYPTO_DP_SYM_AEAD, + RTE_CRYPTO_DP_N_SERVICE +}; + #ifdef __cplusplus } #endif diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h index f29c98051..518e4111b 100644 --- a/lib/librte_cryptodev/rte_crypto_sym.h +++ b/lib/librte_cryptodev/rte_crypto_sym.h @@ -50,6 +50,18 @@ struct rte_crypto_sgl { uint32_t num; }; +/** + * Crypto IO Data without length info. + * Supposed to be used to pass input/output data buffers with lengths + * defined when creating crypto session. + */ +struct rte_crypto_data { + /** virtual address of the data buffer */ + void *base; + /** IOVA of the data buffer */ + rte_iova_t iova; +}; + /** * Synchronous operation descriptor. * Supposed to be used with CPU crypto API call. @@ -57,12 +69,32 @@ struct rte_crypto_sgl { struct rte_crypto_sym_vec { /** array of SGL vectors */ struct rte_crypto_sgl *sgl; - /** array of pointers to IV */ - void **iv; - /** array of pointers to AAD */ - void **aad; - /** array of pointers to digest */ - void **digest; + + union { + + /* Supposed to be used with CPU crypto API call. */ + struct { + /** array of pointers to IV */ + void **iv; + /** array of pointers to AAD */ + void **aad; + /** array of pointers to digest */ + void **digest; + }; + + /* Supposed to be used with rte_cryptodev_dp_sym_submit_vec() + * call. 
+ */ + struct { + /** vector to IV */ + struct rte_crypto_data *iv_vec; + /** vecor to AAD */ + struct rte_crypto_data *aad_vec; + /** vector to Digest */ + struct rte_crypto_data *digest_vec; + }; + }; + /** * array of statuses for each operation: * - 0 on success diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index 1dd795bcb..8a28511f9 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -1914,6 +1914,51 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec); } +int +rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id) +{ + struct rte_cryptodev *dev; + int32_t size = sizeof(struct rte_crypto_dp_service_ctx); + int32_t priv_size; + + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) + return -1; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + + if (*dev->dev_ops->get_drv_ctx_size == NULL || + !(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE)) { + return -1; + } + + priv_size = (*dev->dev_ops->get_drv_ctx_size)(dev); + if (priv_size < 0) + return -1; + + return RTE_ALIGN_CEIL((size + priv_size), 8); +} + +int +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_get_qp_status(dev_id, qp_id)) + return -1; + + dev = rte_cryptodev_pmd_get_dev(dev_id); + if (!(dev->feature_flags & RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE) + || dev->dev_ops->configure_service == NULL) + return -1; + + return (*dev->dev_ops->configure_service)(dev, qp_id, ctx, + service_type, sess_type, session_ctx, is_update); +} + /** Initialise rte_crypto_op mempool element */ static void rte_crypto_op_init(struct rte_mempool *mempool, diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h index 7b3ebc20f..9c97846f3 100644 --- a/lib/librte_cryptodev/rte_cryptodev.h +++ b/lib/librte_cryptodev/rte_cryptodev.h @@ -466,7 +466,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum, /**< Support symmetric session-less operations */ #define RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA (1ULL << 23) /**< Support operations on data which is not byte aligned */ - +#define RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE (1ULL << 24) +/**< Support accelerated specific raw data as input */ /** * Get the name of a crypto device feature flag @@ -1351,6 +1352,338 @@ rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); +/** + * Get the size of the data-path service context for all registered drivers. + * + * @param dev_id The device identifier. + * + * @return + * - If the device supports data-path service, return the context size. + * - If the device does not support the data-plane service, return -1. + */ +__rte_experimental +int +rte_cryptodev_get_dp_service_ctx_data_size(uint8_t dev_id); + +/** + * Union of different crypto session types, including session-less xform + * pointer. + */ +union rte_cryptodev_session_ctx { + struct rte_cryptodev_sym_session *crypto_sess; + struct rte_crypto_sym_xform *xform; + struct rte_security_session *sec_sess; +}; + +/** + * Submit a data vector into device queue but the driver will not start + * processing until rte_cryptodev_dp_sym_submit_vec() is called. 
+ * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param vec The array of job vectors. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param opaque The array of opaque data for dequeue. + * @return + * - The number of jobs successfully submitted. + */ +typedef uint32_t (*cryptodev_dp_sym_submit_vec_t)( + void *qp, uint8_t *service_data, struct rte_crypto_sym_vec *vec, + union rte_crypto_sym_ofs ofs, void **opaque); + +/** + * Submit single job into device queue but the driver will not start + * processing until rte_cryptodev_dp_sym_submit_vec() is called. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param data The buffer vector. + * @param n_data_vecs Number of buffer vectors. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param iv IV data. + * @param digest Digest data. + * @param aad AAD data. + * @param opaque The opaque data for dequeue. + * @return + * - On success return 0. + * - On failure return negative integer. + */ +typedef int (*cryptodev_dp_submit_single_job_t)( + void *qp, uint8_t *service_data, struct rte_crypto_vec *data, + uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs, + struct rte_crypto_data *iv, struct rte_crypto_data *digest, + struct rte_crypto_data *aad, void *opaque); + +/** + * Inform the queue pair to start processing or finish dequeuing all + * submitted/dequeued jobs. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param n The total number of submitted jobs. + */ +typedef void (*cryptodev_dp_sym_operation_done_t)(void *qp, + uint8_t *service_data, uint32_t n); + +/** + * Typedef that the user provided for the driver to get the dequeue count. + * The function may return a fixed number or the number parsed from the opaque + * data stored in the first processed job. + * + * @param opaque Dequeued opaque data. + **/ +typedef uint32_t (*rte_cryptodev_get_dequeue_count_t)(void *opaque); + +/** + * Typedef that the user provided to deal with post dequeue operation, such + * as filling status. + * + * @param opaque Dequeued opaque data. In case + * RTE_CRYPTO_HW_DP_FF_GET_OPAQUE_ARRAY bit is + * set, this value will be the opaque data stored + * in the specific processed jobs referenced by + * index, otherwise it will be the opaque data + * stored in the first processed job in the burst. + * @param index Index number of the processed job. + * @param is_op_success Driver filled operation status. + **/ +typedef void (*rte_cryptodev_post_dequeue_t)(void *opaque, uint32_t index, + uint8_t is_op_success); + +/** + * Dequeue symmetric crypto processing of user provided data. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param get_dequeue_count User provided callback function to + * obtain dequeue count. + * @param post_dequeue User provided callback function to + * post-process a dequeued operation. + * @param out_opaque Opaque pointer array to be retrieve from + * device queue. In case of + * *is_opaque_array* is set there should + * be enough room to store all opaque data. + * @param is_opaque_array Set 1 if every dequeued job will be + * written the opaque data into + * *out_opaque* array. + * @param n_success_jobs Driver written value to specific the + * total successful operations count. + * + * @return + * - Returns number of dequeued packets. 
+ */ +typedef uint32_t (*cryptodev_dp_sym_dequeue_t)(void *qp, uint8_t *service_data, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, uint8_t is_opaque_array, + uint32_t *n_success_jobs); + +/** + * Dequeue symmetric crypto processing of user provided data. + * + * @param qp Driver specific queue pair data. + * @param service_data Driver specific service data. + * @param out_opaque Opaque pointer to be retrieve from + * device queue. + * + * @return + * - 1 if the job is dequeued and the operation is a success. + * - 0 if the job is dequeued but the operation is failed. + * - -1 if no job is dequeued. + */ +typedef int (*cryptodev_dp_sym_dequeue_single_job_t)( + void *qp, uint8_t *service_data, void **out_opaque); + +/** + * Context data for asynchronous crypto process. + */ +struct rte_crypto_dp_service_ctx { + void *qp_data; + + union { + /* Supposed to be used for symmetric crypto service */ + struct { + cryptodev_dp_submit_single_job_t submit_single_job; + cryptodev_dp_sym_submit_vec_t submit_vec; + cryptodev_dp_sym_operation_done_t submit_done; + cryptodev_dp_sym_dequeue_t dequeue_opaque; + cryptodev_dp_sym_dequeue_single_job_t dequeue_single; + cryptodev_dp_sym_operation_done_t dequeue_done; + }; + }; + + /* Driver specific service data */ + uint8_t drv_service_data[]; +}; + +/** + * Configure one DP service context data. Calling this function for the first + * time the user should unset the *is_update* parameter and the driver will + * fill necessary operation data into ctx buffer. Only when + * rte_cryptodev_dp_submit_done() is called the data stored in the ctx buffer + * will not be effective. + * + * @param dev_id The device identifier. + * @param qp_id The index of the queue pair from which to + * retrieve processed packets. The value must be + * in the range [0, nb_queue_pair - 1] previously + * supplied to rte_cryptodev_configure(). + * @param service_type Type of the service requested. + * @param sess_type session type. + * @param session_ctx Session context data. + * @param ctx The data-path service context data. + * @param is_update Set 1 if ctx is pre-initialized but need + * update to different service type or session, + * but the rest driver data remains the same. + * @return + * - On success return 0. + * - On failure return negative integer. + */ +__rte_experimental +int +rte_cryptodev_dp_configure_service(uint8_t dev_id, uint16_t qp_id, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + struct rte_crypto_dp_service_ctx *ctx, uint8_t is_update); + +/** + * Submit single job into device queue but the driver will not start + * processing until rte_cryptodev_dp_sym_submit_vec() is called. + * + * @param ctx The initialized data-path service context data. + * @param data The buffer vector. + * @param n_data_vecs Number of buffer vectors. + * @param ofs Start and stop offsets for auth and cipher + * operations. + * @param iv IV data. + * @param digest Digest data. + * @param aad AAD data. + * @param opaque The array of opaque data for dequeue. + * @return + * - On success return 0. + * - On failure return negative integer. 
+ */ +__rte_experimental +static __rte_always_inline int +rte_cryptodev_dp_submit_single_job(struct rte_crypto_dp_service_ctx *ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_data *iv, struct rte_crypto_data *digest, + struct rte_crypto_data *aad, void *opaque) +{ + return (*ctx->submit_single_job)(ctx->qp_data, ctx->drv_service_data, + data, n_data_vecs, ofs, iv, digest, aad, opaque); +} + +/** + * Submit a data vector into device queue but the driver will not start + * processing until rte_cryptodev_dp_sym_submit_vec() is called. + * + * @param ctx The initialized data-path service context data. + * @param vec The array of job vectors. + * @param ofs Start and stop offsets for auth and cipher operations. + * @param opaque The array of opaque data for dequeue. + * @return + * - The number of jobs successfully submitted. + */ +__rte_experimental +static __rte_always_inline uint32_t +rte_cryptodev_dp_sym_submit_vec(struct rte_crypto_dp_service_ctx *ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + return (*ctx->submit_vec)(ctx->qp_data, ctx->drv_service_data, vec, + ofs, opaque); +} + +/** + * Command the queue pair to start processing all submitted jobs from last + * rte_cryptodev_init_dp_service() call. + * + * @param ctx The initialized data-path service context data. + * @param n The total number of submitted jobs. + */ +__rte_experimental +static __rte_always_inline void +rte_cryptodev_dp_submit_done(struct rte_crypto_dp_service_ctx *ctx, uint32_t n) +{ + (*ctx->submit_done)(ctx->qp_data, ctx->drv_service_data, n); +} + +/** + * Dequeue symmetric crypto processing of user provided data. + * + * @param ctx The initialized data-path service + * context data. + * @param get_dequeue_count User provided callback function to + * obtain dequeue count. + * @param post_dequeue User provided callback function to + * post-process a dequeued operation. + * @param out_opaque Opaque pointer array to be retrieve from + * device queue. In case of + * *is_opaque_array* is set there should + * be enough room to store all opaque data. + * @param is_opaque_array Set 1 if every dequeued job will be + * written the opaque data into + * *out_opaque* array. + * @param n_success_jobs Driver written value to specific the + * total successful operations count. + * + * @return + * - Returns number of dequeued packets. + */ +__rte_experimental +static __rte_always_inline uint32_t +rte_cryptodev_dp_sym_dequeue(struct rte_crypto_dp_service_ctx *ctx, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, uint8_t is_opaque_array, + uint32_t *n_success_jobs) +{ + return (*ctx->dequeue_opaque)(ctx->qp_data, ctx->drv_service_data, + get_dequeue_count, post_dequeue, out_opaque, is_opaque_array, + n_success_jobs); +} + +/** + * Dequeue Single symmetric crypto processing of user provided data. + * + * @param ctx The initialized data-path service + * context data. + * @param out_opaque Opaque pointer to be retrieve from + * device queue. The driver shall support + * NULL input of this parameter. + * + * @return + * - 1 if the job is dequeued and the operation is a success. + * - 0 if the job is dequeued but the operation is failed. + * - -1 if no job is dequeued. 
+ */ +__rte_experimental +static __rte_always_inline int +rte_cryptodev_dp_sym_dequeue_single_job(struct rte_crypto_dp_service_ctx *ctx, + void **out_opaque) +{ + return (*ctx->dequeue_single)(ctx->qp_data, ctx->drv_service_data, + out_opaque); +} + +/** + * Inform the queue pair dequeue jobs finished. + * + * @param ctx The initialized data-path service context data. + * @param n The total number of jobs already dequeued. + */ +__rte_experimental +static __rte_always_inline void +rte_cryptodev_dp_dequeue_done(struct rte_crypto_dp_service_ctx *ctx, uint32_t n) +{ + (*ctx->dequeue_done)(ctx->qp_data, ctx->drv_service_data, n); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h index 81975d72b..bf0260c87 100644 --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h @@ -316,6 +316,41 @@ typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t) (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); +/** + * Typedef that the driver provided to get service context private date size. + * + * @param dev Crypto device pointer. + * + * @return + * - On success return the size of the device's service context private data. + * - On failure return negative integer. + */ +typedef int (*cryptodev_dp_get_service_ctx_size_t)( + struct rte_cryptodev *dev); + +/** + * Typedef that the driver provided to configure data-path service. + * + * @param dev Crypto device pointer. + * @param qp_id Crypto device queue pair index. + * @param ctx The data-path service context data. + * @param service_type Type of the service requested. + * @param sess_type session type. + * @param session_ctx Session context data. + * @param is_update Set 1 if ctx is pre-initialized but need + * update to different service type or session, + * but the rest driver data remains the same. + * buffer will always be one. + * @return + * - On success return 0. + * - On failure return negative integer. + */ +typedef int (*cryptodev_dp_configure_service_t)( + struct rte_cryptodev *dev, uint16_t qp_id, + struct rte_crypto_dp_service_ctx *ctx, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, uint8_t is_update); /** Crypto device operations function pointer table */ struct rte_cryptodev_ops { @@ -348,8 +383,16 @@ struct rte_cryptodev_ops { /**< Clear a Crypto sessions private data. */ cryptodev_asym_free_session_t asym_session_clear; /**< Clear a Crypto sessions private data. */ - cryptodev_sym_cpu_crypto_process_t sym_cpu_process; - /**< process input data synchronously (cpu-crypto). */ + union { + cryptodev_sym_cpu_crypto_process_t sym_cpu_process; + /**< process input data synchronously (cpu-crypto). */ + struct { + cryptodev_dp_get_service_ctx_size_t get_drv_ctx_size; + /**< Get data path service context data size. */ + cryptodev_dp_configure_service_t configure_service; + /**< Initialize crypto service ctx data. 
*/ + }; + }; }; diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map index 02f6dcf72..d384382d3 100644 --- a/lib/librte_cryptodev/rte_cryptodev_version.map +++ b/lib/librte_cryptodev/rte_cryptodev_version.map @@ -105,4 +105,14 @@ EXPERIMENTAL { # added in 20.08 rte_cryptodev_get_qp_status; + + # added in 20.11 + rte_cryptodev_dp_configure_service; + rte_cryptodev_get_dp_service_ctx_data_size; + rte_cryptodev_dp_submit_single_job; + rte_cryptodev_dp_sym_submit_vec; + rte_cryptodev_dp_submit_done; + rte_cryptodev_dp_sym_dequeue; + rte_cryptodev_dp_sym_dequeue_single_job; + rte_cryptodev_dp_dequeue_done; };

From patchwork Fri Aug 28 12:58:13 2020
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 76141
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, roy.fan.zhang@intel.com
Date: Fri, 28 Aug 2020 13:58:13 +0100
Message-Id: <20200828125815.21614-3-roy.fan.zhang@intel.com>
In-Reply-To: <20200828125815.21614-1-roy.fan.zhang@intel.com>
References: <20200818162833.20219-1-roy.fan.zhang@intel.com> <20200828125815.21614-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v7 2/4] crypto/qat: add crypto data-path service API support

This patch updates QAT PMD to add crypto service API support.
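For context, a rough sketch of the burst flow that the QAT submit/dequeue callbacks below plug into. EXAMPLE_BURST, the callback bodies and the caller-prepared vectors are illustrative assumptions, not code from this patch:

/* Sketch: burst submit/dequeue over a pre-configured service context. */
#include <rte_common.h>
#include <rte_cryptodev.h>

#define EXAMPLE_BURST 32	/* illustrative burst size */

static uint32_t
example_get_dequeue_count(void *first_cookie __rte_unused)
{
	/* Fixed count; could instead be parsed from the first job's cookie. */
	return EXAMPLE_BURST;
}

static void
example_post_dequeue(void *cookie __rte_unused, uint32_t index __rte_unused,
		uint8_t is_op_success __rte_unused)
{
	/* Per-job status handling, e.g. marking the cookie's operation as done. */
}

static uint32_t
dp_service_burst(struct rte_crypto_dp_service_ctx *ctx,
		struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
		void *cookies[EXAMPLE_BURST])
{
	void *out_cookies[EXAMPLE_BURST];
	uint32_t n_success = 0;
	uint32_t n_enq, n_deq;

	/* vec->sgl[], vec->iv_vec[], vec->digest_vec[], vec->aad_vec[] and
	 * vec->num are assumed to be filled by the caller.
	 */
	n_enq = rte_cryptodev_dp_sym_submit_vec(ctx, vec, ofs, cookies);
	rte_cryptodev_dp_submit_done(ctx, n_enq);	/* kick the tx queue */

	n_deq = rte_cryptodev_dp_sym_dequeue(ctx, example_get_dequeue_count,
			example_post_dequeue, out_cookies, 1, &n_success);
	rte_cryptodev_dp_dequeue_done(ctx, n_deq);	/* advance the rx head */

	return n_success;
}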
Signed-off-by: Fan Zhang --- drivers/common/qat/Makefile | 1 + drivers/crypto/qat/meson.build | 1 + drivers/crypto/qat/qat_sym.h | 13 + drivers/crypto/qat/qat_sym_hw_dp.c | 926 +++++++++++++++++++++++++++++ drivers/crypto/qat/qat_sym_pmd.c | 9 +- 5 files changed, 948 insertions(+), 2 deletions(-) create mode 100644 drivers/crypto/qat/qat_sym_hw_dp.c diff --git a/drivers/common/qat/Makefile b/drivers/common/qat/Makefile index 85d420709..1b71bbbab 100644 --- a/drivers/common/qat/Makefile +++ b/drivers/common/qat/Makefile @@ -42,6 +42,7 @@ endif SRCS-y += qat_sym.c SRCS-y += qat_sym_session.c SRCS-y += qat_sym_pmd.c + SRCS-y += qat_sym_hw_dp.c build_qat = yes endif endif diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build index a225f374a..bc90ec44c 100644 --- a/drivers/crypto/qat/meson.build +++ b/drivers/crypto/qat/meson.build @@ -15,6 +15,7 @@ if dep.found() qat_sources += files('qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c', + 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c') qat_ext_deps += dep diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index 1a9748849..2d6316130 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -264,6 +264,18 @@ qat_sym_process_response(void **op, uint8_t *resp) } *op = (void *)rx_op; } + +int +qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id, + struct rte_crypto_dp_service_ctx *service_ctx, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + uint8_t is_update); + +int +qat_sym_get_service_ctx_size(struct rte_cryptodev *dev); + #else static inline void @@ -276,5 +288,6 @@ static inline void qat_sym_process_response(void **op __rte_unused, uint8_t *resp __rte_unused) { } + #endif #endif /* _QAT_SYM_H_ */ diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c new file mode 100644 index 000000000..0adc55359 --- /dev/null +++ b/drivers/crypto/qat/qat_sym_hw_dp.c @@ -0,0 +1,926 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +#include + +#include "adf_transport_access_macros.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_la.h" + +#include "qat_sym.h" +#include "qat_sym_pmd.h" +#include "qat_sym_session.h" +#include "qat_qp.h" + +struct qat_sym_dp_service_ctx { + struct qat_sym_session *session; + uint32_t tail; + uint32_t head; +}; + +static __rte_always_inline int32_t +qat_sym_dp_get_data(struct qat_qp *qp, struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_vec *data, uint16_t n_data_vecs) +{ + struct qat_queue *tx_queue; + struct qat_sym_op_cookie *cookie; + struct qat_sgl *list; + uint32_t i; + uint32_t total_len; + + if (likely(n_data_vecs == 1)) { + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + data[0].iova; + req->comn_mid.src_length = req->comn_mid.dst_length = + data[0].len; + return data[0].len; + } + + if (n_data_vecs == 0 || n_data_vecs > QAT_SYM_SGL_MAX_NUMBER) + return -1; + + total_len = 0; + tx_queue = &qp->tx_q; + + ICP_QAT_FW_COMN_PTR_TYPE_SET(req->comn_hdr.comn_req_flags, + QAT_COMN_PTR_TYPE_SGL); + cookie = qp->op_cookies[tx_queue->tail >> tx_queue->trailz]; + list = (struct qat_sgl *)&cookie->qat_sgl_src; + + for (i = 0; i < n_data_vecs; i++) { + list->buffers[i].len = data[i].len; + list->buffers[i].resrvd = 0; + list->buffers[i].addr = data[i].iova; + if (total_len + data[i].len > UINT32_MAX) { + QAT_DP_LOG(ERR, "Message too long"); + return -1; + } + total_len += 
data[i].len; + } + + list->num_bufs = i; + req->comn_mid.src_data_addr = req->comn_mid.dest_data_addr = + cookie->qat_sgl_src_phys_addr; + req->comn_mid.src_length = req->comn_mid.dst_length = 0; + return total_len; +} + +static __rte_always_inline void +set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param, + struct rte_crypto_data *iv, uint32_t iv_len, + struct icp_qat_fw_la_bulk_req *qat_req) +{ + /* copy IV into request if it fits */ + if (iv_len <= sizeof(cipher_param->u.cipher_IV_array)) + rte_memcpy(cipher_param->u.cipher_IV_array, iv->base, iv_len); + else { + ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET( + qat_req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_CIPH_IV_64BIT_PTR); + cipher_param->u.s.cipher_IV_ptr = iv->iova; + } +} + +#define QAT_SYM_DP_IS_RESP_SUCCESS(resp) \ + (ICP_QAT_FW_COMN_STATUS_FLAG_OK == \ + ICP_QAT_FW_COMN_RESP_CRYPTO_STAT_GET(resp->comn_hdr.comn_status)) + +static __rte_always_inline void +qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n) +{ + uint32_t i; + + for (i = 0; i < n; i++) + sta[i] = status; +} + +static __rte_always_inline void +submit_one_aead_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec, + struct rte_crypto_data *digest_vec, struct rte_crypto_data *aad_vec, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param = + (void *)&req->serv_specif_rqpars; + struct icp_qat_fw_la_auth_req_params *auth_param = + (void *)((uint8_t *)&req->serv_specif_rqpars + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + uint8_t *aad_data; + uint8_t aad_ccm_real_len; + uint8_t aad_len_field_sz; + uint32_t msg_len_be; + rte_iova_t aad_iova = 0; + uint8_t q; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy_generic(cipher_param->u.cipher_IV_array, + iv_vec->base, ctx->cipher_iv.length); + aad_iova = aad_vec->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: + aad_data = aad_vec->base; + aad_iova = aad_vec->iova; + aad_ccm_real_len = 0; + aad_len_field_sz = 0; + msg_len_be = rte_bswap32((uint32_t)data_len - + ofs.ofs.cipher.head); + + if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { + aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; + aad_ccm_real_len = ctx->aad_len - + ICP_QAT_HW_CCM_AAD_B0_LEN - + ICP_QAT_HW_CCM_AAD_LEN_INFO; + } else { + aad_data = iv_vec->base; + aad_iova = iv_vec->iova; + } + + q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length; + aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( + aad_len_field_sz, ctx->digest_length, q); + if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET + (q - + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), + (uint8_t *)&msg_len_be, + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); + } else { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)&msg_len_be + + (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE + - q), q); + } + + if (aad_len_field_sz > 0) { + *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] = + rte_bswap16(aad_ccm_real_len); + + if ((aad_ccm_real_len + aad_len_field_sz) + % ICP_QAT_HW_CCM_AAD_B0_LEN) { + uint8_t pad_len = 0; + uint8_t pad_idx = 0; + + pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - + ((aad_ccm_real_len + + aad_len_field_sz) % + ICP_QAT_HW_CCM_AAD_B0_LEN); + pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + + aad_ccm_real_len 
+ + aad_len_field_sz; + memset(&aad_data[pad_idx], 0, pad_len); + } + + rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array) + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv_vec->base + + ICP_QAT_HW_CCM_NONCE_OFFSET, + ctx->cipher_iv.length); + *(uint8_t *)&cipher_param->u.cipher_IV_array[0] = + q - ICP_QAT_HW_CCM_NONCE_OFFSET; + + rte_memcpy((uint8_t *)aad_vec->base + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv_vec->base + + ICP_QAT_HW_CCM_NONCE_OFFSET, + ctx->cipher_iv.length); + } + break; + default: + break; + } + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + auth_param->auth_off = ofs.ofs.cipher.head; + auth_param->auth_len = cipher_param->cipher_length; + auth_param->auth_res_addr = digest_vec->iova; + auth_param->u1.aad_adr = aad_iova; + + if (ctx->is_single_pass) { + cipher_param->spc_aad_addr = aad_iova; + cipher_param->spc_auth_res_addr = digest_vec->iova; + } +} + +static __rte_always_inline int +qat_sym_dp_submit_single_aead(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec, + struct rte_crypto_data *digest_vec, struct rte_crypto_data *aad_vec, + void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = service_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_aead_job(ctx, req, iv_vec, digest_vec, aad_vec, ofs, + (uint32_t)data_len); + + service_ctx->tail = tail; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_submit_aead_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(qp->enqueued - qp->dequeued + vec->num >= + qp->max_inflights)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = service_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i]; + submit_one_aead_job(ctx, req, vec->iv_vec + i, + vec->digest_vec + i, vec->aad_vec + i, ofs, + (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + service_ctx->tail = tail; + return 
i; +} + +static __rte_always_inline void +submit_one_cipher_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec, + union rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + + cipher_param = (void *)&req->serv_specif_rqpars; + + /* cipher IV */ + set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req); + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; +} + +static __rte_always_inline int +qat_sym_dp_submit_single_cipher(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec, + __rte_unused struct rte_crypto_data *digest_vec, + __rte_unused struct rte_crypto_data *aad_vec, + void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = service_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_cipher_job(ctx, req, iv_vec, ofs, (uint32_t)data_len); + + service_ctx->tail = tail; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_submit_cipher_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(qp->enqueued - qp->dequeued + vec->num >= + qp->max_inflights)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = service_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i]; + submit_one_cipher_job(ctx, req, vec->iv_vec + i, ofs, + (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + service_ctx->tail = tail; + return i; +} + +static __rte_always_inline void +submit_one_auth_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_data *iv_vec, + struct rte_crypto_data *digest_vec, union rte_crypto_sym_ofs ofs, + uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t 
*)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + auth_param->auth_off = ofs.ofs.auth.head; + auth_param->auth_len = data_len - ofs.ofs.auth.head - + ofs.ofs.auth.tail; + auth_param->auth_res_addr = digest_vec->iova; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = iv_vec->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy_generic(cipher_param->u.cipher_IV_array, + iv_vec->base, ctx->cipher_iv.length); + break; + default: + break; + } +} + +static __rte_always_inline int +qat_sym_dp_submit_single_auth(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec, + struct rte_crypto_data *digest_vec, + __rte_unused struct rte_crypto_data *aad_vec, + void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = service_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_auth_job(ctx, req, iv_vec, digest_vec, ofs, + (uint32_t)data_len); + + service_ctx->tail = tail; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_submit_auth_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(qp->enqueued - qp->dequeued + vec->num >= + qp->max_inflights)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = service_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i]; + submit_one_auth_job(ctx, req, vec->iv_vec + i, + vec->digest_vec + i, ofs, (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + service_ctx->tail = tail; + return i; +} + +static __rte_always_inline void +submit_one_chain_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, struct rte_crypto_vec *data, + uint16_t n_data_vecs, struct rte_crypto_data *iv_vec, + struct rte_crypto_data 
*digest_vec, union rte_crypto_sym_ofs ofs, + uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + rte_iova_t auth_iova_end; + int32_t cipher_len, auth_len; + + cipher_param = (void *)&req->serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + cipher_len = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail; + + assert(cipher_len > 0 && auth_len > 0); + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = cipher_len; + set_cipher_iv(cipher_param, iv_vec, ctx->cipher_iv.length, req); + + auth_param->auth_off = ofs.ofs.cipher.head; + auth_param->auth_len = auth_len; + auth_param->auth_res_addr = digest_vec->iova; + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: + case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9: + case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3: + auth_param->u1.aad_adr = iv_vec->iova; + + if (unlikely(n_data_vecs > 1)) { + int auth_end_get = 0, i = n_data_vecs - 1; + struct rte_crypto_vec *cvec = &data[i]; + uint32_t len; + + len = data_len - ofs.ofs.auth.tail; + + while (i >= 0 && len > 0) { + if (cvec->len >= len) { + auth_iova_end = cvec->iova + + (cvec->len - len); + len = 0; + auth_end_get = 1; + break; + } + len -= cvec->len; + i--; + cvec--; + } + + assert(auth_end_get != 0); + } else + auth_iova_end = digest_vec->iova + + ctx->digest_length; + + /* Then check if digest-encrypted conditions are met */ + if ((auth_param->auth_off + auth_param->auth_len < + cipher_param->cipher_offset + + cipher_param->cipher_length) && + (digest_vec->iova == auth_iova_end)) { + /* Handle partial digest encryption */ + if (cipher_param->cipher_offset + + cipher_param->cipher_length < + auth_param->auth_off + + auth_param->auth_len + + ctx->digest_length) + req->comn_mid.dst_length = + req->comn_mid.src_length = + auth_param->auth_off + + auth_param->auth_len + + ctx->digest_length; + struct icp_qat_fw_comn_req_hdr *header = + &req->comn_hdr; + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + header->serv_specif_flags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + } + break; + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + break; + default: + break; + } +} + +static __rte_always_inline int +qat_sym_dp_submit_single_chain(void *qp_data, uint8_t *service_data, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, struct rte_crypto_data *iv_vec, + struct rte_crypto_data *digest_vec, + __rte_unused struct rte_crypto_data *aad_vec, + void *opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = service_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_get_data(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque; + + submit_one_chain_job(ctx, req, data, n_data_vecs, iv_vec, digest_vec, + ofs, (uint32_t)data_len); + + service_ctx->tail = tail; + + return 0; +} 
+ +static __rte_always_inline uint32_t +qat_sym_dp_submit_chain_jobs(void *qp_data, uint8_t *service_data, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void **opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = service_ctx->session; + uint32_t i; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + if (unlikely(qp->enqueued - qp->dequeued + vec->num >= + qp->max_inflights)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + return 0; + } + + tail = service_ctx->tail; + + for (i = 0; i < vec->num; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_get_data(qp, req, vec->sgl[i].vec, + vec->sgl[i].num) - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)opaque[i]; + submit_one_chain_job(ctx, req, vec->sgl[i].vec, vec->sgl[i].num, + vec->iv_vec + i, vec->digest_vec + i, ofs, + (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < vec->num)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, vec->num - i); + + service_ctx->tail = tail; + return i; +} + +static __rte_always_inline uint32_t +qat_sym_dp_dequeue(void *qp_data, uint8_t *service_data, + rte_cryptodev_get_dequeue_count_t get_dequeue_count, + rte_cryptodev_post_dequeue_t post_dequeue, + void **out_opaque, uint8_t is_opaque_array, + uint32_t *n_success_jobs) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *rx_queue = &qp->rx_q; + struct icp_qat_fw_comn_resp *resp; + void *resp_opaque; + uint32_t i, n, inflight; + uint32_t head; + uint8_t status; + + *n_success_jobs = 0; + head = service_ctx->head; + + inflight = qp->enqueued - qp->dequeued; + if (unlikely(inflight == 0)) + return 0; + + resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + + head); + /* no operation ready */ + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + return 0; + + resp_opaque = (void *)(uintptr_t)resp->opaque_data; + /* get the dequeue count */ + n = get_dequeue_count(resp_opaque); + if (unlikely(n == 0)) + return 0; + + out_opaque[0] = resp_opaque; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + post_dequeue(resp_opaque, 0, status); + *n_success_jobs += status; + + head = (head + rx_queue->msg_size) & rx_queue->modulo_mask; + + /* we already finished dequeue when n == 1 */ + if (unlikely(n == 1)) { + i = 1; + goto end_deq; + } + + if (is_opaque_array) { + for (i = 1; i < n; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + if (unlikely(*(uint32_t *)resp == + ADF_RING_EMPTY_SIG)) + goto end_deq; + out_opaque[i] = (void *)(uintptr_t) + resp->opaque_data; + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + *n_success_jobs += status; + post_dequeue(out_opaque[i], i, status); + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; + } + + goto end_deq; + } + + /* opaque is not array */ + for (i = 1; i < n; i++) { + resp = (struct icp_qat_fw_comn_resp *)( + (uint8_t *)rx_queue->base_addr + head); + status = QAT_SYM_DP_IS_RESP_SUCCESS(resp); + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + goto end_deq; + head = (head + rx_queue->msg_size) & + rx_queue->modulo_mask; 
+ post_dequeue(resp_opaque, i, status); + *n_success_jobs += status; + } + +end_deq: + service_ctx->head = head; + return i; +} + +static __rte_always_inline int +qat_sym_dp_dequeue_single_job(void *qp_data, uint8_t *service_data, + void **out_opaque) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + struct qat_queue *rx_queue = &qp->rx_q; + + register struct icp_qat_fw_comn_resp *resp; + + resp = (struct icp_qat_fw_comn_resp *)((uint8_t *)rx_queue->base_addr + + service_ctx->head); + + if (unlikely(*(uint32_t *)resp == ADF_RING_EMPTY_SIG)) + return -1; + + *out_opaque = (void *)(uintptr_t)resp->opaque_data; + + service_ctx->head = (service_ctx->head + rx_queue->msg_size) & + rx_queue->modulo_mask; + + return QAT_SYM_DP_IS_RESP_SUCCESS(resp); +} + +static __rte_always_inline void +qat_sym_dp_kick_tail(void *qp_data, uint8_t *service_data, uint32_t n) +{ + struct qat_qp *qp = qp_data; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + + qp->enqueued += n; + qp->stats.enqueued_count += n; + + assert(service_ctx->tail == ((tx_queue->tail + tx_queue->msg_size * n) & + tx_queue->modulo_mask)); + + tx_queue->tail = service_ctx->tail; + + WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, + tx_queue->hw_bundle_number, + tx_queue->hw_queue_number, tx_queue->tail); + tx_queue->csr_tail = tx_queue->tail; +} + +static __rte_always_inline void +qat_sym_dp_update_head(void *qp_data, uint8_t *service_data, uint32_t n) +{ + struct qat_qp *qp = qp_data; + struct qat_queue *rx_queue = &qp->rx_q; + struct qat_sym_dp_service_ctx *service_ctx = (void *)service_data; + + assert(service_ctx->head == ((rx_queue->head + rx_queue->msg_size * n) & + rx_queue->modulo_mask)); + + rx_queue->head = service_ctx->head; + rx_queue->nb_processed_responses += n; + qp->dequeued += n; + qp->stats.dequeued_count += n; + if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) { + uint32_t old_head, new_head; + uint32_t max_head; + + old_head = rx_queue->csr_head; + new_head = rx_queue->head; + max_head = qp->nb_descriptors * rx_queue->msg_size; + + /* write out free descriptors */ + void *cur_desc = (uint8_t *)rx_queue->base_addr + old_head; + + if (new_head < old_head) { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, + max_head - old_head); + memset(rx_queue->base_addr, ADF_RING_EMPTY_SIG_BYTE, + new_head); + } else { + memset(cur_desc, ADF_RING_EMPTY_SIG_BYTE, new_head - + old_head); + } + rx_queue->nb_processed_responses = 0; + rx_queue->csr_head = new_head; + + /* write current head to CSR */ + WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, + rx_queue->hw_bundle_number, rx_queue->hw_queue_number, + new_head); + } +} + +int +qat_sym_dp_configure_service_ctx(struct rte_cryptodev *dev, uint16_t qp_id, + struct rte_crypto_dp_service_ctx *service_ctx, + enum rte_crypto_dp_service service_type, + enum rte_crypto_op_sess_type sess_type, + union rte_cryptodev_session_ctx session_ctx, + uint8_t is_update) +{ + struct qat_qp *qp; + struct qat_sym_session *ctx; + struct qat_sym_dp_service_ctx *dp_ctx; + + if (service_ctx == NULL || session_ctx.crypto_sess == NULL || + sess_type != RTE_CRYPTO_OP_WITH_SESSION) + return -EINVAL; + + qp = dev->data->queue_pairs[qp_id]; + ctx = (struct qat_sym_session *)get_sym_session_private_data( + session_ctx.crypto_sess, qat_sym_driver_id); + dp_ctx = (struct qat_sym_dp_service_ctx *) + service_ctx->drv_service_data; + + if (!is_update) { + memset(service_ctx, 0, sizeof(*service_ctx) + + 
sizeof(struct qat_sym_dp_service_ctx)); + service_ctx->qp_data = dev->data->queue_pairs[qp_id]; + dp_ctx->tail = qp->tx_q.tail; + dp_ctx->head = qp->rx_q.head; + } + + dp_ctx->session = ctx; + + service_ctx->submit_done = qat_sym_dp_kick_tail; + service_ctx->dequeue_opaque = qat_sym_dp_dequeue; + service_ctx->dequeue_single = qat_sym_dp_dequeue_single_job; + service_ctx->dequeue_done = qat_sym_dp_update_head; + + if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || + ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) { + /* AES-GCM or AES-CCM */ + if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || + (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES128 + && ctx->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE + && ctx->qat_hash_alg == + ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC)) { + if (service_type != RTE_CRYPTO_DP_SYM_AEAD) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_aead_jobs; + service_ctx->submit_single_job = + qat_sym_dp_submit_single_aead; + } else { + if (service_type != RTE_CRYPTO_DP_SYM_CHAIN) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_chain_jobs; + service_ctx->submit_single_job = + qat_sym_dp_submit_single_chain; + } + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) { + if (service_type != RTE_CRYPTO_DP_SYM_AUTH_ONLY) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_auth_jobs; + service_ctx->submit_single_job = qat_sym_dp_submit_single_auth; + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { + if (service_type != RTE_CRYPTO_DP_SYM_CIPHER_ONLY) + return -1; + service_ctx->submit_vec = qat_sym_dp_submit_cipher_jobs; + service_ctx->submit_single_job = + qat_sym_dp_submit_single_cipher; + } + + return 0; +} + +int +qat_sym_get_service_ctx_size(__rte_unused struct rte_cryptodev *dev) +{ + return sizeof(struct qat_sym_dp_service_ctx); +} diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index 314742f53..bef08c3bc 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -258,7 +258,11 @@ static struct rte_cryptodev_ops crypto_qat_ops = { /* Crypto related operations */ .sym_session_get_size = qat_sym_session_get_private_size, .sym_session_configure = qat_sym_session_configure, - .sym_session_clear = qat_sym_session_clear + .sym_session_clear = qat_sym_session_clear, + + /* Data plane service related operations */ + .get_drv_ctx_size = qat_sym_get_service_ctx_size, + .configure_service = qat_sym_dp_configure_service_ctx, }; #ifdef RTE_LIBRTE_SECURITY @@ -376,7 +380,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | - RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED; + RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | + RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; From patchwork Fri Aug 28 12:58:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 76140 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id CF462A04B1; Fri, 28 Aug 2020 14:58:43 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E2BD01C1CA; Fri, 28 Aug 2020 14:58:32 +0200 (CEST) Received: from mga04.intel.com 
(mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id BBC7F1C1C4 for ; Fri, 28 Aug 2020 14:58:28 +0200 (CEST) IronPort-SDR: gWt2rmtTqp1RtzV3AAXkFQqSezCOZJbVCy8ad4gneNQRx7etVVnTnom90d0jPYPDULAGpnzW26 ooodEmTs4www== X-IronPort-AV: E=McAfee;i="6000,8403,9726"; a="154072187" X-IronPort-AV: E=Sophos;i="5.76,363,1592895600"; d="scan'208";a="154072187" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Aug 2020 05:58:26 -0700 IronPort-SDR: 8Z08YEWQcF3L/yMtXZ3MmogIKEBgn/AAfae7j1ZEazELJJ5shbHS4PQPoE8pzJ1MT/h3WdkSzY vBoYx89coUsA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.76,363,1592895600"; d="scan'208";a="324004442" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga004.fm.intel.com with ESMTP; 28 Aug 2020 05:58:24 -0700 From: Fan Zhang To: dev@dpdk.org Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, roy.fan.zhang@intel.com Date: Fri, 28 Aug 2020 13:58:14 +0100 Message-Id: <20200828125815.21614-4-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200828125815.21614-1-roy.fan.zhang@intel.com> References: <20200818162833.20219-1-roy.fan.zhang@intel.com> <20200828125815.21614-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v7 3/4] test/crypto: add unit-test for cryptodev direct APIs X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds the QAT test to use cryptodev symmetric crypto direct APIs. 
Signed-off-by: Fan Zhang --- app/test/test_cryptodev.c | 354 ++++++++++++++++++++++++-- app/test/test_cryptodev.h | 6 + app/test/test_cryptodev_blockcipher.c | 50 ++-- 3 files changed, 373 insertions(+), 37 deletions(-) diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 70bf6fe2c..d6909984d 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -49,6 +49,8 @@ #define VDEV_ARGS_SIZE 100 #define MAX_NB_SESSIONS 4 +#define MAX_DRV_SERVICE_CTX_SIZE 256 + #define IN_PLACE 0 #define OUT_OF_PLACE 1 @@ -57,6 +59,8 @@ static int gbl_driver_id; static enum rte_security_session_action_type gbl_action_type = RTE_SECURITY_ACTION_TYPE_NONE; +int hw_dp_test; + struct crypto_testsuite_params { struct rte_mempool *mbuf_pool; struct rte_mempool *large_mbuf_pool; @@ -147,6 +151,153 @@ ceil_byte_length(uint32_t num_bits) return (num_bits >> 3); } +void +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op, + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits) +{ + int32_t n; + struct rte_crypto_sym_op *sop; + struct rte_crypto_op *ret_op = NULL; + struct rte_crypto_vec data_vec[UINT8_MAX]; + struct rte_crypto_data iv_vec, aad_vec, digest_vec; + union rte_crypto_sym_ofs ofs; + int32_t status; + uint32_t min_ofs, max_len; + union rte_cryptodev_session_ctx sess; + enum rte_crypto_dp_service service_type; + uint32_t count = 0; + uint8_t service_data[MAX_DRV_SERVICE_CTX_SIZE] = {0}; + struct rte_crypto_dp_service_ctx *ctx = (void *)service_data; + int ctx_service_size; + + sop = op->sym; + + sess.crypto_sess = sop->session; + + if (is_cipher && is_auth) { + service_type = RTE_CRYPTO_DP_SYM_CHAIN; + min_ofs = RTE_MIN(sop->cipher.data.offset, + sop->auth.data.offset); + max_len = RTE_MAX(sop->cipher.data.length, + sop->auth.data.length); + } else if (is_cipher) { + service_type = RTE_CRYPTO_DP_SYM_CIPHER_ONLY; + min_ofs = sop->cipher.data.offset; + max_len = sop->cipher.data.length; + } else if (is_auth) { + service_type = RTE_CRYPTO_DP_SYM_AUTH_ONLY; + min_ofs = sop->auth.data.offset; + max_len = sop->auth.data.length; + } else { /* aead */ + service_type = RTE_CRYPTO_DP_SYM_AEAD; + min_ofs = sop->aead.data.offset; + max_len = sop->aead.data.length; + } + + if (len_in_bits) { + max_len = max_len >> 3; + min_ofs = min_ofs >> 3; + } + + ctx_service_size = rte_cryptodev_get_dp_service_ctx_data_size(dev_id); + assert(ctx_service_size <= MAX_DRV_SERVICE_CTX_SIZE && + ctx_service_size > 0); + + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type, + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 0) < 0) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + /* test update service */ + if (rte_cryptodev_dp_configure_service(dev_id, qp_id, service_type, + RTE_CRYPTO_OP_WITH_SESSION, sess, ctx, 1) < 0) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + n = rte_crypto_mbuf_to_vec(sop->m_src, 0, min_ofs + max_len, + data_vec, RTE_DIM(data_vec)); + if (n < 0 || n != sop->m_src->nb_segs) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + ofs.raw = 0; + + iv_vec.base = rte_crypto_op_ctod_offset(op, void *, IV_OFFSET); + iv_vec.iova = rte_crypto_op_ctophys_offset(op, IV_OFFSET); + + switch (service_type) { + case RTE_CRYPTO_DP_SYM_AEAD: + ofs.ofs.cipher.head = sop->cipher.data.offset; + aad_vec.base = (void *)sop->aead.aad.data; + aad_vec.iova = sop->aead.aad.phys_addr; + digest_vec.base = (void *)sop->aead.digest.data; + digest_vec.iova = sop->aead.digest.phys_addr; + if (len_in_bits) { + ofs.ofs.cipher.head >>= 3; + 
ofs.ofs.cipher.tail >>= 3; + } + break; + case RTE_CRYPTO_DP_SYM_CIPHER_ONLY: + ofs.ofs.cipher.head = sop->cipher.data.offset; + if (len_in_bits) { + ofs.ofs.cipher.head >>= 3; + ofs.ofs.cipher.tail >>= 3; + } + break; + case RTE_CRYPTO_DP_SYM_AUTH_ONLY: + ofs.ofs.auth.head = sop->auth.data.offset; + digest_vec.base = (void *)sop->auth.digest.data; + digest_vec.iova = sop->auth.digest.phys_addr; + break; + case RTE_CRYPTO_DP_SYM_CHAIN: + ofs.ofs.cipher.head = + sop->cipher.data.offset - sop->auth.data.offset; + ofs.ofs.cipher.tail = + (sop->auth.data.offset + sop->auth.data.length) - + (sop->cipher.data.offset + sop->cipher.data.length); + if (len_in_bits) { + ofs.ofs.cipher.head >>= 3; + ofs.ofs.cipher.tail >>= 3; + } + digest_vec.base = (void *)sop->auth.digest.data; + digest_vec.iova = sop->auth.digest.phys_addr; + break; + default: + break; + } + + status = rte_cryptodev_dp_submit_single_job(ctx, data_vec, n, ofs, + &iv_vec, &digest_vec, &aad_vec, (void *)op); + if (status < 0) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + rte_cryptodev_dp_submit_done(ctx, 1); + + status = -1; + while (count++ < 1024 && status == -1) { + status = rte_cryptodev_dp_sym_dequeue_single_job(ctx, + (void **)&ret_op); + if (status == -1) + rte_pause(); + } + + if (status != -1) + rte_cryptodev_dp_dequeue_done(ctx, 1); + + if (count == 1025 || status != 1 || ret_op != op) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return; + } + + op->status = status == 1 ? RTE_CRYPTO_OP_STATUS_SUCCESS : + RTE_CRYPTO_OP_STATUS_ERROR; +} + static void process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op) { @@ -2470,7 +2621,11 @@ test_snow3g_authentication(const struct snow3g_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); ut_params->obuf = ut_params->op->sym->m_src; TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -2549,7 +2704,11 @@ test_snow3g_authentication_verify(const struct snow3g_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -2619,6 +2778,9 @@ test_kasumi_authentication(const struct kasumi_hash_test_data *tdata) if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_crypt_auth_op(ts_params->valid_devs[0], ut_params->op); + else if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); else ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); @@ -2690,7 +2852,11 @@ test_kasumi_authentication_verify(const struct kasumi_hash_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -2897,8 +3063,12 @@ 
test_kasumi_encryption(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], - ut_params->op); + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], + ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_dst; @@ -2983,7 +3153,11 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3306,7 +3480,11 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3381,7 +3559,11 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -3756,7 +3938,11 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_dst; @@ -3924,7 +4110,11 @@ test_zuc_cipher_auth(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -4019,7 +4209,11 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); ut_params->obuf = ut_params->op->sym->m_src; @@ -4155,7 +4349,11 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = 
process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4344,7 +4542,11 @@ test_snow3g_auth_cipher_sgl(const struct snow3g_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4526,7 +4728,11 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4716,7 +4922,11 @@ test_kasumi_auth_cipher_sgl(const struct kasumi_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4857,7 +5067,11 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -4944,7 +5158,11 @@ test_zuc_encryption(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5031,7 +5249,11 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 0, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5119,7 +5341,11 @@ test_zuc_authentication(const struct wireless_test_data *tdata) if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); ut_params->obuf = ut_params->op->sym->m_src; TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5251,7 +5477,11 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = 
process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -5437,7 +5667,11 @@ test_zuc_auth_cipher_sgl(const struct wireless_test_data *tdata, if (retval < 0) return retval; - ut_params->op = process_crypto_request(ts_params->valid_devs[0], + if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 1, 1, 1); + else + ut_params->op = process_crypto_request(ts_params->valid_devs[0], ut_params->op); TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf"); @@ -7043,6 +7277,9 @@ test_authenticated_encryption(const struct aead_test_data *tdata) /* Process crypto operation */ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (hw_dp_test) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -8540,6 +8777,9 @@ test_authenticated_decryption(const struct aead_test_data *tdata) /* Process crypto operation */ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (hw_dp_test == 1) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -11480,6 +11720,9 @@ test_authenticated_encryption_SGL(const struct aead_test_data *tdata, if (oop == IN_PLACE && gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) process_cpu_aead_op(ts_params->valid_devs[0], ut_params->op); + else if (oop == IN_PLACE && hw_dp_test == 1) + process_sym_hw_api_op(ts_params->valid_devs[0], 0, + ut_params->op, 0, 0, 0); else TEST_ASSERT_NOT_NULL( process_crypto_request(ts_params->valid_devs[0], @@ -13041,6 +13284,75 @@ test_cryptodev_nitrox(void) return unit_test_suite_runner(&cryptodev_nitrox_testsuite); } +static struct unit_test_suite cryptodev_sym_direct_api_testsuite = { + .suite_name = "Crypto Sym direct API Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_encryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_decryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_auth_cipher_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_snow3g_auth_cipher_verify_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_hash_generate_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_hash_verify_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_encryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_kasumi_decryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, test_AES_cipheronly_all), + TEST_CASE_ST(ut_setup, ut_teardown, test_authonly_all), + TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_CCM_authenticated_encryption_test_case_128_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_CCM_authenticated_decryption_test_case_128_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_authenticated_encryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_authenticated_decryption_test_case_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_encryption_test_case_192_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_decryption_test_case_192_1), + 
TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_encryption_test_case_256_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_decryption_test_case_256_1), + TEST_CASE_ST(ut_setup, ut_teardown, + test_AES_GCM_auth_encrypt_SGL_in_place_1500B), + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static int +test_qat_sym_direct_api(void /*argv __rte_unused, int argc __rte_unused*/) +{ + int ret; + + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)); + + if (gbl_driver_id == -1) { + RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check that both " + "CONFIG_RTE_LIBRTE_PMD_QAT and CONFIG_RTE_LIBRTE_PMD_QAT_SYM " + "are enabled in config file to run this testsuite.\n"); + return TEST_SKIPPED; + } + + hw_dp_test = 1; + ret = unit_test_suite_runner(&cryptodev_sym_direct_api_testsuite); + hw_dp_test = 0; + + return ret; +} + +REGISTER_TEST_COMMAND(cryptodev_qat_sym_api_autotest, test_qat_sym_direct_api); REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat); REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb); REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest, diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h index 41542e055..c382c12c4 100644 --- a/app/test/test_cryptodev.h +++ b/app/test/test_cryptodev.h @@ -71,6 +71,8 @@ #define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr #define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym +extern int hw_dp_test; + /** * Write (spread) data from buffer to mbuf data * @@ -209,4 +211,8 @@ create_segmented_mbuf(struct rte_mempool *mbuf_pool, int pkt_len, return NULL; } +void +process_sym_hw_api_op(uint8_t dev_id, uint16_t qp_id, struct rte_crypto_op *op, + uint8_t is_cipher, uint8_t is_auth, uint8_t len_in_bits); + #endif /* TEST_CRYPTODEV_H_ */ diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c index 221262341..fc540e362 100644 --- a/app/test/test_cryptodev_blockcipher.c +++ b/app/test/test_cryptodev_blockcipher.c @@ -462,25 +462,43 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, } /* Process crypto operation */ - if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "line %u FAILED: %s", - __LINE__, "Error sending packet for encryption"); - status = TEST_FAILED; - goto error_exit; - } + if (hw_dp_test) { + uint8_t is_cipher = 0, is_auth = 0; + + if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) { + RTE_LOG(DEBUG, USER1, + "QAT direct API does not support OOP, Test Skipped.\n"); + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "SKIPPED"); + status = TEST_SUCCESS; + goto error_exit; + } + if (t->op_mask & BLOCKCIPHER_TEST_OP_CIPHER) + is_cipher = 1; + if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH) + is_auth = 1; + + process_sym_hw_api_op(dev_id, 0, op, is_cipher, is_auth, 0); + } else { + if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) != 1) { + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, + "line %u FAILED: %s", + __LINE__, "Error sending packet for encryption"); + status = TEST_FAILED; + goto error_exit; + } - op = NULL; + op = NULL; - while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0) - rte_pause(); + while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0) + rte_pause(); - if (!op) { - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "line %u FAILED: %s", - __LINE__, "Failed to process sym crypto op"); - status = TEST_FAILED; - goto error_exit; + if (!op) { + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, + "line %u FAILED: 
%s", + __LINE__, "Failed to process sym crypto op"); + status = TEST_FAILED; + goto error_exit; + } } debug_hexdump(stdout, "m_src(after):", From patchwork Fri Aug 28 12:58:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 76142 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 55584A04B1; Fri, 28 Aug 2020 14:59:04 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 028031C1D8; Fri, 28 Aug 2020 14:58:36 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id AC1161C1B9 for ; Fri, 28 Aug 2020 14:58:29 +0200 (CEST) IronPort-SDR: yHr7+Fn1u+HiT4RvCneSTUQyC3gHkpuCWjEBmJAJiMm3RXFBlCLTp24peyMmiRBwLMjoA+pXN+ b1FGnGOYXj6g== X-IronPort-AV: E=McAfee;i="6000,8403,9726"; a="154072189" X-IronPort-AV: E=Sophos;i="5.76,363,1592895600"; d="scan'208";a="154072189" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Aug 2020 05:58:27 -0700 IronPort-SDR: pkl2vPVoOzt/a2YO4ZrvLxK1G09P5VME2V46ftRLETwXQWhEKTJdKl8BF3flwUUGUNH7hFPZJ8 /6BJPm10XBzw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.76,363,1592895600"; d="scan'208";a="324004449" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga004.fm.intel.com with ESMTP; 28 Aug 2020 05:58:26 -0700 From: Fan Zhang To: dev@dpdk.org Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, arkadiuszx.kusztal@intel.com, adamx.dybkowski@intel.com, roy.fan.zhang@intel.com Date: Fri, 28 Aug 2020 13:58:15 +0100 Message-Id: <20200828125815.21614-5-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200828125815.21614-1-roy.fan.zhang@intel.com> References: <20200818162833.20219-1-roy.fan.zhang@intel.com> <20200828125815.21614-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v7 4/4] doc: add cryptodev service APIs guide X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch updates programmer's guide to demonstrate the usage and limitations of cryptodev symmetric crypto data-path service APIs. Signed-off-by: Fan Zhang --- doc/guides/prog_guide/cryptodev_lib.rst | 90 +++++++++++++++++++++++++ 1 file changed, 90 insertions(+) diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst index c14f750fa..77521c959 100644 --- a/doc/guides/prog_guide/cryptodev_lib.rst +++ b/doc/guides/prog_guide/cryptodev_lib.rst @@ -631,6 +631,96 @@ a call argument. Status different than zero must be treated as error. For more details, e.g. how to convert an mbuf to an SGL, please refer to an example usage in the IPsec library implementation. 
+Cryptodev Direct Data-plane Service API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The direct crypto data-path service is a set of APIs provided specifically
+for external libraries/applications that want to take advantage of the rich
+features offered by cryptodev, but do not want to depend on cryptodev
+operations, mempools, or mbufs in their data-path implementations.
+
+The direct crypto data-path service has the following advantages:
+
+- Raw data pointers and physical addresses are supported as input.
+- No dedicated data structure allocated from the heap, such as the cryptodev
+  operation, is required.
+- Enqueue in a burst or one operation at a time. The service allows enqueuing
+  a burst similar to ``rte_cryptodev_enqueue_burst``, or enqueuing one job at
+  a time while keeping the necessary context data locally for the next
+  single-job enqueue. The latter is especially helpful when the application's
+  crypto operations are already gathered into a burst: enqueuing one
+  operation at a time avoids an extra loop over the burst and reduces the
+  cache misses caused by iterating over the same data twice.
+- Customizable dequeue count. Instead of dequeuing as many operations as
+  possible, as ``rte_cryptodev_dequeue_burst`` does, the service lets the
+  user provide a callback function that decides how many operations to
+  dequeue. This is especially helpful when the expected dequeue count is
+  hidden inside the opaque data stored during enqueue; the callback can parse
+  that opaque data to retrieve the count.
+- Abandon an enqueue or dequeue at any time. One drawback of
+  ``rte_cryptodev_enqueue_burst`` and ``rte_cryptodev_dequeue_burst`` is that
+  once an operation is enqueued or dequeued there is no way to undo it. The
+  service makes abandoning possible by keeping a local copy of the queue
+  state in the service context data; the local copy is written back to the
+  driver-maintained queue data only when the enqueue or dequeue done function
+  is called.
+
+Cryptodev PMDs that support this feature advertise the
+``RTE_CRYPTODEV_FF_DATA_PLANE_SERVICE`` feature flag. To use the feature,
+call ``rte_cryptodev_get_dp_service_ctx_data_size`` to get the data-path
+service context data size, create a local buffer at least that large, and
+initialize it with ``rte_cryptodev_dp_configure_service``.
+
+The ``rte_cryptodev_dp_configure_service`` call initializes or updates the
+``struct rte_crypto_dp_service_ctx`` buffer, which contains the
+driver-specific queue pair data pointer, the driver's service context
+buffer, and a set of function pointers to enqueue and dequeue the different
+algorithms' operations. ``rte_cryptodev_dp_configure_service`` should be
+called:
+
+- Before enqueuing or dequeuing starts (with the ``is_update`` parameter set
+  to 0).
+- When a different cryptodev session, security session, or session-less
+  xform is used (with the ``is_update`` parameter set to 1).
+
+Two enqueue functions are provided:
+
+- ``rte_cryptodev_dp_sym_submit_vec``: submit a burst of operations stored
+  in a ``rte_crypto_sym_vec`` structure.
+- ``rte_cryptodev_dp_submit_single_job``: submit a single operation.
+
+Neither enqueue function instructs the crypto device to start processing
+until ``rte_cryptodev_dp_submit_done`` is called; until then the driver only
+stores the necessary context data in the ``rte_crypto_dp_service_ctx``
+buffer for the next enqueue operation. To abandon the submitted operations,
+call ``rte_cryptodev_dp_configure_service`` instead, with the ``is_update``
+parameter set to 0; the driver then restores the service context data to its
+previous state.
+
+Two dequeue functions are provided as well:
+
+- ``rte_cryptodev_dp_sym_dequeue``: fully customizable dequeue operation.
+  The user provides callback functions for the driver to obtain the dequeue
+  count and to perform post-processing such as writing the status field.
+- ``rte_cryptodev_dp_sym_dequeue_single_job``: dequeue a single job.
+
+As with enqueue, ``rte_cryptodev_dp_dequeue_done`` is used to merge the
+user's local service context data back into the driver's queue data. To
+abandon a dequeue while keeping the operations in the queue, skip the
+``rte_cryptodev_dp_dequeue_done`` call and call
+``rte_cryptodev_dp_configure_service`` with ``is_update`` set to 0 instead.
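+
+A condensed sketch of the flow described above, modelled on the unit test
+added in this patch series, is shown below. Session creation, mempool setup
+and most error handling are omitted, and the ``dev_id``, ``qp_id``, ``sess``,
+``mbuf``, ``data_len``, ``iv``, ``aad``, ``digest`` and ``user_data``
+parameters, as well as the context-size upper bound, are assumptions made by
+the application:
+
+.. code-block:: c
+
+    /* Application-chosen upper bound for the driver context size. */
+    #define APP_DP_SERVICE_CTX_SIZE 256
+
+    static int
+    aead_dp_service_example(uint8_t dev_id, uint16_t qp_id,
+            struct rte_cryptodev_sym_session *sess,
+            struct rte_mbuf *mbuf, uint32_t data_len,
+            struct rte_crypto_data *iv, struct rte_crypto_data *aad,
+            struct rte_crypto_data *digest, void *user_data)
+    {
+        uint8_t ctx_buf[APP_DP_SERVICE_CTX_SIZE];
+        struct rte_crypto_dp_service_ctx *ctx = (void *)ctx_buf;
+        struct rte_crypto_vec data_vec[UINT8_MAX];
+        /* Whole vector is AEAD data: no head/tail bytes to skip. */
+        union rte_crypto_sym_ofs ofs = { .raw = 0 };
+        union rte_cryptodev_session_ctx sess_ctx = { .crypto_sess = sess };
+        void *out_user_data = NULL;
+        int n, status;
+
+        /* The driver's service context data must fit the local buffer. */
+        if (rte_cryptodev_get_dp_service_ctx_data_size(dev_id) >
+                APP_DP_SERVICE_CTX_SIZE)
+            return -1;
+
+        /* Initialize the context for AEAD jobs on this queue pair. */
+        if (rte_cryptodev_dp_configure_service(dev_id, qp_id,
+                RTE_CRYPTO_DP_SYM_AEAD, RTE_CRYPTO_OP_WITH_SESSION,
+                sess_ctx, ctx, 0) < 0)
+            return -1;
+
+        /* Describe the in-place data buffer as a scatter-gather vector. */
+        n = rte_crypto_mbuf_to_vec(mbuf, 0, data_len, data_vec,
+                RTE_DIM(data_vec));
+        if (n < 0)
+            return -1;
+
+        /* Submit one job; the device is not triggered yet. */
+        if (rte_cryptodev_dp_submit_single_job(ctx, data_vec, n, ofs,
+                iv, digest, aad, user_data) < 0)
+            return -1;
+
+        /* Kick the device to process everything submitted so far. */
+        rte_cryptodev_dp_submit_done(ctx, 1);
+
+        /* Poll for the job; -1 means it is not ready yet. */
+        do {
+            status = rte_cryptodev_dp_sym_dequeue_single_job(ctx,
+                    &out_user_data);
+        } while (status == -1);
+
+        /* Merge the local dequeue state back into the driver's data. */
+        rte_cryptodev_dp_dequeue_done(ctx, 1);
+
+        return (status == 1 && out_user_data == user_data) ? 0 : -1;
+    }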
+
+There are a few limitations to the data-path service:
+
+* Only in-place operations are supported.
+* The APIs are NOT thread-safe.
+* The direct API's enqueue CANNOT be mixed with
+  ``rte_cryptodev_enqueue_burst`` on the same queue pair, or vice versa.
+
+See *DPDK API Reference* for details on each API definition.
+
 Sample code
 -----------