From patchwork Tue Jan 28 14:22:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65233 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 27650A04B3; Tue, 28 Jan 2020 15:22:38 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9B4751C2E1; Tue, 28 Jan 2020 15:22:32 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id DDCF01C2B9 for ; Tue, 28 Jan 2020 15:22:27 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:27 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092825" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:25 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:13 +0100 Message-Id: <20200128142220.16644-2-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 1/8] cryptodev: introduce cpu crypto support API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add a new API that allows processing crypto operations in a synchronous manner. Operations are performed on a set of SG arrays. Sync mode is selected by setting an appropriate flag in the xform type number. Cryptodevs that allow the CPU crypto operation mode have to advertise the RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO capability. Signed-off-by: Konstantin Ananyev Signed-off-by: Marcin Smoczynski --- lib/librte_cryptodev/rte_crypto_sym.h | 63 ++++++++++++++++++- lib/librte_cryptodev/rte_cryptodev.c | 35 ++++++++++- lib/librte_cryptodev/rte_cryptodev.h | 22 ++++++- lib/librte_cryptodev/rte_cryptodev_pmd.h | 21 ++++++- .../rte_cryptodev_version.map | 1 + 5 files changed, 138 insertions(+), 4 deletions(-) diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h index bc356f6ff..d6f3105fe 100644 --- a/lib/librte_cryptodev/rte_crypto_sym.h +++ b/lib/librte_cryptodev/rte_crypto_sym.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2019 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ #ifndef _RTE_CRYPTO_SYM_H_ @@ -25,6 +25,67 @@ extern "C" { #include #include +/** + * Crypto IO Vector (in analogy with struct iovec) + * Supposed to be used to pass input/output data buffers for crypto data-path + * functions. 
+ */ +struct rte_crypto_vec { + /** virtual address of the data buffer */ + void *base; + /** IOVA of the data buffer */ + rte_iova_t *iova; + /** length of the data buffer */ + uint32_t len; +}; + +/** + * Crypto scatter-gather list descriptor. Consists of a pointer to an array + * of Crypto IO vectors with its size. + */ +struct rte_crypto_sgl { + /** start of an array of vectors */ + struct rte_crypto_vec *vec; + /** size of an array of vectors */ + uint32_t num; +}; + +/** + * Synchronous operation descriptor. + * Supposed to be used with CPU crypto API call. + */ +struct rte_crypto_sym_vec { + /** array of SGL vectors */ + struct rte_crypto_sgl *sgl; + /** array of pointers to IV */ + void **iv; + /** array of pointers to AAD */ + void **aad; + /** array of pointers to digest */ + void **digest; + /** + * array of statuses for each operation: + * - 0 on success + * - errno on error + */ + int32_t *status; + /** number of operations to perform */ + uint32_t num; +}; + +/** + * used for cpu_crypto_process_bulk() to specify head/tail offsets + * for auth/cipher processing. + */ +union rte_crypto_sym_ofs { + uint64_t raw; + struct { + struct { + uint16_t head; + uint16_t tail; + } auth, cipher; + } ofs; +}; /** Symmetric Cipher Algorithms */ enum rte_crypto_cipher_algorithm { diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c index 5c6359b5c..889d61319 100644 --- a/lib/librte_cryptodev/rte_cryptodev.c +++ b/lib/librte_cryptodev/rte_cryptodev.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2015-2017 Intel Corporation + * Copyright(c) 2015-2020 Intel Corporation */ #include @@ -494,6 +494,8 @@ rte_cryptodev_get_feature_name(uint64_t flag) return "RSA_PRIV_OP_KEY_QT"; case RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED: return "DIGEST_ENCRYPTED"; + case RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO: + return "SYM_CPU_CRYPTO"; default: return NULL; } @@ -1619,6 +1621,37 @@ rte_cryptodev_sym_session_get_user_data( return (void *)(sess->sess_data + sess->nb_drivers); } +static inline void +sym_crypto_fill_status(struct rte_crypto_sym_vec *vec, int32_t errnum) +{ + uint32_t i; + for (i = 0; i < vec->num; i++) + vec->status[i] = errnum; +} + +uint32_t +rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, + struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec) +{ + struct rte_cryptodev *dev; + + if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) { + sym_crypto_fill_status(vec, EINVAL); + return 0; + } + + dev = rte_cryptodev_pmd_get_dev(dev_id); + + if (*dev->dev_ops->sym_cpu_process == NULL || + !(dev->feature_flags & RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO)) { + sym_crypto_fill_status(vec, ENOTSUP); + return 0; + } + + return dev->dev_ops->sym_cpu_process(dev, sess, ofs, vec); +} + /** Initialise rte_crypto_op mempool element */ static void rte_crypto_op_init(struct rte_mempool *mempool, diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h index c6ffa3b35..7603af9f6 100644 --- a/lib/librte_cryptodev/rte_cryptodev.h +++ b/lib/librte_cryptodev/rte_cryptodev.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2015-2017 Intel Corporation. + * Copyright(c) 2015-2020 Intel Corporation. 
*/ #ifndef _RTE_CRYPTODEV_H_ @@ -450,6 +450,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum, /**< Support encrypted-digest operations where digest is appended to data */ #define RTE_CRYPTODEV_FF_ASYM_SESSIONLESS (1ULL << 20) /**< Support asymmetric session-less operations */ +#define RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO (1ULL << 21) +/**< Support symmetric cpu-crypto processing */ /** @@ -1274,6 +1276,24 @@ void * rte_cryptodev_sym_session_get_user_data( struct rte_cryptodev_sym_session *sess); +/** + * Perform actual crypto processing (encrypt/digest or auth/decrypt) + * on user-provided data. + * + * @param dev_id The device identifier. + * @param sess Cryptodev session structure + * @param ofs Start and stop offsets for auth and cipher operations + * @param vec Vectorized operation descriptor + * + * @return + * - Returns number of successfully processed packets. + */ +__rte_experimental +uint32_t +rte_cryptodev_sym_cpu_crypto_process(uint8_t dev_id, + struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec); + #ifdef __cplusplus } #endif diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h index fba14f2fa..0e6b5f443 100644 --- a/lib/librte_cryptodev/rte_cryptodev_pmd.h +++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2015-2016 Intel Corporation. + * Copyright(c) 2015-2020 Intel Corporation. */ #ifndef _RTE_CRYPTODEV_PMD_H_ @@ -308,6 +308,23 @@ typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev, */ typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev, struct rte_cryptodev_asym_session *sess); +/** + * Perform actual crypto processing (encrypt/digest or auth/decrypt) + * on user-provided data. + * + * @param dev Crypto device pointer + * @param sess Cryptodev session structure + * @param ofs Start and stop offsets for auth and cipher operations + * @param vec Vectorized operation descriptor + * + * @return + * - Returns number of successfully processed packets. + * + */ +typedef uint32_t (*cryptodev_sym_cpu_crypto_process_t) + (struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *sess, + union rte_crypto_sym_ofs ofs, struct rte_crypto_sym_vec *vec); + /** Crypto device operations function pointer table */ struct rte_cryptodev_ops { @@ -342,6 +359,8 @@ struct rte_cryptodev_ops { /**< Clear a Crypto sessions private data. */ cryptodev_asym_free_session_t asym_session_clear; /**< Clear a Crypto sessions private data. */ + cryptodev_sym_cpu_crypto_process_t sym_cpu_process; + /**< process input data synchronously (cpu-crypto). 
*/ }; diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map index 1dd1e259a..6e41b4be5 100644 --- a/lib/librte_cryptodev/rte_cryptodev_version.map +++ b/lib/librte_cryptodev/rte_cryptodev_version.map @@ -71,6 +71,7 @@ EXPERIMENTAL { rte_cryptodev_asym_session_init; rte_cryptodev_asym_xform_capability_check_modlen; rte_cryptodev_asym_xform_capability_check_optype; + rte_cryptodev_sym_cpu_crypto_process; rte_cryptodev_sym_get_existing_header_session_size; rte_cryptodev_sym_session_get_user_data; rte_cryptodev_sym_session_pool_create; From patchwork Tue Jan 28 14:22:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65234 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6F246A04B3; Tue, 28 Jan 2020 15:22:49 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 03B841C2EC; Tue, 28 Jan 2020 15:22:34 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 79D711C2DE for ; Tue, 28 Jan 2020 15:22:30 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:30 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092830" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:27 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:14 +0100 Message-Id: <20200128142220.16644-3-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 2/8] crypto/aesni_gcm: cpu crypto support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for the CPU crypto mode by introducing the required handler. The crypto mode (sync/async) is chosen during sym session creation if an appropriate flag is set in the xform type number. Authenticated encryption and decryption are supported with tag generation/verification. 
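For illustration, a minimal usage sketch of the synchronous API implemented by this driver (not part of the patch): 'sess' is assumed to be a fully initialized AES-GCM symmetric session on a device that advertises RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO, and all buffer pointers are placeholders owned by the caller.

#include <rte_cryptodev.h>

/*
 * Illustrative sketch: process one contiguous buffer through the
 * synchronous CPU crypto path. The iova field of the vector is left
 * unset; this path works on virtual addresses only.
 */
static int
sync_process_one_buf(uint8_t dev_id, struct rte_cryptodev_sym_session *sess,
	void *buf, uint32_t len, void *iv, void *aad, void *digest)
{
	struct rte_crypto_vec seg = { .base = buf, .len = len };
	struct rte_crypto_sgl sgl = { .vec = &seg, .num = 1 };
	int32_t status = 0;
	struct rte_crypto_sym_vec vec = {
		.sgl = &sgl,
		.iv = &iv,
		.aad = &aad,
		.digest = &digest,
		.status = &status,
		.num = 1,
	};
	union rte_crypto_sym_ofs ofs = { .raw = 0 };

	/* returns the number of successfully processed operations */
	if (rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs, &vec) != 1)
		return status != 0 ? -status : -1;
	return 0;
}

Since the crypto work happens synchronously in the caller's context, no crypto-op mempool and no queue-pair enqueue/dequeue are involved, which is the main difference from the existing asynchronous data path.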
Signed-off-by: Marcin Smoczynski Acked-by: Pablo de Lara Acked-by: Fan Zhang Tested-by: Konstantin Ananyev Acked-by: Konstantin Ananyev --- drivers/crypto/aesni_gcm/aesni_gcm_ops.h | 11 +- drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 222 +++++++++++++++++- drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 4 +- .../crypto/aesni_gcm/aesni_gcm_pmd_private.h | 13 +- 4 files changed, 240 insertions(+), 10 deletions(-) diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h index e272f1067..74acac09c 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_ops.h +++ b/drivers/crypto/aesni_gcm/aesni_gcm_ops.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ #ifndef _AESNI_GCM_OPS_H_ @@ -65,4 +65,13 @@ struct aesni_gcm_ops { aesni_gcm_finalize_t finalize_dec; }; +/** GCM per-session operation handlers */ +struct aesni_gcm_session_ops { + aesni_gcm_t cipher; + aesni_gcm_pre_t pre; + aesni_gcm_init_t init; + aesni_gcm_update_t update; + aesni_gcm_finalize_t finalize; +}; + #endif /* _AESNI_GCM_OPS_H_ */ diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c index 1a03be31d..a1caab993 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ #include @@ -15,6 +15,31 @@ static uint8_t cryptodev_driver_id; +/* setup session handlers */ +static void +set_func_ops(struct aesni_gcm_session *s, const struct aesni_gcm_ops *gcm_ops) +{ + s->ops.pre = gcm_ops->pre; + s->ops.init = gcm_ops->init; + + switch (s->op) { + case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION: + s->ops.cipher = gcm_ops->enc; + s->ops.update = gcm_ops->update_enc; + s->ops.finalize = gcm_ops->finalize_enc; + break; + case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION: + s->ops.cipher = gcm_ops->dec; + s->ops.update = gcm_ops->update_dec; + s->ops.finalize = gcm_ops->finalize_dec; + break; + case AESNI_GMAC_OP_GENERATE: + case AESNI_GMAC_OP_VERIFY: + s->ops.finalize = gcm_ops->finalize_enc; + break; + } +} + /** Parse crypto xform chain and set private session parameters */ int aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, @@ -65,6 +90,7 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, /* Select Crypto operation */ if (aead_xform->aead.op == RTE_CRYPTO_AEAD_OP_ENCRYPT) sess->op = AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION; + /* op == RTE_CRYPTO_AEAD_OP_DECRYPT */ else sess->op = AESNI_GCM_OP_AUTHENTICATED_DECRYPTION; @@ -78,7 +104,6 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, return -ENOTSUP; } - /* IV check */ if (sess->iv.length != 16 && sess->iv.length != 12 && sess->iv.length != 0) { @@ -102,6 +127,10 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *gcm_ops, return -EINVAL; } + /* setup session handlers */ + set_func_ops(sess, &gcm_ops[sess->key]); + + /* pre-generate key */ gcm_ops[sess->key].pre(key, &sess->gdata_key); /* Digest check */ @@ -356,6 +385,191 @@ process_gcm_crypto_op(struct aesni_gcm_qp *qp, struct rte_crypto_op *op, return 0; } +static inline void +aesni_gcm_fill_error_code(struct rte_crypto_sym_vec *vec, int32_t errnum) +{ + uint32_t i; + + for (i = 0; i < vec->num; i++) + vec->status[i] = errnum; +} + + +static inline int32_t +aesni_gcm_sgl_op_finalize_encryption(const struct 
aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, uint8_t *digest) +{ + if (s->req_digest_length != s->gen_digest_length) { + uint8_t tmpdigest[s->gen_digest_length]; + + s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest, + s->gen_digest_length); + memcpy(digest, tmpdigest, s->req_digest_length); + } else { + s->ops.finalize(&s->gdata_key, gdata_ctx, digest, + s->gen_digest_length); + } + + return 0; +} + +static inline int32_t +aesni_gcm_sgl_op_finalize_decryption(const struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, uint8_t *digest) +{ + uint8_t tmpdigest[s->gen_digest_length]; + + s->ops.finalize(&s->gdata_key, gdata_ctx, tmpdigest, + s->gen_digest_length); + + return memcmp(digest, tmpdigest, s->req_digest_length) == 0 ? 0 : + EBADMSG; +} + +static inline void +aesni_gcm_process_gcm_sgl_op(const struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl, + void *iv, void *aad) +{ + uint32_t i; + + /* init crypto operation */ + s->ops.init(&s->gdata_key, gdata_ctx, iv, aad, + (uint64_t)s->aad_length); + + /* update with sgl data */ + for (i = 0; i < sgl->num; i++) { + struct rte_crypto_vec *vec = &sgl->vec[i]; + + s->ops.update(&s->gdata_key, gdata_ctx, vec->base, vec->base, + vec->len); + } +} + +static inline void +aesni_gcm_process_gmac_sgl_op(const struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, struct rte_crypto_sgl *sgl, + void *iv) +{ + s->ops.init(&s->gdata_key, gdata_ctx, iv, sgl->vec[0].base, + sgl->vec[0].len); +} + +static inline uint32_t +aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec) +{ + uint32_t i, processed; + + processed = 0; + for (i = 0; i < vec->num; ++i) { + aesni_gcm_process_gcm_sgl_op(s, gdata_ctx, + &vec->sgl[i], vec->iv[i], vec->aad[i]); + vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s, + gdata_ctx, vec->digest[i]); + processed += (vec->status[i] == 0); + } + + return processed; +} + +static inline uint32_t +aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec) +{ + uint32_t i, processed; + + processed = 0; + for (i = 0; i < vec->num; ++i) { + aesni_gcm_process_gcm_sgl_op(s, gdata_ctx, + &vec->sgl[i], vec->iv[i], vec->aad[i]); + vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s, + gdata_ctx, vec->digest[i]); + processed += (vec->status[i] == 0); + } + + return processed; +} + +static inline uint32_t +aesni_gmac_sgl_generate(struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec) +{ + uint32_t i, processed; + + processed = 0; + for (i = 0; i < vec->num; ++i) { + if (vec->sgl[i].num != 1) { + vec->status[i] = ENOTSUP; + continue; + } + + aesni_gcm_process_gmac_sgl_op(s, gdata_ctx, + &vec->sgl[i], vec->iv[i]); + vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s, + gdata_ctx, vec->digest[i]); + processed += (vec->status[i] == 0); + } + + return processed; +} + +static inline uint32_t +aesni_gmac_sgl_verify(struct aesni_gcm_session *s, + struct gcm_context_data *gdata_ctx, struct rte_crypto_sym_vec *vec) +{ + uint32_t i, processed; + + processed = 0; + for (i = 0; i < vec->num; ++i) { + if (vec->sgl[i].num != 1) { + vec->status[i] = ENOTSUP; + continue; + } + + aesni_gcm_process_gmac_sgl_op(s, gdata_ctx, + &vec->sgl[i], vec->iv[i]); + vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s, + gdata_ctx, vec->digest[i]); + processed += (vec->status[i] == 0); + } + + return 
processed; +} + +/** Process CPU crypto bulk operations */ +uint32_t +aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev, + struct rte_cryptodev_sym_session *sess, + __rte_unused union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec) +{ + void *sess_priv; + struct aesni_gcm_session *s; + struct gcm_context_data gdata_ctx; + + sess_priv = get_sym_session_private_data(sess, dev->driver_id); + if (unlikely(sess_priv == NULL)) { + aesni_gcm_fill_error_code(vec, EINVAL); + return 0; + } + + s = sess_priv; + switch (s->op) { + case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION: + return aesni_gcm_sgl_encrypt(s, &gdata_ctx, vec); + case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION: + return aesni_gcm_sgl_decrypt(s, &gdata_ctx, vec); + case AESNI_GMAC_OP_GENERATE: + return aesni_gmac_sgl_generate(s, &gdata_ctx, vec); + case AESNI_GMAC_OP_VERIFY: + return aesni_gmac_sgl_verify(s, &gdata_ctx, vec); + default: + aesni_gcm_fill_error_code(vec, EINVAL); + return 0; + } +} + /** * Process a completed job and return rte_mbuf which job processed * @@ -527,7 +741,8 @@ aesni_gcm_create(const char *name, RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_IN_PLACE_SGL | RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | + RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO; /* Check CPU for support for AES instruction set */ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) @@ -672,7 +887,6 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD, RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_crypto_drv, aesni_gcm_pmd_drv.driver, cryptodev_driver_id); - RTE_INIT(aesni_gcm_init_log) { aesni_gcm_logtype_driver = rte_log_register("pmd.crypto.aesni_gcm"); diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c index 2f66c7c58..c5e0878f5 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ #include @@ -331,6 +331,8 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = { .queue_pair_release = aesni_gcm_pmd_qp_release, .queue_pair_count = aesni_gcm_pmd_qp_count, + .sym_cpu_process = aesni_gcm_pmd_cpu_crypto_process, + .sym_session_get_size = aesni_gcm_pmd_sym_session_get_size, .sym_session_configure = aesni_gcm_pmd_sym_session_configure, .sym_session_clear = aesni_gcm_pmd_sym_session_clear diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h index 2039adb53..080d4f7e4 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ #ifndef _AESNI_GCM_PMD_PRIVATE_H_ @@ -92,6 +92,8 @@ struct aesni_gcm_session { /**< GCM key type */ struct gcm_key_data gdata_key; /**< GCM parameters */ + struct aesni_gcm_session_ops ops; + /**< Session handlers */ }; @@ -109,10 +111,13 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops, struct aesni_gcm_session *sess, const struct rte_crypto_sym_xform *xform); - -/** - * Device specific operations function pointer structure */ +/* Device specific operations function pointer structure */ extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops; +/** CPU crypto bulk process handler */ +uint32_t 
+aesni_gcm_pmd_cpu_crypto_process(struct rte_cryptodev *dev, + struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, + struct rte_crypto_sym_vec *vec); #endif /* _AESNI_GCM_PMD_PRIVATE_H_ */ From patchwork Tue Jan 28 14:22:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65235 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id A3CE9A04B3; Tue, 28 Jan 2020 15:23:04 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 099391C2DE; Tue, 28 Jan 2020 15:22:39 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id A55D11C2E9 for ; Tue, 28 Jan 2020 15:22:33 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:33 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092841" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:30 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:15 +0100 Message-Id: <20200128142220.16644-4-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 3/8] test/crypto: add CPU crypto tests X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add unit and performance tests for CPU crypto mode currently implemented by AESNI-GCM cryptodev. Unit tests cover AES-GCM and GMAC test vectors. 
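The test-case headers added here follow the X-macro pattern: each header is a bare list of TEST_EXPAND() entries and is included repeatedly under different TEST_EXPAND definitions, once to emit a test function per case and once to emit the matching test-suite entry. A condensed sketch of the mechanism (with hypothetical names 'case_a', 'case_b' and 'run_case'):

/* cases.h - intentionally no include guard */
TEST_EXPAND(case_a, SGL_ONE_SEG)
TEST_EXPAND(case_b, SGL_MAX_SEG)

/* first expansion: one test function per listed case */
#define TEST_EXPAND(t, o) \
static int test_##t##_##o(void) { return run_case(&t, o); }
#include "cases.h"
#undef TEST_EXPAND

/* second expansion: one suite entry per listed case */
#define TEST_EXPAND(t, o) \
	TEST_CASE_ST(ut_setup, ut_teardown, test_##t##_##o),
#include "cases.h"
#undef TEST_EXPAND

This keeps the long AES-GCM/GMAC vector lists in one place while generating both the test functions and the suite tables used below.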
Signed-off-by: Marcin Smoczynski Acked-by: Pablo de Lara --- app/test/Makefile | 3 +- app/test/cpu_crypto_all_gcm_perf_test_cases.h | 11 + app/test/cpu_crypto_all_gcm_unit_test_cases.h | 49 + .../cpu_crypto_all_gmac_unit_test_cases.h | 7 + app/test/meson.build | 3 +- app/test/test_cryptodev_cpu_crypto.c | 931 ++++++++++++++++++ 6 files changed, 1002 insertions(+), 2 deletions(-) create mode 100644 app/test/cpu_crypto_all_gcm_perf_test_cases.h create mode 100644 app/test/cpu_crypto_all_gcm_unit_test_cases.h create mode 100644 app/test/cpu_crypto_all_gmac_unit_test_cases.h create mode 100644 app/test/test_cryptodev_cpu_crypto.c diff --git a/app/test/Makefile b/app/test/Makefile index 57930c00b..bbe26bd0c 100644 --- a/app/test/Makefile +++ b/app/test/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2010-2017 Intel Corporation +# Copyright(c) 2010-2020 Intel Corporation include $(RTE_SDK)/mk/rte.vars.mk @@ -203,6 +203,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_asym.c +SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_cpu_crypto.c SRCS-$(CONFIG_RTE_LIBRTE_SECURITY) += test_cryptodev_security_pdcp.c SRCS-$(CONFIG_RTE_LIBRTE_METRICS) += test_metrics.c diff --git a/app/test/cpu_crypto_all_gcm_perf_test_cases.h b/app/test/cpu_crypto_all_gcm_perf_test_cases.h new file mode 100644 index 000000000..425fcb510 --- /dev/null +++ b/app/test/cpu_crypto_all_gcm_perf_test_cases.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +TEST_EXPAND(_128, 16, SGL_ONE_SEG) +TEST_EXPAND(_192, 24, SGL_ONE_SEG) +TEST_EXPAND(_256, 32, SGL_ONE_SEG) + +TEST_EXPAND(_128, 16, SGL_MAX_SEG) +TEST_EXPAND(_192, 24, SGL_MAX_SEG) +TEST_EXPAND(_256, 32, SGL_MAX_SEG) diff --git a/app/test/cpu_crypto_all_gcm_unit_test_cases.h b/app/test/cpu_crypto_all_gcm_unit_test_cases.h new file mode 100644 index 000000000..a2bc11b39 --- /dev/null +++ b/app/test/cpu_crypto_all_gcm_unit_test_cases.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +TEST_EXPAND(gcm_test_case_1, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_2, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_3, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_4, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_5, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_6, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_7, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_8, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_192_1, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_192_2, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_192_3, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_192_4, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_192_5, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_192_6, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_192_7, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_256_1, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_256_2, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_256_3, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_256_4, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_256_5, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_256_6, SGL_ONE_SEG) +TEST_EXPAND(gcm_test_case_256_7, SGL_ONE_SEG) + +TEST_EXPAND(gcm_test_case_1, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_2, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_3, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_4, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_5, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_6, SGL_MAX_SEG) 
+TEST_EXPAND(gcm_test_case_7, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_8, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_192_1, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_192_2, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_192_3, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_192_4, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_192_5, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_192_6, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_192_7, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_256_1, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_256_2, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_256_3, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_256_4, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_256_5, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_256_6, SGL_MAX_SEG) +TEST_EXPAND(gcm_test_case_256_7, SGL_MAX_SEG) diff --git a/app/test/cpu_crypto_all_gmac_unit_test_cases.h b/app/test/cpu_crypto_all_gmac_unit_test_cases.h new file mode 100644 index 000000000..97f9c2bec --- /dev/null +++ b/app/test/cpu_crypto_all_gmac_unit_test_cases.h @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +TEST_EXPAND(gmac_test_case_1, SGL_ONE_SEG) +TEST_EXPAND(gmac_test_case_2, SGL_ONE_SEG) +TEST_EXPAND(gmac_test_case_3, SGL_ONE_SEG) diff --git a/app/test/meson.build b/app/test/meson.build index 22b0cefaa..175e3c0fd 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -1,5 +1,5 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2017 Intel Corporation +# Copyright(c) 2017-2020 Intel Corporation if not get_option('tests') subdir_done() @@ -30,6 +30,7 @@ test_sources = files('commands.c', 'test_cryptodev.c', 'test_cryptodev_asym.c', 'test_cryptodev_blockcipher.c', + 'test_cryptodev_cpu_crypto.c', 'test_cryptodev_security_pdcp.c', 'test_cycles.c', 'test_debug.c', diff --git a/app/test/test_cryptodev_cpu_crypto.c b/app/test/test_cryptodev_cpu_crypto.c new file mode 100644 index 000000000..4393bcdcc --- /dev/null +++ b/app/test/test_cryptodev_cpu_crypto.c @@ -0,0 +1,931 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include "test.h" +#include "test_cryptodev.h" +#include "test_cryptodev_blockcipher.h" +#include "test_cryptodev_aes_test_vectors.h" +#include "test_cryptodev_aead_test_vectors.h" +#include "test_cryptodev_des_test_vectors.h" +#include "test_cryptodev_hash_test_vectors.h" + +#define CPU_CRYPTO_TEST_MAX_AAD_LENGTH 16 +#define MAX_NB_SEGMENTS 4 +#define CACHE_WARM_ITER 2048 +#define MAX_SEG_SIZE 2048 + +#define TOP_ENC BLOCKCIPHER_TEST_OP_ENCRYPT +#define TOP_DEC BLOCKCIPHER_TEST_OP_DECRYPT +#define TOP_AUTH_GEN BLOCKCIPHER_TEST_OP_AUTH_GEN +#define TOP_AUTH_VER BLOCKCIPHER_TEST_OP_AUTH_VERIFY +#define TOP_ENC_AUTH BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN +#define TOP_AUTH_DEC BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC + +enum buffer_assemble_option { + SGL_MAX_SEG, + SGL_ONE_SEG, +}; + +struct cpu_crypto_test_case { + struct { + uint8_t seg[MAX_SEG_SIZE]; + uint32_t seg_len; + } seg_buf[MAX_NB_SEGMENTS]; + uint8_t iv[MAXIMUM_IV_LENGTH * 2]; + uint8_t aad[CPU_CRYPTO_TEST_MAX_AAD_LENGTH * 4]; + uint8_t digest[DIGEST_BYTE_LENGTH_SHA512]; +} __rte_cache_aligned; + +struct cpu_crypto_test_obj { + struct rte_crypto_vec vec[MAX_NUM_OPS_INFLIGHT][MAX_NB_SEGMENTS]; + struct rte_crypto_sgl sec_buf[MAX_NUM_OPS_INFLIGHT]; + void *iv[MAX_NUM_OPS_INFLIGHT]; + void *digest[MAX_NUM_OPS_INFLIGHT]; + void *aad[MAX_NUM_OPS_INFLIGHT]; + int status[MAX_NUM_OPS_INFLIGHT]; 
+}; + +struct cpu_crypto_testsuite_params { + struct rte_mempool *buf_pool; + struct rte_mempool *session_priv_mpool; +}; + +struct cpu_crypto_unittest_params { + struct rte_cryptodev_sym_session *sess; + void *test_datas[MAX_NUM_OPS_INFLIGHT]; + struct cpu_crypto_test_obj test_obj; + uint32_t nb_bufs; +}; + +static struct cpu_crypto_testsuite_params testsuite_params; +static struct cpu_crypto_unittest_params unittest_params; + +static int gbl_driver_id; + +static uint32_t valid_dev; + +static int +testsuite_setup(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + uint32_t i, nb_devs; + size_t sess_sz; + struct rte_cryptodev_info info; + + const char * const pool_name = "CPU_CRYPTO_MBUFPOOL"; + + memset(ts_params, 0, sizeof(*ts_params)); + + ts_params->buf_pool = rte_mempool_lookup(pool_name); + if (ts_params->buf_pool == NULL) { + /* Not already created so create */ + ts_params->buf_pool = rte_pktmbuf_pool_create(pool_name, + NUM_MBUFS, MBUF_CACHE_SIZE, 0, + sizeof(struct cpu_crypto_test_case), + rte_socket_id()); + if (ts_params->buf_pool == NULL) { + RTE_LOG(ERR, USER1, "Can't create %s\n", pool_name); + return TEST_FAILED; + } + } + + /* Create an AESNI GCM device if required */ + if (gbl_driver_id == rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) { + nb_devs = rte_cryptodev_device_count_by_driver( + rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))); + if (nb_devs < 1) { + TEST_ASSERT_SUCCESS(rte_vdev_init( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL), + "Failed to create instance of" + " pmd : %s", + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + } + } + + nb_devs = rte_cryptodev_count(); + if (nb_devs < 1) { + RTE_LOG(ERR, USER1, "No crypto devices found?\n"); + return TEST_FAILED; + } + + /* get first valid crypto dev */ + valid_dev = UINT32_MAX; + for (i = 0; i < nb_devs; i++) { + rte_cryptodev_info_get(i, &info); + if (info.driver_id == gbl_driver_id && + (info.feature_flags & + RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO) != 0) { + valid_dev = i; + break; + } + } + + if (valid_dev == UINT32_MAX) { + RTE_LOG(ERR, USER1, "No crypto devices that support CPU mode\n"); + return TEST_FAILED; + } + + RTE_LOG(INFO, USER1, "Crypto device %u selected for CPU mode test\n", + valid_dev); + + /* get session size */ + sess_sz = rte_cryptodev_sym_get_private_session_size(valid_dev); + + ts_params->session_priv_mpool = rte_cryptodev_sym_session_pool_create( + "CRYPTO_SESPOOL", 2, sess_sz, 0, 0, SOCKET_ID_ANY); + if (!ts_params->session_priv_mpool) { + RTE_LOG(ERR, USER1, "Not enough memory\n"); + return TEST_FAILED; + } + + return TEST_SUCCESS; +} + +static void +testsuite_teardown(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + + if (ts_params->buf_pool) + rte_mempool_free(ts_params->buf_pool); + + if (ts_params->session_priv_mpool) + rte_mempool_free(ts_params->session_priv_mpool); +} + +static int +ut_setup(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + + memset(ut_params, 0, sizeof(*ut_params)); + + ut_params->sess = rte_cryptodev_sym_session_create( + ts_params->session_priv_mpool); + + return ut_params->sess ? 
TEST_SUCCESS : TEST_FAILED; +} + +static void +ut_teardown(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + + if (ut_params->sess) { + rte_cryptodev_sym_session_clear(valid_dev, ut_params->sess); + rte_cryptodev_sym_session_free(ut_params->sess); + ut_params->sess = NULL; + } + + if (ut_params->nb_bufs) { + uint32_t i; + + for (i = 0; i < ut_params->nb_bufs; i++) + memset(ut_params->test_datas[i], 0, + sizeof(struct cpu_crypto_test_case)); + + rte_mempool_put_bulk(ts_params->buf_pool, ut_params->test_datas, + ut_params->nb_bufs); + } +} + +static int +allocate_buf(uint32_t n) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + int ret; + + ret = rte_mempool_get_bulk(ts_params->buf_pool, ut_params->test_datas, + n); + + if (ret == 0) + ut_params->nb_bufs = n; + + return ret; +} + +static int +check_status(struct cpu_crypto_test_obj *obj, uint32_t n) +{ + uint32_t i; + + for (i = 0; i < n; i++) + if (obj->status[i] != 0) + return -1; + + return 0; +} + +static inline int +init_aead_session(struct rte_cryptodev_sym_session *ses, + struct rte_mempool *sess_mp, + enum rte_crypto_aead_operation op, + const struct aead_test_data *test_data, + uint32_t is_unit_test) +{ + struct rte_crypto_sym_xform xform = {0}; + + if (is_unit_test) + debug_hexdump(stdout, "key:", test_data->key.data, + test_data->key.len); + + /* Setup AEAD Parameters */ + xform.type = RTE_CRYPTO_SYM_XFORM_AEAD; + xform.next = NULL; + xform.aead.algo = test_data->algo; + xform.aead.op = op; + xform.aead.key.data = test_data->key.data; + xform.aead.key.length = test_data->key.len; + xform.aead.iv.offset = 0; + xform.aead.iv.length = test_data->iv.len; + xform.aead.digest_length = test_data->auth_tag.len; + xform.aead.aad_length = test_data->aad.len; + + return rte_cryptodev_sym_session_init(valid_dev, ses, &xform, + sess_mp); +} + +static inline int +init_gmac_session(struct rte_cryptodev_sym_session *ses, + struct rte_mempool *sess_mp, + enum rte_crypto_auth_operation op, + const struct gmac_test_data *test_data, + uint32_t is_unit_test) +{ + struct rte_crypto_sym_xform xform = {0}; + + if (is_unit_test) + debug_hexdump(stdout, "key:", test_data->key.data, + test_data->key.len); + + /* Setup GMAC (auth) Parameters */ + xform.type = RTE_CRYPTO_SYM_XFORM_AUTH; + xform.next = NULL; + xform.auth.algo = RTE_CRYPTO_AUTH_AES_GMAC; + xform.auth.op = op; + xform.auth.digest_length = test_data->gmac_tag.len; + xform.auth.key.length = test_data->key.len; + xform.auth.key.data = test_data->key.data; + xform.auth.iv.length = test_data->iv.len; + xform.auth.iv.offset = 0; + + return rte_cryptodev_sym_session_init(valid_dev, ses, &xform, sess_mp); +} + + +static inline int +prepare_sgl(struct cpu_crypto_test_case *data, + struct cpu_crypto_test_obj *obj, + uint32_t obj_idx, + enum buffer_assemble_option sgl_option, + const uint8_t *src, + uint32_t src_len) +{ + uint32_t seg_idx; + uint32_t bytes_per_seg; + uint32_t left; + + switch (sgl_option) { + case SGL_MAX_SEG: + seg_idx = 0; + bytes_per_seg = src_len / MAX_NB_SEGMENTS + 1; + left = src_len; + + if (bytes_per_seg > MAX_SEG_SIZE) + return -ENOMEM; + + while (left) { + uint32_t cp_len = RTE_MIN(left, bytes_per_seg); + memcpy(data->seg_buf[seg_idx].seg, src, cp_len); + data->seg_buf[seg_idx].seg_len = cp_len; + obj->vec[obj_idx][seg_idx].base = + (void *)data->seg_buf[seg_idx].seg; + obj->vec[obj_idx][seg_idx].len = 
cp_len; + src += cp_len; + left -= cp_len; + seg_idx++; + } + + if (left) + return -ENOMEM; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = seg_idx; + + break; + case SGL_ONE_SEG: + memcpy(data->seg_buf[0].seg, src, src_len); + data->seg_buf[0].seg_len = src_len; + obj->vec[obj_idx][0].base = + (void *)data->seg_buf[0].seg; + obj->vec[obj_idx][0].len = src_len; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = 1; + break; + default: + return -1; + } + + return 0; +} + +static inline int +assemble_aead_buf(struct cpu_crypto_test_case *data, + struct cpu_crypto_test_obj *obj, + uint32_t obj_idx, + enum rte_crypto_aead_operation op, + const struct aead_test_data *test_data, + enum buffer_assemble_option sgl_option, + uint32_t is_unit_test) +{ + const uint8_t *src; + uint32_t src_len; + int ret; + + if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { + src = test_data->plaintext.data; + src_len = test_data->plaintext.len; + if (is_unit_test) + debug_hexdump(stdout, "plaintext:", src, src_len); + } else { + src = test_data->ciphertext.data; + src_len = test_data->ciphertext.len; + memcpy(data->digest, test_data->auth_tag.data, + test_data->auth_tag.len); + if (is_unit_test) { + debug_hexdump(stdout, "ciphertext:", src, src_len); + debug_hexdump(stdout, "digest:", + test_data->auth_tag.data, + test_data->auth_tag.len); + } + } + + if (src_len > MAX_SEG_SIZE) + return -ENOMEM; + + ret = prepare_sgl(data, obj, obj_idx, sgl_option, src, src_len); + if (ret < 0) + return ret; + + memcpy(data->iv, test_data->iv.data, test_data->iv.len); + memcpy(data->aad, test_data->aad.data, test_data->aad.len); + + if (is_unit_test) { + debug_hexdump(stdout, "iv:", test_data->iv.data, + test_data->iv.len); + debug_hexdump(stdout, "aad:", test_data->aad.data, + test_data->aad.len); + } + + obj->iv[obj_idx] = (void *)data->iv; + obj->digest[obj_idx] = (void *)data->digest; + obj->aad[obj_idx] = (void *)data->aad; + + return 0; +} + +static inline int +assemble_gmac_buf(struct cpu_crypto_test_case *data, + struct cpu_crypto_test_obj *obj, + uint32_t obj_idx, + enum rte_crypto_auth_operation op, + const struct gmac_test_data *test_data, + enum buffer_assemble_option sgl_option, + uint32_t is_unit_test) +{ + const uint8_t *src; + uint32_t src_len; + int ret; + + if (op == RTE_CRYPTO_AUTH_OP_GENERATE) { + src = test_data->plaintext.data; + src_len = test_data->plaintext.len; + if (is_unit_test) + debug_hexdump(stdout, "plaintext:", src, src_len); + } else { + src = test_data->plaintext.data; + src_len = test_data->plaintext.len; + memcpy(data->digest, test_data->gmac_tag.data, + test_data->gmac_tag.len); + if (is_unit_test) + debug_hexdump(stdout, "gmac_tag:", src, src_len); + } + + if (src_len > MAX_SEG_SIZE) + return -ENOMEM; + + ret = prepare_sgl(data, obj, obj_idx, sgl_option, src, src_len); + if (ret < 0) + return ret; + + memcpy(data->iv, test_data->iv.data, test_data->iv.len); + + if (is_unit_test) { + debug_hexdump(stdout, "iv:", test_data->iv.data, + test_data->iv.len); + } + + obj->iv[obj_idx] = (void *)data->iv; + obj->digest[obj_idx] = (void *)data->digest; + + return 0; +} + +#define CPU_CRYPTO_ERR_EXP_CT "expect ciphertext:" +#define CPU_CRYPTO_ERR_GEN_CT "gen ciphertext:" +#define CPU_CRYPTO_ERR_EXP_PT "expect plaintext:" +#define CPU_CRYPTO_ERR_GEN_PT "gen plaintext:" + +static int +check_aead_result(struct cpu_crypto_test_case *tcase, + enum rte_crypto_aead_operation op, + const struct aead_test_data *tdata) +{ + const char *err_msg1, *err_msg2; + 
const uint8_t *src_pt_ct; + const uint8_t *tmp_src; + uint32_t src_len; + uint32_t left; + uint32_t i = 0; + int ret; + + if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { + err_msg1 = CPU_CRYPTO_ERR_EXP_CT; + err_msg2 = CPU_CRYPTO_ERR_GEN_CT; + + src_pt_ct = tdata->ciphertext.data; + src_len = tdata->ciphertext.len; + + ret = memcmp(tcase->digest, tdata->auth_tag.data, + tdata->auth_tag.len); + if (ret != 0) { + debug_hexdump(stdout, "expect digest:", + tdata->auth_tag.data, + tdata->auth_tag.len); + debug_hexdump(stdout, "gen digest:", + tcase->digest, + tdata->auth_tag.len); + return -1; + } + } else { + src_pt_ct = tdata->plaintext.data; + src_len = tdata->plaintext.len; + err_msg1 = CPU_CRYPTO_ERR_EXP_PT; + err_msg2 = CPU_CRYPTO_ERR_GEN_PT; + } + + tmp_src = src_pt_ct; + left = src_len; + + while (left && i < MAX_NB_SEGMENTS) { + ret = memcmp(tcase->seg_buf[i].seg, tmp_src, + tcase->seg_buf[i].seg_len); + if (ret != 0) + goto sgl_err_dump; + tmp_src += tcase->seg_buf[i].seg_len; + left -= tcase->seg_buf[i].seg_len; + i++; + } + + if (left) { + ret = -ENOMEM; + goto sgl_err_dump; + } + + return 0; + +sgl_err_dump: + left = src_len; + i = 0; + + debug_hexdump(stdout, err_msg1, + tdata->ciphertext.data, + tdata->ciphertext.len); + + while (left && i < MAX_NB_SEGMENTS) { + debug_hexdump(stdout, err_msg2, + tcase->seg_buf[i].seg, + tcase->seg_buf[i].seg_len); + left -= tcase->seg_buf[i].seg_len; + i++; + } + return ret; +} + +static int +check_gmac_result(struct cpu_crypto_test_case *tcase, + enum rte_crypto_auth_operation op, + const struct gmac_test_data *tdata) +{ + int ret; + + if (op == RTE_CRYPTO_AUTH_OP_GENERATE) { + ret = memcmp(tcase->digest, tdata->gmac_tag.data, + tdata->gmac_tag.len); + if (ret != 0) { + debug_hexdump(stdout, "expect digest:", + tdata->gmac_tag.data, + tdata->gmac_tag.len); + debug_hexdump(stdout, "gen digest:", + tcase->digest, + tdata->gmac_tag.len); + return -1; + } + } + + return 0; +} + +static inline int32_t +run_test(struct rte_cryptodev_sym_session *sess, union rte_crypto_sym_ofs ofs, + struct cpu_crypto_test_obj *obj, uint32_t n) +{ + struct rte_crypto_sym_vec symvec; + + symvec.sgl = obj->sec_buf; + symvec.iv = obj->iv; + symvec.aad = obj->aad; + symvec.digest = obj->digest; + symvec.status = obj->status; + symvec.num = n; + + return rte_cryptodev_sym_cpu_crypto_process(valid_dev, sess, ofs, + &symvec); +} + +static int +cpu_crypto_test_aead(const struct aead_test_data *tdata, + enum rte_crypto_aead_operation dir, + enum buffer_assemble_option sgl_option) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + union rte_crypto_sym_ofs ofs; + int ret; + + ret = init_aead_session(ut_params->sess, ts_params->session_priv_mpool, + dir, tdata, 1); + if (ret < 0) + return ret; + + ret = allocate_buf(1); + if (ret) + return ret; + + tcase = ut_params->test_datas[0]; + ret = assemble_aead_buf(tcase, obj, 0, dir, tdata, sgl_option, 1); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + + /* prepare offset descriptor */ + ofs.raw = 0; + + run_test(ut_params->sess, ofs, obj, 1); + + ret = check_status(obj, 1); + if (ret < 0) + return ret; + + ret = check_aead_result(tcase, dir, tdata); + if (ret < 0) + return ret; + + return 0; +} + +static int +cpu_crypto_test_gmac(const struct gmac_test_data *tdata, + enum rte_crypto_auth_operation dir, + enum 
buffer_assemble_option sgl_option) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + union rte_crypto_sym_ofs ofs; + int ret; + + ret = init_gmac_session(ut_params->sess, ts_params->session_priv_mpool, + dir, tdata, 1); + if (ret < 0) + return ret; + + ret = allocate_buf(1); + if (ret) + return ret; + + tcase = ut_params->test_datas[0]; + ret = assemble_gmac_buf(tcase, obj, 0, dir, tdata, sgl_option, 1); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + + /* prepare offset descriptor */ + ofs.raw = 0; + + run_test(ut_params->sess, ofs, obj, 1); + + ret = check_status(obj, 1); + if (ret < 0) + return ret; + + ret = check_gmac_result(tcase, dir, tdata); + if (ret < 0) + return ret; + + return 0; +} + +#define TEST_EXPAND(t, o) \ +static int \ +cpu_crypto_aead_enc_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_ENCRYPT, o); \ +} \ +static int \ +cpu_crypto_aead_dec_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_DECRYPT, o); \ +} \ + +#include "cpu_crypto_all_gcm_unit_test_cases.h" +#undef TEST_EXPAND + +#define TEST_EXPAND(t, o) \ +static int \ +cpu_crypto_gmac_gen_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_gmac(&t, RTE_CRYPTO_AUTH_OP_GENERATE, o);\ +} \ +static int \ +cpu_crypto_gmac_ver_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_gmac(&t, RTE_CRYPTO_AUTH_OP_VERIFY, o); \ +} + +#include "cpu_crypto_all_gmac_unit_test_cases.h" +#undef TEST_EXPAND + +static struct unit_test_suite cpu_crypto_aesgcm_testsuite = { + .suite_name = "CPU Crypto AESNI-GCM Unit Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { + +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_aead_enc_test_##t##_##o), + +#include "cpu_crypto_all_gcm_unit_test_cases.h" +#undef TEST_EXPAND + +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_aead_dec_test_##t##_##o), + +#include "cpu_crypto_all_gcm_unit_test_cases.h" +#undef TEST_EXPAND + +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_gmac_gen_test_##t##_##o), + +#include "cpu_crypto_all_gmac_unit_test_cases.h" +#undef TEST_EXPAND + +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, cpu_crypto_gmac_ver_test_##t##_##o), + +#include "cpu_crypto_all_gmac_unit_test_cases.h" +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_cpu_crypto_aesni_gcm(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + + return unit_test_suite_runner(&cpu_crypto_aesgcm_testsuite); +} + + +static inline void +gen_rand(uint8_t *data, uint32_t len) +{ + uint32_t i; + + for (i = 0; i < len; i++) + data[i] = (uint8_t)rte_rand(); +} + +static inline void +switch_aead_enc_to_dec(struct aead_test_data *tdata, + struct cpu_crypto_test_case *tcase, + enum buffer_assemble_option sgl_option) +{ + uint32_t i; + uint8_t *dst = tdata->ciphertext.data; + + switch (sgl_option) { + case SGL_ONE_SEG: + memcpy(dst, tcase->seg_buf[0].seg, tcase->seg_buf[0].seg_len); + tdata->ciphertext.len = tcase->seg_buf[0].seg_len; + break; + case SGL_MAX_SEG: + tdata->ciphertext.len = 0; + for (i = 0; i < MAX_NB_SEGMENTS; i++) { + memcpy(dst, tcase->seg_buf[i].seg, + 
tcase->seg_buf[i].seg_len); + dst += tcase->seg_buf[i].seg_len; + tdata->ciphertext.len += tcase->seg_buf[i].seg_len; + } + break; + } + + memcpy(tdata->auth_tag.data, tcase->digest, tdata->auth_tag.len); +} + +static int +cpu_crypto_test_aead_perf(enum buffer_assemble_option sgl_option, + uint32_t key_sz) +{ + struct aead_test_data tdata = {0}; + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + union rte_crypto_sym_ofs ofs; + uint64_t hz = rte_get_tsc_hz(), time_start, time_now; + double rate, cycles_per_buf; + uint32_t test_data_szs[] = {64, 128, 256, 512, 1024, 2048}; + uint32_t i, j; + uint8_t aad[16]; + int ret; + + tdata.key.len = key_sz; + gen_rand(tdata.key.data, tdata.key.len); + tdata.algo = RTE_CRYPTO_AEAD_AES_GCM; + tdata.aad.data = aad; + ofs.raw = 0; + + if (!ut_params->sess) + return -1; + + init_aead_session(ut_params->sess, ts_params->session_priv_mpool, + RTE_CRYPTO_AEAD_OP_DECRYPT, &tdata, 0); + + ret = allocate_buf(MAX_NUM_OPS_INFLIGHT); + if (ret) + return ret; + + for (i = 0; i < RTE_DIM(test_data_szs); i++) { + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tdata.plaintext.len = test_data_szs[i]; + gen_rand(tdata.plaintext.data, + tdata.plaintext.len); + + tdata.aad.len = 12; + gen_rand(tdata.aad.data, tdata.aad.len); + + tdata.auth_tag.len = 16; + + tdata.iv.len = 16; + gen_rand(tdata.iv.data, tdata.iv.len); + + tcase = ut_params->test_datas[j]; + ret = assemble_aead_buf(tcase, obj, j, + RTE_CRYPTO_AEAD_OP_ENCRYPT, + &tdata, sgl_option, 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + /* warm up cache */ + for (j = 0; j < CACHE_WARM_ITER; j++) + run_test(ut_params->sess, ofs, obj, + MAX_NUM_OPS_INFLIGHT); + + time_start = rte_rdtsc(); + + run_test(ut_params->sess, ofs, obj, MAX_NUM_OPS_INFLIGHT); + + time_now = rte_rdtsc(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("AES-GCM-%u(%4uB) Enc %03.3fMpps (%03.3fGbps) ", + key_sz * 8, test_data_szs[i], rate, + rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tcase = ut_params->test_datas[j]; + + switch_aead_enc_to_dec(&tdata, tcase, sgl_option); + ret = assemble_aead_buf(tcase, obj, j, + RTE_CRYPTO_AEAD_OP_DECRYPT, + &tdata, sgl_option, 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + time_start = rte_get_timer_cycles(); + + run_test(ut_params->sess, ofs, obj, MAX_NUM_OPS_INFLIGHT); + + time_now = rte_get_timer_cycles(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("AES-GCM-%u(%4uB) Dec %03.3fMpps (%03.3fGbps) ", + key_sz * 8, test_data_szs[i], rate, + rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + } + + return 0; +} + +/* test-prefix/key-size/sgl-type */ +#define TEST_EXPAND(a, b, c) \ +static int \ +cpu_crypto_gcm_perf##a##_##c(void) \ +{ \ + return cpu_crypto_test_aead_perf(c, b); \ +} \ + +#include "cpu_crypto_all_gcm_perf_test_cases.h" +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesgcm_perf_testsuite = { + .suite_name = "Security CPU Crypto AESNI-GCM Perf Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(a, b, c) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_gcm_perf##a##_##c), \ + +#include "cpu_crypto_all_gcm_perf_test_cases.h" +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_cpu_crypto_aesni_gcm_perf(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + + return unit_test_suite_runner( + &security_cpu_crypto_aesgcm_perf_testsuite); +} + +REGISTER_TEST_COMMAND(cpu_crypto_aesni_gcm_autotest, + test_cpu_crypto_aesni_gcm); + +REGISTER_TEST_COMMAND(cpu_crypto_aesni_gcm_perftest, + test_cpu_crypto_aesni_gcm_perf); From patchwork Tue Jan 28 14:22:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65236 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id BB34AA04B3; Tue, 28 Jan 2020 15:23:16 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 47C2B1C2FA; Tue, 28 Jan 2020 15:22:40 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id EADB21C2F7 for ; Tue, 28 Jan 2020 15:22:35 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092847" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:33 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:16 +0100 Message-Id: <20200128142220.16644-5-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 4/8] security: add cpu crypto action type X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Introduce a CPU crypto action type that allows differentiating between regular async 'none security' sessions and synchronous, CPU crypto accelerated ones. Signed-off-by: Marcin Smoczynski Acked-by: Konstantin Ananyev Acked-by: Fan Zhang --- lib/librte_security/rte_security.h | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h index 546779df2..c8b2dd5ed 100644 --- a/lib/librte_security/rte_security.h +++ b/lib/librte_security/rte_security.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright 2017,2019 NXP - * Copyright(c) 2017 Intel Corporation. 
+ * Copyright(c) 2017-2020 Intel Corporation. */ #ifndef _RTE_SECURITY_H_ @@ -307,10 +307,14 @@ enum rte_security_session_action_type { /**< All security protocol processing is performed inline during * transmission */ - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, /**< All security protocol processing including crypto is performed * on a lookaside accelerator */ + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO + /**< Crypto processing for security protocol is processed by CPU + * synchronously + */ }; /** Security session protocol definition */ From patchwork Tue Jan 28 14:22:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65237 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id BFF3DA04B3; Tue, 28 Jan 2020 15:23:34 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 35A361C43C; Tue, 28 Jan 2020 15:22:46 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id A336B1C2F7 for ; Tue, 28 Jan 2020 15:22:39 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:39 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092858" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:35 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:17 +0100 Message-Id: <20200128142220.16644-6-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 5/8] ipsec: introduce support for cpu crypto mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Update the library to handle the CPU crypto security mode, which utilizes cryptodev's synchronous, CPU-accelerated crypto operations.
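For reference, below is a minimal usage sketch of the new synchronous data path (illustrative only, not part of the patch itself). It assumes 'ss' is an rte_ipsec_session of action type RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO whose crypto.ses and crypto.dev_id fields have already been initialized; the helper name process_sync() is hypothetical:

#include <rte_ipsec.h>
#include <rte_mbuf.h>

static uint16_t
process_sync(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
	uint16_t num)
{
	uint16_t k;

	/* prepare packets and perform crypto/auth synchronously on the CPU */
	k = rte_ipsec_pkt_cpu_prepare(ss, mb, num);

	/* finalize the packets that were successfully processed */
	k = rte_ipsec_pkt_process(ss, mb, k);

	/* mb[0..k-1] are good; failed mbufs were moved beyond them */
	return k;
}

Note that, unlike the asynchronous path, no rte_crypto_op array is involved and no enqueue/dequeue round trip through a cryptodev queue pair takes place.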
Signed-off-by: Konstantin Ananyev Signed-off-by: Marcin Smoczynski Acked-by: Fan Zhang Tested-by: Konstantin Ananyev --- lib/librte_ipsec/esp_inb.c | 156 ++++++++++++++++++++++++++++++----- lib/librte_ipsec/esp_outb.c | 136 +++++++++++++++++++++++++++--- lib/librte_ipsec/misc.h | 120 ++++++++++++++++++++++++++- lib/librte_ipsec/rte_ipsec.h | 20 ++++- lib/librte_ipsec/sa.c | 114 ++++++++++++++++++++----- lib/librte_ipsec/sa.h | 19 ++++- lib/librte_ipsec/ses.c | 5 +- 7 files changed, 513 insertions(+), 57 deletions(-) diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c index 5c653dd39..7b8ab81f6 100644 --- a/lib/librte_ipsec/esp_inb.c +++ b/lib/librte_ipsec/esp_inb.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #include @@ -105,6 +105,39 @@ inb_cop_prepare(struct rte_crypto_op *cop, } } +static inline uint32_t +inb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t *pofs, uint32_t plen, void *iv) +{ + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint64_t *ivp; + uint32_t clen; + + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + *pofs + sizeof(struct rte_esp_hdr)); + clen = 0; + + switch (sa->algo_type) { + case ALGO_TYPE_AES_GCM: + gcm = (struct aead_gcm_iv *)iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + break; + case ALGO_TYPE_AES_CBC: + case ALGO_TYPE_3DES_CBC: + copy_iv(iv, ivp, sa->iv_len); + break; + case ALGO_TYPE_AES_CTR: + ctr = (struct aesctr_cnt_blk *)iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + } + + *pofs += sa->ctp.auth.offset; + clen = plen - sa->ctp.auth.length; + return clen; +} + /* * Helper function for prepare() to deal with situation when * ICV is spread by two segments. Tries to move ICV completely into the @@ -157,17 +190,12 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, } } -/* - * setup/update packet data and metadata for ESP inbound tunnel case. - */ -static inline int32_t -inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, - struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv) +static inline int +inb_get_sqn(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, + struct rte_mbuf *mb, uint32_t hlen, rte_be64_t *sqc) { int32_t rc; uint64_t sqn; - uint32_t clen, icv_len, icv_ofs, plen; - struct rte_mbuf *ml; struct rte_esp_hdr *esph; esph = rte_pktmbuf_mtod_offset(mb, struct rte_esp_hdr *, hlen); @@ -179,12 +207,21 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, sqn = rte_be_to_cpu_32(esph->seq); if (IS_ESN(sa)) sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz); + *sqc = rte_cpu_to_be_64(sqn); + /* check IPsec window */ rc = esn_inb_check_sqn(rsn, sa, sqn); - if (rc != 0) - return rc; - sqn = rte_cpu_to_be_64(sqn); + return rc; +} + +/* prepare packet for upcoming processing */ +static inline int32_t +inb_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t hlen, union sym_op_data *icv) +{ + uint32_t clen, icv_len, icv_ofs, plen; + struct rte_mbuf *ml; /* start packet manipulation */ plen = mb->pkt_len; @@ -217,7 +254,8 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, icv_ofs += sa->sqh_len; - /* we have to allocate space for AAD somewhere, + /* + * we have to allocate space for AAD somewhere, * right now - just use free trailing space at the last segment. 
* Would probably be more convenient to reserve space for AAD * inside rte_crypto_op itself @@ -238,10 +276,28 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, mb->pkt_len += sa->sqh_len; ml->data_len += sa->sqh_len; - inb_pkt_xprepare(sa, sqn, icv); return plen; } +static inline int32_t +inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, + struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv) +{ + int rc; + rte_be64_t sqn; + + rc = inb_get_sqn(sa, rsn, mb, hlen, &sqn); + if (rc != 0) + return rc; + + rc = inb_prepare(sa, mb, hlen, icv); + if (rc < 0) + return rc; + + inb_pkt_xprepare(sa, sqn, icv); + return rc; +} + /* * setup/update packets and crypto ops for ESP inbound case. */ @@ -270,17 +326,17 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], lksd_none_cop_prepare(cop[k], cs, mb[i]); inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc); k++; - } else + } else { dr[i - k] = i; + rte_errno = -rc; + } } rsn_release(sa, rsn); /* copy not prepared mbufs beyond good ones */ - if (k != num && k != 0) { + if (k != num && k != 0) move_bad_mbufs(mb, dr, num, num - k); - rte_errno = EBADMSG; - } return k; } @@ -512,7 +568,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], return k; } - /* * *process* function for tunnel packets */ @@ -612,7 +667,7 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], if (k != num && k != 0) move_bad_mbufs(mb, dr, num, num - k); - /* update SQN and replay winow */ + /* update SQN and replay window */ n = esp_inb_rsn_update(sa, sqn, dr, k); /* handle mbufs with wrong SQN */ @@ -625,6 +680,67 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], return n; } +/* + * Prepare (plus actual crypto/auth) routine for inbound CPU-CRYPTO + * (synchronous mode). + */ +uint16_t +cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + int32_t rc; + uint32_t i, k; + struct rte_ipsec_sa *sa; + struct replay_sqn *rsn; + union sym_op_data icv; + void *iv[num]; + void *aad[num]; + void *dgst[num]; + uint32_t dr[num]; + uint32_t l4ofs[num]; + uint32_t clen[num]; + uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD]; + + sa = ss->sa; + + /* grab rsn lock */ + rsn = rsn_acquire(sa); + + /* do preparation for all packets */ + for (i = 0, k = 0; i != num; i++) { + + /* calculate ESP header offset */ + l4ofs[k] = mb[i]->l2_len + mb[i]->l3_len; + + /* prepare ESP packet for processing */ + rc = inb_pkt_prepare(sa, rsn, mb[i], l4ofs[k], &icv); + if (rc >= 0) { + /* get encrypted data offset and length */ + clen[k] = inb_cpu_crypto_prepare(sa, mb[i], + l4ofs + k, rc, ivbuf[k]); + + /* fill iv, digest and aad */ + iv[k] = ivbuf[k]; + aad[k] = icv.va + sa->icv_len; + dgst[k++] = icv.va; + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* release rsn lock */ + rsn_release(sa, rsn); + + /* copy not prepared mbufs beyond good ones */ + if (k != num && k != 0) + move_bad_mbufs(mb, dr, num, num - k); + + /* convert mbufs to iovecs and do actual crypto/auth processing */ + cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k); + return k; +} + /* * process group of ESP inbound tunnel packets. 
*/ diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c index e983b25a3..b6d9cbe98 100644 --- a/lib/librte_ipsec/esp_outb.c +++ b/lib/librte_ipsec/esp_outb.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #include @@ -15,6 +15,9 @@ #include "misc.h" #include "pad.h" +typedef int32_t (*esp_outb_prepare_t)(struct rte_ipsec_sa *sa, rte_be64_t sqc, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, + union sym_op_data *icv, uint8_t sqh_len); /* * helper function to fill crypto_sym op for cipher+auth algorithms. @@ -177,6 +180,7 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, espt->pad_len = pdlen; espt->next_proto = sa->proto; + /* set icv va/pa value(s) */ icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); @@ -270,8 +274,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], static inline int32_t outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, - uint32_t l2len, uint32_t l3len, union sym_op_data *icv, - uint8_t sqh_len) + union sym_op_data *icv, uint8_t sqh_len) { uint8_t np; uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen; @@ -280,6 +283,10 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, struct rte_esp_tail *espt; char *ph, *pt; uint64_t *iv; + uint32_t l2len, l3len; + + l2len = mb->l2_len; + l3len = mb->l3_len; uhlen = l2len + l3len; plen = mb->pkt_len - uhlen; @@ -340,6 +347,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, espt->pad_len = pdlen; espt->next_proto = np; + /* set icv va/pa value(s) */ icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); @@ -381,8 +389,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], gen_iv(iv, sqc); /* try to update the packet itself */ - rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv, - sa->sqh_len); + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, + sa->sqh_len); /* success, setup crypto op */ if (rc >= 0) { outb_pkt_xprepare(sa, sqc, &icv); @@ -403,6 +411,116 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], return k; } + +static inline uint32_t +outb_cpu_crypto_prepare(const struct rte_ipsec_sa *sa, uint32_t *pofs, + uint32_t plen, void *iv) +{ + uint64_t *ivp = iv; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint32_t clen; + + switch (sa->algo_type) { + case ALGO_TYPE_AES_GCM: + gcm = iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + break; + case ALGO_TYPE_AES_CTR: + ctr = iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + } + + *pofs += sa->ctp.auth.offset; + clen = plen + sa->ctp.auth.length; + return clen; +} + +static uint16_t +cpu_outb_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num, + esp_outb_prepare_t prepare, uint32_t cofs_mask) +{ + int32_t rc; + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + uint32_t i, k, n; + uint32_t l2, l3; + union sym_op_data icv; + void *iv[num]; + void *aad[num]; + void *dgst[num]; + uint32_t dr[num]; + uint32_t l4ofs[num]; + uint32_t clen[num]; + uint64_t ivbuf[num][IPSEC_MAX_IV_QWORD]; + + sa = ss->sa; + + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + for (i = 0, k = 0; i != n; i++) { + + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + 
/* calculate ESP header offset */ + l4ofs[k] = (l2 + l3) & cofs_mask; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(ivbuf[k], sqc); + + /* try to update the packet itself */ + rc = prepare(sa, sqc, ivbuf[k], mb[i], &icv, sa->sqh_len); + + /* success, proceed with preparations */ + if (rc >= 0) { + + outb_pkt_xprepare(sa, sqc, &icv); + + /* get encrypted data offset and length */ + clen[k] = outb_cpu_crypto_prepare(sa, l4ofs + k, rc, + ivbuf[k]); + + /* fill iv, digest and aad */ + iv[k] = ivbuf[k]; + aad[k] = icv.va + sa->icv_len; + dgst[k++] = icv.va; + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + /* convert mbufs to iovecs and do actual crypto/auth processing */ + cpu_crypto_bulk(ss, sa->cofs, mb, iv, aad, dgst, l4ofs, clen, k); + return k; +} + +uint16_t +cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return cpu_outb_pkt_prepare(ss, mb, num, outb_tun_pkt_prepare, 0); +} + +uint16_t +cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return cpu_outb_pkt_prepare(ss, mb, num, outb_trs_pkt_prepare, + UINT32_MAX); +} + /* * process outbound packets for SA with ESN support, * for algorithms that require SQN.hibits to be implictly included @@ -526,7 +644,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num) { int32_t rc; - uint32_t i, k, n, l2, l3; + uint32_t i, k, n; uint64_t sqn; rte_be64_t sqc; struct rte_ipsec_sa *sa; @@ -544,15 +662,11 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, k = 0; for (i = 0; i != n; i++) { - l2 = mb[i]->l2_len; - l3 = mb[i]->l3_len; - sqc = rte_cpu_to_be_64(sqn + i); gen_iv(iv, sqc); /* try to update the packet itself */ - rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], - l2, l3, &icv, 0); + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0); k += (rc >= 0); diff --git a/lib/librte_ipsec/misc.h b/lib/librte_ipsec/misc.h index fe4641bfc..fc4b3dc69 100644 --- a/lib/librte_ipsec/misc.h +++ b/lib/librte_ipsec/misc.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #ifndef _MISC_H_ @@ -105,4 +105,122 @@ mbuf_cut_seg_ofs(struct rte_mbuf *mb, struct rte_mbuf *ms, uint32_t ofs, mb->pkt_len -= len; } +static inline int +mbuf_to_cryptovec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t data_len, + struct rte_crypto_vec vec[], uint32_t num) +{ + uint32_t i; + struct rte_mbuf *nseg; + uint32_t left; + uint32_t seglen; + + /* assuming that requested data starts in the first segment */ + RTE_ASSERT(mb->data_len > ofs); + + if (mb->nb_segs > num) + return -mb->nb_segs; + + vec[0].base = rte_pktmbuf_mtod_offset(mb, void *, ofs); + + /* whole data lies in the first segment */ + seglen = mb->data_len - ofs; + if (data_len <= seglen) { + vec[0].len = data_len; + return 1; + } + + /* data spread across segments */ + vec[0].len = seglen; + left = data_len - seglen; + for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) { + vec[i].base = rte_pktmbuf_mtod(nseg, void *); + + seglen = nseg->data_len; + if (left <= seglen) { + /* whole requested data is completed */ + vec[i].len = left; + left = 0; + break; + } + + /* use whole segment */ + vec[i].len = seglen; + left -= seglen; + } + + RTE_ASSERT(left == 0); + return i + 1; +} + +/* + * process packets using sync crypto 
engine + */ +static inline void +cpu_crypto_bulk(const struct rte_ipsec_session *ss, + union rte_crypto_sym_ofs ofs, struct rte_mbuf *mb[], + void *iv[], void *aad[], void *dgst[], uint32_t l4ofs[], + uint32_t clen[], uint32_t num) +{ + uint32_t i, j, n; + int32_t vcnt, vofs; + int32_t st[num]; + struct rte_crypto_sgl vecpkt[num]; + struct rte_crypto_vec vec[UINT8_MAX]; + struct rte_crypto_sym_vec symvec; + + const uint32_t vnum = RTE_DIM(vec); + + j = 0, n = 0; + vofs = 0; + for (i = 0; i != num; i++) { + + vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], &vec[vofs], + vnum - vofs); + + /* not enough space in vec[] to hold all segments */ + if (vcnt < 0) { + /* fill the request structure */ + symvec.sgl = &vecpkt[j]; + symvec.iv = &iv[j]; + symvec.aad = &aad[j]; + symvec.digest = &dgst[j]; + symvec.status = &st[j]; + symvec.num = i - j; + + /* flush vec array and try again */ + n += rte_cryptodev_sym_cpu_crypto_process( + ss->crypto.dev_id, ss->crypto.ses, ofs, + &symvec); + vofs = 0; + vcnt = mbuf_to_cryptovec(mb[i], l4ofs[i], clen[i], vec, + vnum); + RTE_ASSERT(vcnt > 0); + j = i; + } + + vecpkt[i].vec = &vec[vofs]; + vecpkt[i].num = vcnt; + vofs += vcnt; + } + + /* fill the request structure */ + symvec.sgl = &vecpkt[j]; + symvec.iv = &iv[j]; + symvec.aad = &aad[j]; + symvec.digest = &dgst[j]; + symvec.status = &st[j]; + symvec.num = i - j; + + n += rte_cryptodev_sym_cpu_crypto_process(ss->crypto.dev_id, + ss->crypto.ses, ofs, &symvec); + + j = num - n; + for (i = 0; j != 0 && i != num; i++) { + if (st[i] != 0) { + mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED; + j--; + } + } +} + #endif /* _MISC_H_ */ diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h index f3b1f936b..6666cf761 100644 --- a/lib/librte_ipsec/rte_ipsec.h +++ b/lib/librte_ipsec/rte_ipsec.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #ifndef _RTE_IPSEC_H_ @@ -33,10 +33,15 @@ struct rte_ipsec_session; * (see rte_ipsec_pkt_process for more details). 
*/ struct rte_ipsec_sa_pkt_func { - uint16_t (*prepare)(const struct rte_ipsec_session *ss, + union { + uint16_t (*async)(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num); + uint16_t (*sync)(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], + uint16_t num); + } prepare; uint16_t (*process)(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); @@ -62,6 +67,7 @@ struct rte_ipsec_session { union { struct { struct rte_cryptodev_sym_session *ses; + uint8_t dev_id; } crypto; struct { struct rte_security_session *ses; @@ -114,7 +120,15 @@ static inline uint16_t rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num) { - return ss->pkt_func.prepare(ss, mb, cop, num); + return ss->pkt_func.prepare.async(ss, mb, cop, num); +} + +__rte_experimental +static inline uint16_t +rte_ipsec_pkt_cpu_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return ss->pkt_func.prepare.sync(ss, mb, num); } /** diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index 6f1d92c3c..ada195cf8 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #include @@ -243,10 +243,26 @@ static void esp_inb_init(struct rte_ipsec_sa *sa) { /* these params may differ with new algorithms support */ - sa->ctp.auth.offset = 0; - sa->ctp.auth.length = sa->icv_len - sa->sqh_len; sa->ctp.cipher.offset = sizeof(struct rte_esp_hdr) + sa->iv_len; sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset; + + /* + * for AEAD and NULL algorithms we can assume that + * auth and cipher offsets would be equal. + */ + switch (sa->algo_type) { + case ALGO_TYPE_AES_GCM: + case ALGO_TYPE_NULL: + sa->ctp.auth.raw = sa->ctp.cipher.raw; + break; + default: + sa->ctp.auth.offset = 0; + sa->ctp.auth.length = sa->icv_len - sa->sqh_len; + sa->cofs.ofs.cipher.tail = sa->sqh_len; + break; + } + + sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset; } /* @@ -269,13 +285,13 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen) sa->sqn.outb.raw = 1; - /* these params may differ with new algorithms support */ - sa->ctp.auth.offset = hlen; - sa->ctp.auth.length = sizeof(struct rte_esp_hdr) + - sa->iv_len + sa->sqh_len; - algo_type = sa->algo_type; + /* + * Setup auth and cipher length and offset. + * these params may differ with new algorithms support + */ + switch (algo_type) { case ALGO_TYPE_AES_GCM: case ALGO_TYPE_AES_CTR: @@ -286,11 +302,30 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen) break; case ALGO_TYPE_AES_CBC: case ALGO_TYPE_3DES_CBC: - sa->ctp.cipher.offset = sa->hdr_len + - sizeof(struct rte_esp_hdr); + sa->ctp.cipher.offset = hlen + sizeof(struct rte_esp_hdr); sa->ctp.cipher.length = sa->iv_len; break; } + + /* + * for AEAD and NULL algorithms we can assume that + * auth and cipher offsets would be equal. 
+ */ + switch (algo_type) { + case ALGO_TYPE_AES_GCM: + case ALGO_TYPE_NULL: + sa->ctp.auth.raw = sa->ctp.cipher.raw; + break; + default: + sa->ctp.auth.offset = hlen; + sa->ctp.auth.length = sizeof(struct rte_esp_hdr) + + sa->iv_len + sa->sqh_len; + break; + } + + sa->cofs.ofs.cipher.head = sa->ctp.cipher.offset - sa->ctp.auth.offset; + sa->cofs.ofs.cipher.tail = (sa->ctp.auth.offset + sa->ctp.auth.length) - + (sa->ctp.cipher.offset + sa->ctp.cipher.length); } /* @@ -544,9 +579,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss, * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled */ -static uint16_t -pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) +uint16_t +pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) { uint32_t i, k; uint32_t dr[num]; @@ -588,21 +623,59 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa, switch (sa->type & msk) { case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): - pf->prepare = esp_inb_pkt_prepare; + pf->prepare.async = esp_inb_pkt_prepare; pf->process = esp_inb_tun_pkt_process; break; case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): - pf->prepare = esp_inb_pkt_prepare; + pf->prepare.async = esp_inb_pkt_prepare; pf->process = esp_inb_trs_pkt_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): - pf->prepare = esp_outb_tun_prepare; + pf->prepare.async = esp_outb_tun_prepare; pf->process = (sa->sqh_len != 0) ? esp_outb_sqh_process : pkt_flag_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): - pf->prepare = esp_outb_trs_prepare; + pf->prepare.async = esp_outb_trs_prepare; + pf->process = (sa->sqh_len != 0) ? + esp_outb_sqh_process : pkt_flag_process; + break; + default: + rc = -ENOTSUP; + } + + return rc; +} + +static int +cpu_crypto_pkt_func_select(const struct rte_ipsec_sa *sa, + struct rte_ipsec_sa_pkt_func *pf) +{ + int32_t rc; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + rc = 0; + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->prepare.sync = cpu_inb_pkt_prepare; + pf->process = esp_inb_tun_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): + pf->prepare.sync = cpu_inb_pkt_prepare; + pf->process = esp_inb_trs_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->prepare.sync = cpu_outb_tun_pkt_prepare; + pf->process = (sa->sqh_len != 0) ? + esp_outb_sqh_process : pkt_flag_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): + pf->prepare.sync = cpu_outb_trs_pkt_prepare; pf->process = (sa->sqh_len != 0) ? 
esp_outb_sqh_process : pkt_flag_process; break; @@ -660,7 +733,7 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, int32_t rc; rc = 0; - pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 }; + pf[0] = (struct rte_ipsec_sa_pkt_func) { {NULL}, NULL }; switch (ss->type) { case RTE_SECURITY_ACTION_TYPE_NONE: @@ -677,9 +750,12 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, pf->process = inline_proto_outb_pkt_process; break; case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: - pf->prepare = lksd_proto_prepare; + pf->prepare.async = lksd_proto_prepare; pf->process = pkt_flag_process; break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + rc = cpu_crypto_pkt_func_select(sa, pf); + break; default: rc = -ENOTSUP; } diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h index 51e69ad05..d22451b38 100644 --- a/lib/librte_ipsec/sa.h +++ b/lib/librte_ipsec/sa.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #ifndef _SA_H_ @@ -88,6 +88,8 @@ struct rte_ipsec_sa { union sym_op_ofslen cipher; union sym_op_ofslen auth; } ctp; + /* cpu-crypto offsets */ + union rte_crypto_sym_ofs cofs; /* tx_offload template for tunnel mbuf */ struct { uint64_t msk; @@ -156,6 +158,10 @@ uint16_t inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +cpu_inb_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + /* outbound processing */ uint16_t @@ -170,6 +176,10 @@ uint16_t esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + uint16_t inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); @@ -182,4 +192,11 @@ uint16_t inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +cpu_outb_tun_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); +uint16_t +cpu_outb_trs_pkt_prepare(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + #endif /* _SA_H_ */ diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c index 82c765a33..3d51ac498 100644 --- a/lib/librte_ipsec/ses.c +++ b/lib/librte_ipsec/ses.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Intel Corporation + * Copyright(c) 2018-2020 Intel Corporation */ #include @@ -11,7 +11,8 @@ session_check(struct rte_ipsec_session *ss) if (ss == NULL || ss->sa == NULL) return -EINVAL; - if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) { + if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE || + ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { if (ss->crypto.ses == NULL) return -EINVAL; } else { From patchwork Tue Jan 28 14:22:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65239 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C6842A04B3; Tue, 28 Jan 2020 15:24:01 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B67471D14C; Tue, 28 Jan 2020 15:22:52 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com 
[192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 082CB1D149 for ; Tue, 28 Jan 2020 15:22:47 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:44 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092867" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:39 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:18 +0100 Message-Id: <20200128142220.16644-7-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 6/8] examples/ipsec-secgw: cpu crypto support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for CPU-accelerated crypto. A 'cpu-crypto' SA type has been introduced in the configuration, allowing the use of this acceleration. Legacy mode is currently not supported. Signed-off-by: Konstantin Ananyev Signed-off-by: Marcin Smoczynski Acked-by: Fan Zhang --- examples/ipsec-secgw/ipsec.c | 25 ++++- examples/ipsec-secgw/ipsec_process.c | 136 +++++++++++++++++---------- examples/ipsec-secgw/sa.c | 30 ++++-- 3 files changed, 131 insertions(+), 60 deletions(-) diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index d4b57121a..6e8120702 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ #include #include @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -86,7 +87,8 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa, ipsec_ctx->tbl[cdev_id_qp].id, ipsec_ctx->tbl[cdev_id_qp].qp); - if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE) { + if (ips->type != RTE_SECURITY_ACTION_TYPE_NONE && + ips->type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { struct rte_security_session_conf sess_conf = { .action_type = ips->type, .protocol = RTE_SECURITY_PROTOCOL_IPSEC, @@ -126,6 +128,18 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa, return -1; } } else { + if (ips->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { + struct rte_cryptodev_info info; + uint16_t cdev_id; + + cdev_id = ipsec_ctx->tbl[cdev_id_qp].id; + rte_cryptodev_info_get(cdev_id, &info); + if (!(info.feature_flags & + RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO)) + return -ENOTSUP; + + ips->crypto.dev_id = cdev_id; + } ips->crypto.ses = rte_cryptodev_sym_session_create( ipsec_ctx->session_pool); rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id, @@ -476,6 +490,13 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx, rte_security_attach_session(&priv->cop, ips->security.ses); break; + + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
RTE_LOG(ERR, IPSEC, "CPU crypto is not supported by the" + " legacy mode."); + rte_pktmbuf_free(pkts[i]); + continue; + case RTE_SECURITY_ACTION_TYPE_NONE: priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c index 2eb5c8b34..bb2f2b82d 100644 --- a/examples/ipsec-secgw/ipsec_process.c +++ b/examples/ipsec-secgw/ipsec_process.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ #include #include @@ -92,7 +92,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx, int32_t rc; /* setup crypto section */ - if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) { + if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE || + ss->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { RTE_ASSERT(ss->crypto.ses == NULL); rc = create_lookaside_session(ctx, sa, ss); if (rc != 0) @@ -215,6 +216,62 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa, return k; } +/* + * helper routine for inline and cpu(synchronous) processing + * this is just to satisfy inbound_sa_check() and get_hop_for_offload_pkt(). + * Should be removed in future. + */ +static inline void +prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt) +{ + uint32_t j; + struct ipsec_mbuf_metadata *priv; + + for (j = 0; j != cnt; j++) { + priv = get_priv(mb[j]); + priv->sa = sa; + } +} + +/* + * finish processing of packets successfully decrypted by an inline processor + */ +static uint32_t +ipsec_process_inline_group(struct rte_ipsec_session *ips, void *sa, + struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt) +{ + uint64_t satp; + uint32_t k; + + /* get SA type */ + satp = rte_ipsec_sa_type(ips->sa); + prep_process_group(sa, mb, cnt); + + k = rte_ipsec_pkt_process(ips, mb, cnt); + copy_to_trf(trf, satp, mb, k); + return k; +} + +/* + * process packets synchronously + */ +static uint32_t +ipsec_process_cpu_group(struct rte_ipsec_session *ips, void *sa, + struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t cnt) +{ + uint64_t satp; + uint32_t k; + + /* get SA type */ + satp = rte_ipsec_sa_type(ips->sa); + prep_process_group(sa, mb, cnt); + + k = rte_ipsec_pkt_cpu_prepare(ips, mb, cnt); + k = rte_ipsec_pkt_process(ips, mb, k); + copy_to_trf(trf, satp, mb, k); + return k; +} + /* * Process ipsec packets. * If packet belong to SA that is subject of inline-crypto, @@ -225,10 +282,8 @@ ipsec_prepare_crypto_group(struct ipsec_ctx *ctx, struct ipsec_sa *sa, void ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) { - uint64_t satp; - uint32_t i, j, k, n; + uint32_t i, k, n; struct ipsec_sa *sa; - struct ipsec_mbuf_metadata *priv; struct rte_ipsec_group *pg; struct rte_ipsec_session *ips; struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)]; @@ -236,10 +291,17 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) n = sa_group(trf->ipsec.saptr, trf->ipsec.pkts, grp, trf->ipsec.num); for (i = 0; i != n; i++) { + pg = grp + i; sa = ipsec_mask_saptr(pg->id.ptr); - ips = ipsec_get_primary_session(sa); + /* fallback to cryptodev with RX packets which inline + * processor was unable to process + */ + if (sa != NULL) + ips = (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) ? 
+ ipsec_get_fallback_session(sa) : + ipsec_get_primary_session(sa); /* no valid HW session for that SA, try to create one */ if (sa == NULL || (ips->crypto.ses == NULL && @@ -247,50 +309,26 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) k = 0; /* process packets inline */ - else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || - ips->type == - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { - - /* get SA type */ - satp = rte_ipsec_sa_type(ips->sa); - - /* - * This is just to satisfy inbound_sa_check() - * and get_hop_for_offload_pkt(). - * Should be removed in future. - */ - for (j = 0; j != pg->cnt; j++) { - priv = get_priv(pg->m[j]); - priv->sa = sa; + else { + switch (ips->type) { + /* enqueue packets to crypto dev */ + case RTE_SECURITY_ACTION_TYPE_NONE: + case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: + k = ipsec_prepare_crypto_group(ctx, sa, ips, + pg->m, pg->cnt); + break; + case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO: + case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: + k = ipsec_process_inline_group(ips, sa, + trf, pg->m, pg->cnt); + break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + k = ipsec_process_cpu_group(ips, sa, + trf, pg->m, pg->cnt); + break; + default: + k = 0; } - - /* fallback to cryptodev with RX packets which inline - * processor was unable to process - */ - if (pg->id.val & IPSEC_SA_OFFLOAD_FALLBACK_FLAG) { - /* offload packets to cryptodev */ - struct rte_ipsec_session *fallback; - - fallback = ipsec_get_fallback_session(sa); - if (fallback->crypto.ses == NULL && - fill_ipsec_session(fallback, ctx, sa) - != 0) - k = 0; - else - k = ipsec_prepare_crypto_group(ctx, sa, - fallback, pg->m, pg->cnt); - } else { - /* finish processing of packets successfully - * decrypted by an inline processor - */ - k = rte_ipsec_pkt_process(ips, pg->m, pg->cnt); - copy_to_trf(trf, satp, pg->m, k); - - } - /* enqueue packets to crypto dev */ - } else { - k = ipsec_prepare_crypto_group(ctx, sa, ips, pg->m, - pg->cnt); } /* drop packets that cannot be enqueued/processed */ diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index c75a5a15f..e9e8d624c 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2016-2017 Intel Corporation + * Copyright(c) 2016-2020 Intel Corporation */ /* @@ -586,6 +586,8 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL; else if (strcmp(tokens[ti], "no-offload") == 0) ips->type = RTE_SECURITY_ACTION_TYPE_NONE; + else if (strcmp(tokens[ti], "cpu-crypto") == 0) + ips->type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; else { APP_CHECK(0, status, "Invalid input \"%s\"", tokens[ti]); @@ -679,10 +681,12 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (status->status < 0) return; - if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0)) + if ((ips->type != RTE_SECURITY_ACTION_TYPE_NONE && ips->type != + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && (portid_p == 0)) printf("Missing portid option, falling back to non-offload\n"); - if (!type_p || !portid_p) { + if (!type_p || (!portid_p && ips->type != + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) { ips->type = RTE_SECURITY_ACTION_TYPE_NONE; rule->portid = -1; } @@ -768,15 +772,25 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: printf("lookaside-protocol-offload "); break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + printf("cpu-crypto-accelerated"); + break; } fallback_ips 
= &sa->sessions[IPSEC_SESSION_FALLBACK]; if (fallback_ips != NULL && sa->fallback_sessions > 0) { printf("inline fallback: "); - if (fallback_ips->type == RTE_SECURITY_ACTION_TYPE_NONE) + switch (fallback_ips->type) { + case RTE_SECURITY_ACTION_TYPE_NONE: printf("lookaside-none"); - else + break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + printf("cpu-crypto-accelerated"); + break; + default: printf("invalid"); + break; + } } printf("\n"); } @@ -975,7 +989,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[], return -EINVAL; } - switch (WITHOUT_TRANSPORT_VERSION(sa->flags)) { case IP4_TUNNEL: sa->src.ip.ip4 = rte_cpu_to_be_32(sa->src.ip.ip4); @@ -1026,7 +1039,6 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[], return -EINVAL; } } - print_one_sa_rule(sa, inbound); } else { switch (sa->cipher_algo) { case RTE_CRYPTO_CIPHER_NULL: @@ -1091,9 +1103,9 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[], sa_ctx->xf[idx].a.next = &sa_ctx->xf[idx].b; sa_ctx->xf[idx].b.next = NULL; sa->xforms = &sa_ctx->xf[idx].a; - - print_one_sa_rule(sa, inbound); } + + print_one_sa_rule(sa, inbound); } return 0; From patchwork Tue Jan 28 14:22:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65238 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5CC4DA04B3; Tue, 28 Jan 2020 15:23:50 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 399651C440; Tue, 28 Jan 2020 15:22:49 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 85C8D1C440 for ; Tue, 28 Jan 2020 15:22:47 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:45 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092873" Received: from msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:42 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:19 +0100 Message-Id: <20200128142220.16644-8-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 7/8] examples/ipsec-secgw: cpu crypto testing X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Enable cpu-crypto mode testing by adding a dedicated environment variable, CRYPTO_PRIM_TYPE. Setting it to 'type cpu-crypto' allows running the test scenarios with CPU crypto acceleration.
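For example, a hypothetical invocation could look as follows (the 'trs_aescbc_sha1' mode name is illustrative; any of the *_defs.sh configurations in the test directory can be used):

  CRYPTO_PRIM_TYPE='type cpu-crypto' ./linux_test4.sh trs_aescbc_sha1

The select_mode() helper added below appends the value to SGW_CFG_XPRM, which each 'sa' configuration line now references, selecting synchronous CPU crypto processing for the corresponding SAs.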
Signed-off-by: Konstantin Ananyev Signed-off-by: Marcin Smoczynski Acked-by: Fan Zhang --- examples/ipsec-secgw/test/common_defs.sh | 21 +++++++++++++++++++ examples/ipsec-secgw/test/linux_test4.sh | 11 +--------- examples/ipsec-secgw/test/linux_test6.sh | 11 +--------- .../test/trs_3descbc_sha1_common_defs.sh | 8 +++---- .../test/trs_aescbc_sha1_common_defs.sh | 8 +++---- .../test/trs_aesctr_sha1_common_defs.sh | 8 +++---- .../test/tun_3descbc_sha1_common_defs.sh | 8 +++---- .../test/tun_aescbc_sha1_common_defs.sh | 8 +++---- .../test/tun_aesctr_sha1_common_defs.sh | 8 +++---- 9 files changed, 47 insertions(+), 44 deletions(-) diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh index 4aac4981a..6b6ae06f3 100644 --- a/examples/ipsec-secgw/test/common_defs.sh +++ b/examples/ipsec-secgw/test/common_defs.sh @@ -42,6 +42,27 @@ DPDK_BUILD=${RTE_TARGET:-x86_64-native-linux-gcc} DEF_MTU_LEN=1400 DEF_PING_LEN=1200 +#update operation mode based on env var values +select_mode() +{ + # select sync/async mode + if [[ -n "${CRYPTO_PRIM_TYPE}" && -n "${SGW_CMD_XPRM}" ]]; then + echo "${CRYPTO_PRIM_TYPE} is enabled" + SGW_CFG_XPRM="${SGW_CFG_XPRM} ${CRYPTO_PRIM_TYPE}" + fi + + #make linux generate fragmented packets + if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then + echo "multi-segment test is enabled" + SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}" + PING_LEN=5000 + MTU_LEN=1500 + else + PING_LEN=${DEF_PING_LEN} + MTU_LEN=${DEF_MTU_LEN} + fi +} + #setup mtu on local iface set_local_mtu() { diff --git a/examples/ipsec-secgw/test/linux_test4.sh b/examples/ipsec-secgw/test/linux_test4.sh index 760451000..fb8ae1023 100644 --- a/examples/ipsec-secgw/test/linux_test4.sh +++ b/examples/ipsec-secgw/test/linux_test4.sh @@ -45,16 +45,7 @@ MODE=$1 . ${DIR}/common_defs.sh . ${DIR}/${MODE}_defs.sh -#make linux to generate fragmented packets -if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then - echo "multi-segment test is enabled" - SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}" - PING_LEN=5000 - MTU_LEN=1500 -else - PING_LEN=${DEF_PING_LEN} - MTU_LEN=${DEF_MTU_LEN} -fi +select_mode config_secgw diff --git a/examples/ipsec-secgw/test/linux_test6.sh b/examples/ipsec-secgw/test/linux_test6.sh index 479f29be3..dbcca7936 100644 --- a/examples/ipsec-secgw/test/linux_test6.sh +++ b/examples/ipsec-secgw/test/linux_test6.sh @@ -46,16 +46,7 @@ MODE=$1 . ${DIR}/common_defs.sh .
${DIR}/${MODE}_defs.sh -#make linux to generate fragmented packets -if [[ -n "${MULTI_SEG_TEST}" && -n "${SGW_CMD_XPRM}" ]]; then - echo "multi-segment test is enabled" - SGW_CMD_XPRM="${SGW_CMD_XPRM} ${MULTI_SEG_TEST}" - PING_LEN=5000 - MTU_LEN=1500 -else - PING_LEN=${DEF_PING_LEN} - MTU_LEN=${DEF_MTU_LEN} -fi +select_mode config_secgw diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh index 3c5c18afd..62118bb3f 100644 --- a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh @@ -33,14 +33,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo 3des-cbc \ @@ -48,7 +48,7 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo 3des-cbc \ @@ -56,7 +56,7 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh index 9dbdd1765..7ddeb2b5a 100644 --- a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh @@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh index 6aba680f9..f0178355a 100644 --- a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh @@ -32,27 +32,27 @@ sa in 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ 
auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh index 7c3226f84..d8869fad0 100644 --- a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh @@ -33,14 +33,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo 3des-cbc \ @@ -48,14 +48,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} +mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM} sa out 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} +mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh index bdf5938a0..2616926b2 100644 --- a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh @@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src 
${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} +mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM} sa out 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} +mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh index 06f2ef0c6..06b561fd7 100644 --- a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh @@ -32,26 +32,26 @@ sa in 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} +mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM} sa out 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} +mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 From patchwork Tue Jan 28 14:22:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marcin Smoczynski X-Patchwork-Id: 65240 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4DAF4A04B3; Tue, 28 Jan 2020 15:24:13 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 29DCC1D152; Tue, 28 Jan 2020 15:22:54 +0100 (CET) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by dpdk.org (Postfix) with ESMTP id 3A50A1D149 for ; Tue, 28 Jan 2020 15:22:49 +0100 (CET) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga107.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 28 Jan 2020 06:22:48 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.70,374,1574150400"; d="scan'208";a="309092880" Received: from 
msmoczyx-mobl.ger.corp.intel.com ([10.103.102.190]) by orsmga001.jf.intel.com with ESMTP; 28 Jan 2020 06:22:45 -0800 From: Marcin Smoczynski To: akhil.goyal@nxp.com, konstantin.ananyev@intel.com, roy.fan.zhang@intel.com, declan.doherty@intel.com, radu.nicolau@intel.com, pablo.de.lara.guarch@intel.com Cc: dev@dpdk.org, Marcin Smoczynski Date: Tue, 28 Jan 2020 15:22:20 +0100 Message-Id: <20200128142220.16644-9-marcinx.smoczynski@intel.com> X-Mailer: git-send-email 2.21.0.windows.1 In-Reply-To: <20200128142220.16644-1-marcinx.smoczynski@intel.com> References: <20200128031642.15256-1-marcinx.smoczynski@intel.com> <20200128142220.16644-1-marcinx.smoczynski@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v5 8/8] doc: add cpu crypto related documentation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Update documentation with a description of cpu crypto in the cryptodev, ipsec and security libraries. Add release notes for 20.02. Signed-off-by: Marcin Smoczynski --- doc/guides/cryptodevs/aesni_gcm.rst | 7 +++++- doc/guides/prog_guide/cryptodev_lib.rst | 33 ++++++++++++++++++++++++- doc/guides/prog_guide/ipsec_lib.rst | 10 +++++++- doc/guides/prog_guide/rte_security.rst | 15 ++++++++--- doc/guides/rel_notes/release_20_02.rst | 8 ++++++ 5 files changed, 66 insertions(+), 7 deletions(-) diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst index 151aa3060..a25b63109 100644 --- a/doc/guides/cryptodevs/aesni_gcm.rst +++ b/doc/guides/cryptodevs/aesni_gcm.rst @@ -1,5 +1,5 @@ .. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2016-2019 Intel Corporation. + Copyright(c) 2016-2020 Intel Corporation. AES-NI GCM Crypto Poll Mode Driver ================================== @@ -9,6 +9,11 @@ The AES-NI GCM PMD (**librte_pmd_aesni_gcm**) provides poll mode crypto driver support for utilizing Intel multi buffer library (see AES-NI Multi-buffer PMD documentation to learn more about it, including installation). +The AES-NI GCM PMD supports a synchronous mode of operation with the +``rte_cryptodev_sym_cpu_crypto_process`` function call for both AES-GCM and +GMAC; however, GMAC support is limited to one segment per operation. Please +refer to the ``rte_crypto`` programmer's guide for more detail. + Features -------- diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst index ac1643774..b91f7c8b7 100644 --- a/doc/guides/prog_guide/cryptodev_lib.rst +++ b/doc/guides/prog_guide/cryptodev_lib.rst @@ -1,5 +1,5 @@ .. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2016-2017 Intel Corporation. + Copyright(c) 2016-2020 Intel Corporation. Cryptography Device Library =========================== @@ -600,6 +600,37 @@ chain. }; }; +Synchronous mode +---------------- + +Some cryptodevs support a synchronous mode alongside the standard asynchronous +mode. In that case operations are performed directly when calling the +``rte_cryptodev_sym_cpu_crypto_process`` function, instead of enqueuing an +operation and dequeuing it later. This mode of operation gives cryptodevs which +utilize CPU cryptographic acceleration a significant performance boost compared +to the standard asynchronous approach. Cryptodevs supporting synchronous +mode have the ``RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO`` feature flag set.
+ +To perform a synchronous operation, a call to +``rte_cryptodev_sym_cpu_crypto_process`` has to be made with a vectorized +operation descriptor (``struct rte_crypto_sym_vec``) containing: + +- ``num`` - number of operations to perform, +- pointer to an array of size ``num`` containing scatter-gather list + descriptors of the operations to perform (``struct rte_crypto_sgl``). Each + instance of ``struct rte_crypto_sgl`` consists of a number of segments and a + pointer to an array of segment descriptors ``struct rte_crypto_vec``; +- pointers to arrays of size ``num`` containing IV, AAD and digest information, +- pointer to an array of size ``num`` where status information will be stored + for each operation. + +The function returns the number of successfully completed operations and sets +an appropriate status for each operation in the status array provided as +a call argument. A status different from zero must be treated as an error. + +For more details, e.g. how to convert an mbuf to an SGL, please refer to the +example usage in the IPsec library implementation. + Sample code ----------- diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst index 1ce0db453..0a860eb47 100644 --- a/doc/guides/prog_guide/ipsec_lib.rst +++ b/doc/guides/prog_guide/ipsec_lib.rst @@ -1,5 +1,5 @@ .. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2018 Intel Corporation. + Copyright(c) 2018-2020 Intel Corporation. IPsec Packet Processing Library =============================== @@ -81,6 +81,14 @@ In that mode the library functions perform - verify that crypto device operations (encryption, ICV generation) were completed successfully +RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In that mode the library functions perform the same operations as in +``RTE_SECURITY_ACTION_TYPE_NONE``. The only difference is that crypto +operations are performed using the CPU crypto synchronous API. + + RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst index f77fb89dc..a911c676b 100644 --- a/doc/guides/prog_guide/rte_security.rst +++ b/doc/guides/prog_guide/rte_security.rst @@ -511,13 +511,20 @@ Offload. /**< No security actions */ RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, /**< Crypto processing for security protocol is processed inline - * during transmission */ + * during transmission + */ RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL, /**< All security protocol processing is performed inline during - * transmission */ - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL + * transmission + */ + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, /**< All security protocol processing including crypto is performed - * on a lookaside accelerator */ + * on a lookaside accelerator + */ + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO + /**< Crypto processing for security protocol is processed by CPU + * synchronously + */ }; The ``rte_security_session_protocol`` is defined as diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst index 50e2c1484..b6cf0c4d1 100644 --- a/doc/guides/rel_notes/release_20_02.rst +++ b/doc/guides/rel_notes/release_20_02.rst @@ -143,6 +143,14 @@ New Features Added a new OCTEON TX2 rawdev PMD for End Point mode of operation. See the :doc:`../rawdevs/octeontx2_ep` for more details on this new PMD.
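To make the ``rte_crypto_sym_vec`` layout described in the cryptodev section above more concrete, here is a minimal single-operation, single-segment sketch of a call to ``rte_cryptodev_sym_cpu_crypto_process``. The helper name, the zeroed auth/cipher offsets and the caller-prepared buffers are illustrative assumptions; a real application would typically build the SGL from an mbuf, as the IPsec library implementation does:

#include <rte_cryptodev.h>
#include <rte_crypto_sym.h>

/* Illustrative sketch: process one operation synchronously.
 * sess, buf, iv, aad and digest are assumed to be prepared by the
 * caller according to the session's algorithm parameters.
 */
static int
cpu_crypto_one_op(uint8_t dev_id, struct rte_cryptodev_sym_session *sess,
		void *buf, uint32_t buf_len, void *iv, void *aad, void *digest)
{
	struct rte_crypto_vec seg = {
		.base = buf,
		.iova = NULL,	/* CPU crypto PMDs work on virtual addresses */
		.len = buf_len,
	};
	struct rte_crypto_sgl sgl = { .vec = &seg, .num = 1 };
	int32_t status = 0;
	union rte_crypto_sym_ofs ofs = { .raw = 0 };	/* no head/tail offsets */
	struct rte_crypto_sym_vec vec = {
		.sgl = &sgl,
		.iv = &iv,
		.aad = &aad,
		.digest = &digest,
		.status = &status,
		.num = 1,
	};

	if (rte_cryptodev_sym_cpu_crypto_process(dev_id, sess, ofs, &vec) != 1)
		return status;	/* errno-style value filled in by the API */
	return 0;
}

Processing a burst of ``n`` operations only requires growing the per-operation arrays (``iv``, ``aad``, ``digest``, ``status`` and ``sgl``); the status array then indicates which individual operations failed.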
+* **Added synchronous Crypto burst API.** + + A new API is introduced in the crypto library to handle synchronous + cryptographic operations, allowing performance gains for cryptodevs which + use CPU-based acceleration, such as Intel AES-NI. An example implementation + for the aesni_gcm cryptodev is provided, including unit tests. The IPsec + example application and the ipsec library itself were changed to make use + of this new feature. Removed Items -------------