From patchwork Fri Sep 6 13:13:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58863 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D51C61F37D; Fri, 6 Sep 2019 15:13:39 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 65BE81F2BA for ; Fri, 6 Sep 2019 15:13:36 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:35 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,473,1559545200"; d="scan'208";a="213140701" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2019 06:13:34 -0700 From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Fri, 6 Sep 2019 14:13:21 +0100 Message-Id: <20190906131330.40185-2-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190906131330.40185-1-roy.fan.zhang@intel.com> References: <20190903154046.55992-1-roy.fan.zhang@intel.com> <20190906131330.40185-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH 01/10] security: introduce CPU Crypto action type and API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch introduces a new RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO action type to the security library. The type represents performing crypto operations with CPU cycles.
The patch also includes a new API to process crypto operations in bulk, and the corresponding function pointers for PMDs. Signed-off-by: Fan Zhang --- lib/librte_security/rte_security.c | 16 +++++++++ lib/librte_security/rte_security.h | 51 +++++++++++++++++++++++++++- lib/librte_security/rte_security_driver.h | 19 +++++++++++ lib/librte_security/rte_security_version.map | 1 + 4 files changed, 86 insertions(+), 1 deletion(-) diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c index bc81ce15d..0f85c1b59 100644 --- a/lib/librte_security/rte_security.c +++ b/lib/librte_security/rte_security.c @@ -141,3 +141,19 @@ rte_security_capability_get(struct rte_security_ctx *instance, return NULL; } + +void +rte_security_process_cpu_crypto_bulk(struct rte_security_ctx *instance, + struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num) +{ + uint32_t i; + + for (i = 0; i < num; i++) + status[i] = -1; + + RTE_FUNC_PTR_OR_RET(*instance->ops->process_cpu_crypto_bulk); + instance->ops->process_cpu_crypto_bulk(sess, buf, iv, + aad, digest, status, num); +} diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h index 96806e3a2..5a0f8901b 100644 --- a/lib/librte_security/rte_security.h +++ b/lib/librte_security/rte_security.h @@ -18,6 +18,7 @@ extern "C" { #endif #include +#include #include #include @@ -272,6 +273,20 @@ struct rte_security_pdcp_xform { uint32_t hfn_threshold; }; +struct rte_security_cpu_crypto_xform { + /** For a cipher/authentication crypto operation the authentication may + * cover more content than the cipher. E.g., for IPSec ESP encryption + * with AES-CBC and SHA1-HMAC, the encryption happens after the ESP + * header but the whole packet (apart from the MAC header) is authenticated. + * The cipher_offset field is used to derive the cipher data pointer + * from the buffer to be processed.
+ * + * NOTE: this parameter shall be ignored by AEAD algorithms, since they + * use the same offset for cipher and authentication. + */ + int32_t cipher_offset; +}; + /** * Security session action type. */ @@ -286,10 +301,14 @@ enum rte_security_session_action_type { /**< All security protocol processing is performed inline during * transmission */ - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, /**< All security protocol processing including crypto is performed * on a lookaside accelerator */ + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO + /**< Crypto processing for the security protocol is performed by the + * CPU synchronously + */ }; /** Security session protocol definition */ @@ -315,6 +334,7 @@ struct rte_security_session_conf { struct rte_security_ipsec_xform ipsec; struct rte_security_macsec_xform macsec; struct rte_security_pdcp_xform pdcp; + struct rte_security_cpu_crypto_xform cpucrypto; }; /**< Configuration parameters for security session */ struct rte_crypto_sym_xform *crypto_xform; @@ -639,6 +659,35 @@ const struct rte_security_capability * rte_security_capability_get(struct rte_security_ctx *instance, struct rte_security_capability_idx *idx); +/** + * Security vector structure, containing a pointer to the vector array and + * the length of the array + */ +struct rte_security_vec { + struct iovec *vec; + uint32_t num; +}; + +/** + * Process a bulk crypto workload with the CPU + * + * @param instance security instance.
+ * @param sess security session + * @param buf array of buffer SGL vectors + * @param iv array of IV pointers + * @param aad array of AAD pointers + * @param digest array of digest pointers + * @param status array of status for the function to return + * @param num number of elements in each array + * + */ +__rte_experimental +void +rte_security_process_cpu_crypto_bulk(struct rte_security_ctx *instance, + struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + #ifdef __cplusplus } #endif diff --git a/lib/librte_security/rte_security_driver.h b/lib/librte_security/rte_security_driver.h index 1b561f852..70fcb0c26 100644 --- a/lib/librte_security/rte_security_driver.h +++ b/lib/librte_security/rte_security_driver.h @@ -132,6 +132,23 @@ typedef int (*security_get_userdata_t)(void *device, typedef const struct rte_security_capability *(*security_capabilities_get_t)( void *device); +/** + * Process security operations in bulk using CPU accelerated method. + * + * @param sess Security session structure. + * @param buf Buffer to the vectors to be processed. + * @param iv IV pointers. + * @param aad AAD pointers. + * @param digest Digest pointers. + * @param status Array of status value. + * @param num Number of elements in each array. + */ + +typedef void (*security_process_cpu_crypto_bulk_t)( + struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + /** Security operations function pointer table */ struct rte_security_ops { security_session_create_t session_create; @@ -150,6 +167,8 @@ struct rte_security_ops { /**< Get userdata associated with session which processed the packet. */ security_capabilities_get_t capabilities_get; /**< Get security capabilities. */ + security_process_cpu_crypto_bulk_t process_cpu_crypto_bulk; + /**< Process data in bulk. 
*/ }; #ifdef __cplusplus diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map index 53267bf3c..2132e7a00 100644 --- a/lib/librte_security/rte_security_version.map +++ b/lib/librte_security/rte_security_version.map @@ -18,4 +18,5 @@ EXPERIMENTAL { rte_security_get_userdata; rte_security_session_stats_get; rte_security_session_update; + rte_security_process_cpu_crypto_bulk; }; From patchwork Fri Sep 6 13:13:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58864 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BC8801F388; Fri, 6 Sep 2019 15:13:41 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 6E4821F379 for ; Fri, 6 Sep 2019 15:13:38 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:37 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,473,1559545200"; d="scan'208";a="213140721" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2019 06:13:36 -0700 From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Fri, 6 Sep 2019 14:13:22 +0100 Message-Id: <20190906131330.40185-3-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190906131330.40185-1-roy.fan.zhang@intel.com> References: <20190903154046.55992-1-roy.fan.zhang@intel.com> <20190906131330.40185-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH 02/10] crypto/aesni_gcm: 
add rte_security handler X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds rte_security support to the AESNI-GCM PMD. The PMD now initializes a security context instance, creates/deletes PMD-specific security sessions, and processes crypto workloads in synchronous mode, with scatter-gather list buffers supported. Signed-off-by: Fan Zhang --- drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 91 ++++++++++++++++++++++- drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 95 ++++++++++++++++++++++++ drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 23 ++++++ 3 files changed, 208 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c index 1006a5c4d..0a346eddd 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -174,6 +175,56 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op) return sess; } +static __rte_always_inline int +process_gcm_security_sgl_buf(struct aesni_gcm_security_session *sess, + struct rte_security_vec *buf, uint8_t *iv, + uint8_t *aad, uint8_t *digest) +{ + struct aesni_gcm_session *session = &sess->sess; + uint8_t *tag; + uint32_t i; + + sess->init(&session->gdata_key, &sess->gdata_ctx, iv, aad, + (uint64_t)session->aad_length); + + for (i = 0; i < buf->num; i++) { + struct iovec *vec = &buf->vec[i]; + + sess->update(&session->gdata_key, &sess->gdata_ctx, + vec->iov_base, vec->iov_base, vec->iov_len); + } + + switch (session->op) { + case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION: + if (session->req_digest_length != session->gen_digest_length) + tag = sess->temp_digest; + else + tag = digest; + + sess->finalize(&session->gdata_key, &sess->gdata_ctx, tag, +
session->gen_digest_length); + + if (session->req_digest_length != session->gen_digest_length) + memcpy(digest, sess->temp_digest, + session->req_digest_length); + break; + + case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION: + tag = sess->temp_digest; + + sess->finalize(&session->gdata_key, &sess->gdata_ctx, tag, + session->gen_digest_length); + + if (memcmp(tag, digest, session->req_digest_length) != 0) + return -1; + break; + default: + return -1; + } + + return 0; +} + /** * Process a crypto operation, calling * the GCM API from the multi buffer library. @@ -488,8 +539,10 @@ aesni_gcm_create(const char *name, { struct rte_cryptodev *dev; struct aesni_gcm_private *internals; + struct rte_security_ctx *sec_ctx; enum aesni_gcm_vector_mode vector_mode; MB_MGR *mb_mgr; + char sec_name[RTE_DEV_NAME_MAX_LEN]; /* Check CPU for support for AES instruction set */ if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) { @@ -524,7 +577,8 @@ aesni_gcm_create(const char *name, RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_CPU_AESNI | RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | + RTE_CRYPTODEV_FF_SECURITY; mb_mgr = alloc_mb_mgr(0); if (mb_mgr == NULL) @@ -587,6 +641,21 @@ aesni_gcm_create(const char *name, internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs; + /* setup security operations */ + snprintf(sec_name, sizeof(sec_name) - 1, "aes_gcm_sec_%u", + dev->driver_id); + sec_ctx = rte_zmalloc_socket(sec_name, + sizeof(struct rte_security_ctx), + RTE_CACHE_LINE_SIZE, init_params->socket_id); + if (sec_ctx == NULL) { + AESNI_GCM_LOG(ERR, "memory allocation failed\n"); + goto error_exit; + } + + sec_ctx->device = (void *)dev; + sec_ctx->ops = rte_aesni_gcm_pmd_security_ops; + dev->security_ctx = sec_ctx; + #if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0) AESNI_GCM_LOG(INFO, "IPSec Multi-buffer library version used: %s\n", imb_get_version_str()); @@ -641,6 +710,8 @@ aesni_gcm_remove(struct rte_vdev_device 
*vdev) if (cryptodev == NULL) return -ENODEV; + rte_free(cryptodev->security_ctx); + internals = cryptodev->data->dev_private; free_mb_mgr(internals->mb_mgr); @@ -648,6 +719,24 @@ aesni_gcm_remove(struct rte_vdev_device *vdev) return rte_cryptodev_pmd_destroy(cryptodev); } +void +aesni_gcm_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num) +{ + struct aesni_gcm_security_session *session = + get_sec_session_private_data(sess); + uint32_t i; + + if (unlikely(!session)) + return; + + for (i = 0; i < num; i++) + status[i] = process_gcm_security_sgl_buf(session, &buf[i], + (uint8_t *)iv[i], (uint8_t *)aad[i], + (uint8_t *)digest[i]); +} + static struct rte_vdev_driver aesni_gcm_pmd_drv = { .probe = aesni_gcm_probe, .remove = aesni_gcm_remove diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c index 2f66c7c58..cc71dbd60 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c @@ -7,6 +7,7 @@ #include #include #include +#include #include "aesni_gcm_pmd_private.h" @@ -316,6 +317,85 @@ aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev, } } +static int +aesni_gcm_security_session_create(void *dev, + struct rte_security_session_conf *conf, + struct rte_security_session *sess, + struct rte_mempool *mempool) +{ + struct rte_cryptodev *cdev = dev; + struct aesni_gcm_private *internals = cdev->data->dev_private; + struct aesni_gcm_security_session *sess_priv; + int ret; + + if (!conf->crypto_xform) { + AESNI_GCM_LOG(ERR, "Invalid security session conf"); + return -EINVAL; + } + + if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + AESNI_GCM_LOG(ERR, "GMAC is not supported in security session"); + return -EINVAL; + } + + + if (rte_mempool_get(mempool, (void **)(&sess_priv))) { + AESNI_GCM_LOG(ERR, + "Couldn't get object from session mempool"); + 
return -ENOMEM; + } + + ret = aesni_gcm_set_session_parameters(internals->ops, + &sess_priv->sess, conf->crypto_xform); + if (ret != 0) { + AESNI_GCM_LOG(ERR, "Failed to configure session parameters"); + + /* Return session to mempool */ + rte_mempool_put(mempool, (void *)sess_priv); + return ret; + } + + sess_priv->pre = internals->ops[sess_priv->sess.key].pre; + sess_priv->init = internals->ops[sess_priv->sess.key].init; + if (sess_priv->sess.op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) { + sess_priv->update = + internals->ops[sess_priv->sess.key].update_enc; + sess_priv->finalize = + internals->ops[sess_priv->sess.key].finalize_enc; + } else { + sess_priv->update = + internals->ops[sess_priv->sess.key].update_dec; + sess_priv->finalize = + internals->ops[sess_priv->sess.key].finalize_dec; + } + + sess->sess_private_data = sess_priv; + + return 0; +} + +static int +aesni_gcm_security_session_destroy(void *dev __rte_unused, + struct rte_security_session *sess) +{ + void *sess_priv = get_sec_session_private_data(sess); + + if (sess_priv) { + struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv); + + memset(sess_priv, 0, sizeof(struct aesni_gcm_security_session)); + set_sec_session_private_data(sess, NULL); + rte_mempool_put(sess_mp, sess_priv); + } + return 0; +} + +static unsigned int +aesni_gcm_sec_session_get_size(__rte_unused void *device) +{ + return sizeof(struct aesni_gcm_security_session); +} + struct rte_cryptodev_ops aesni_gcm_pmd_ops = { .dev_configure = aesni_gcm_pmd_config, .dev_start = aesni_gcm_pmd_start, @@ -336,4 +416,19 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = { .sym_session_clear = aesni_gcm_pmd_sym_session_clear }; +static struct rte_security_ops aesni_gcm_security_ops = { + .session_create = aesni_gcm_security_session_create, + .session_get_size = aesni_gcm_sec_session_get_size, + .session_update = NULL, + .session_stats_get = NULL, + .session_destroy = aesni_gcm_security_session_destroy, + .set_pkt_metadata = NULL, + .capabilities_get =
NULL, + .process_cpu_crypto_bulk = + aesni_gcm_sec_crypto_process_bulk, +}; + struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops; + +struct rte_security_ops *rte_aesni_gcm_pmd_security_ops = + &aesni_gcm_security_ops; diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h index 56b29e013..8e490b6ce 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h @@ -114,5 +114,28 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops, * Device specific operations function pointer structure */ extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops; +/** + * Security session structure. + */ +struct aesni_gcm_security_session { + /** Temp digest for decryption */ + uint8_t temp_digest[DIGEST_LENGTH_MAX]; + /** GCM operations */ + aesni_gcm_pre_t pre; + aesni_gcm_init_t init; + aesni_gcm_update_t update; + aesni_gcm_finalize_t finalize; + /** AESNI-GCM session */ + struct aesni_gcm_session sess; + /** AESNI-GCM context */ + struct gcm_context_data gdata_ctx; +}; + +extern void +aesni_gcm_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + +extern struct rte_security_ops *rte_aesni_gcm_pmd_security_ops; #endif /* _RTE_AESNI_GCM_PMD_PRIVATE_H_ */ From patchwork Fri Sep 6 13:13:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58865 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 420731F384; Fri, 6 Sep 2019 15:13:44 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id C1AF01F37A for ; Fri, 6 Sep 2019 
15:13:39 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,473,1559545200"; d="scan'208";a="213140733" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2019 06:13:37 -0700 From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Fri, 6 Sep 2019 14:13:23 +0100 Message-Id: <20190906131330.40185-4-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190906131330.40185-1-roy.fan.zhang@intel.com> References: <20190903154046.55992-1-roy.fan.zhang@intel.com> <20190906131330.40185-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH 03/10] app/test: add security cpu crypto autotest X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds a CPU crypto unit test for the AESNI-GCM PMD.
Signed-off-by: Fan Zhang --- app/test/Makefile | 1 + app/test/meson.build | 1 + app/test/test_security_cpu_crypto.c | 564 ++++++++++++++++++++++++++++++++++++ 3 files changed, 566 insertions(+) create mode 100644 app/test/test_security_cpu_crypto.c diff --git a/app/test/Makefile b/app/test/Makefile index 26ba6fe2b..090c55746 100644 --- a/app/test/Makefile +++ b/app/test/Makefile @@ -196,6 +196,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += test_pmd_ring_perf.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_asym.c +SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_security_cpu_crypto.c SRCS-$(CONFIG_RTE_LIBRTE_METRICS) += test_metrics.c diff --git a/app/test/meson.build b/app/test/meson.build index ec40943bd..b7834ff21 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -103,6 +103,7 @@ test_sources = files('commands.c', 'test_ring_perf.c', 'test_rwlock.c', 'test_sched.c', + 'test_security_cpu_crypto.c', 'test_service_cores.c', 'test_spinlock.c', 'test_stack.c', diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c new file mode 100644 index 000000000..d345922b2 --- /dev/null +++ b/app/test/test_security_cpu_crypto.c @@ -0,0 +1,564 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include +#include + +#include "test.h" +#include "test_cryptodev.h" +#include "test_cryptodev_aead_test_vectors.h" + +#define CPU_CRYPTO_TEST_MAX_AAD_LENGTH 16 +#define MAX_NB_SIGMENTS 4 + +enum buffer_assemble_option { + SGL_MAX_SEG, + SGL_ONE_SEG, +}; + +struct cpu_crypto_test_case { + struct { + uint8_t seg[MBUF_DATAPAYLOAD_SIZE]; + uint32_t seg_len; + } seg_buf[MAX_NB_SIGMENTS]; + uint8_t iv[MAXIMUM_IV_LENGTH]; + uint8_t aad[CPU_CRYPTO_TEST_MAX_AAD_LENGTH]; + uint8_t 
digest[DIGEST_BYTE_LENGTH_SHA512]; +} __rte_cache_aligned; + +struct cpu_crypto_test_obj { + struct iovec vec[MAX_NUM_OPS_INFLIGHT][MAX_NB_SIGMENTS]; + struct rte_security_vec sec_buf[MAX_NUM_OPS_INFLIGHT]; + void *iv[MAX_NUM_OPS_INFLIGHT]; + void *digest[MAX_NUM_OPS_INFLIGHT]; + void *aad[MAX_NUM_OPS_INFLIGHT]; + int status[MAX_NUM_OPS_INFLIGHT]; +}; + +struct cpu_crypto_testsuite_params { + struct rte_mempool *buf_pool; + struct rte_mempool *session_priv_mpool; + struct rte_security_ctx *ctx; +}; + +struct cpu_crypto_unittest_params { + struct rte_security_session *sess; + void *test_datas[MAX_NUM_OPS_INFLIGHT]; + struct cpu_crypto_test_obj test_obj; + uint32_t nb_bufs; +}; + +static struct cpu_crypto_testsuite_params testsuite_params = { NULL }; +static struct cpu_crypto_unittest_params unittest_params; + +static int gbl_driver_id; + +static int +testsuite_setup(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct rte_cryptodev_info info; + uint32_t i; + uint32_t nb_devs; + uint32_t sess_sz; + int ret; + + memset(ts_params, 0, sizeof(*ts_params)); + + ts_params->buf_pool = rte_mempool_lookup("CPU_CRYPTO_MBUFPOOL"); + if (ts_params->buf_pool == NULL) { + /* Not already created so create */ + ts_params->buf_pool = rte_pktmbuf_pool_create( + "CPU_CRYPTO_MBUFPOOL", + NUM_MBUFS, MBUF_CACHE_SIZE, 0, + sizeof(struct cpu_crypto_test_case), + rte_socket_id()); + if (ts_params->buf_pool == NULL) { + RTE_LOG(ERR, USER1, "Can't create CPU_CRYPTO_MBUFPOOL\n"); + return TEST_FAILED; + } + } + + /* Create an AESNI MB device if required */ + if (gbl_driver_id == rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) { + nb_devs = rte_cryptodev_device_count_by_driver( + rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))); + if (nb_devs < 1) { + ret = rte_vdev_init( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL); + + TEST_ASSERT(ret == 0, + "Failed to create instance of" + " pmd : %s",
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)); + } + } + + /* Create an AESNI GCM device if required */ + if (gbl_driver_id == rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) { + nb_devs = rte_cryptodev_device_count_by_driver( + rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))); + if (nb_devs < 1) { + TEST_ASSERT_SUCCESS(rte_vdev_init( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL), + "Failed to create instance of" + " pmd : %s", + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + } + } + + nb_devs = rte_cryptodev_count(); + if (nb_devs < 1) { + RTE_LOG(ERR, USER1, "No crypto devices found?\n"); + return TEST_FAILED; + } + + /* Get security context */ + for (i = 0; i < nb_devs; i++) { + rte_cryptodev_info_get(i, &info); + if (info.driver_id != gbl_driver_id) + continue; + + ts_params->ctx = rte_cryptodev_get_sec_ctx(i); + if (!ts_params->ctx) { + RTE_LOG(ERR, USER1, "Rte_security is not supported\n"); + return TEST_FAILED; + } + } + + sess_sz = rte_security_session_get_size(ts_params->ctx); + ts_params->session_priv_mpool = rte_mempool_create( + "cpu_crypto_test_sess_mp", 2, sess_sz, 0, 0, + NULL, NULL, NULL, NULL, + SOCKET_ID_ANY, 0); + if (!ts_params->session_priv_mpool) { + RTE_LOG(ERR, USER1, "Not enough memory\n"); + return TEST_FAILED; + } + + return TEST_SUCCESS; +} + +static void +testsuite_teardown(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + + if (ts_params->buf_pool) + rte_mempool_free(ts_params->buf_pool); + + if (ts_params->session_priv_mpool) + rte_mempool_free(ts_params->session_priv_mpool); +} + +static int +ut_setup(void) +{ + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + + memset(ut_params, 0, sizeof(*ut_params)); + return TEST_SUCCESS; +} + +static void +ut_teardown(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + + if (ut_params->sess) + 
rte_security_session_destroy(ts_params->ctx, ut_params->sess); + + if (ut_params->nb_bufs) { + uint32_t i; + + for (i = 0; i < ut_params->nb_bufs; i++) + memset(ut_params->test_datas[i], 0, + sizeof(struct cpu_crypto_test_case)); + + rte_mempool_put_bulk(ts_params->buf_pool, ut_params->test_datas, + ut_params->nb_bufs); + } +} + +static int +allocate_buf(uint32_t n) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + int ret; + + ret = rte_mempool_get_bulk(ts_params->buf_pool, ut_params->test_datas, + n); + + if (ret == 0) + ut_params->nb_bufs = n; + + return ret; +} + +static int +check_status(struct cpu_crypto_test_obj *obj, uint32_t n) +{ + uint32_t i; + + for (i = 0; i < n; i++) + if (obj->status[i] < 0) + return -1; + + return 0; +} + +static struct rte_security_session * +create_aead_session(struct rte_security_ctx *ctx, + struct rte_mempool *sess_mp, + enum rte_crypto_aead_operation op, + const struct aead_test_data *test_data, + uint32_t is_unit_test) +{ + struct rte_security_session_conf sess_conf = {0}; + struct rte_crypto_sym_xform xform = {0}; + + if (is_unit_test) + debug_hexdump(stdout, "key:", test_data->key.data, + test_data->key.len); + + /* Setup AEAD Parameters */ + xform.type = RTE_CRYPTO_SYM_XFORM_AEAD; + xform.next = NULL; + xform.aead.algo = test_data->algo; + xform.aead.op = op; + xform.aead.key.data = test_data->key.data; + xform.aead.key.length = test_data->key.len; + xform.aead.iv.offset = 0; + xform.aead.iv.length = test_data->iv.len; + xform.aead.digest_length = test_data->auth_tag.len; + xform.aead.aad_length = test_data->aad.len; + + sess_conf.action_type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; + sess_conf.crypto_xform = &xform; + + return rte_security_session_create(ctx, &sess_conf, sess_mp); +} + +static inline int +assemble_aead_buf(struct cpu_crypto_test_case *data, + struct cpu_crypto_test_obj *obj, + uint32_t obj_idx, + enum 
rte_crypto_aead_operation op, + const struct aead_test_data *test_data, + enum buffer_assemble_option sgl_option, + uint32_t is_unit_test) +{ + const uint8_t *src; + uint32_t src_len; + uint32_t seg_idx; + uint32_t bytes_per_seg; + uint32_t left; + + if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { + src = test_data->plaintext.data; + src_len = test_data->plaintext.len; + if (is_unit_test) + debug_hexdump(stdout, "plaintext:", src, src_len); + } else { + src = test_data->ciphertext.data; + src_len = test_data->ciphertext.len; + memcpy(data->digest, test_data->auth_tag.data, + test_data->auth_tag.len); + if (is_unit_test) { + debug_hexdump(stdout, "ciphertext:", src, src_len); + debug_hexdump(stdout, "digest:", + test_data->auth_tag.data, + test_data->auth_tag.len); + } + } + + if (src_len > MBUF_DATAPAYLOAD_SIZE) + return -ENOMEM; + + switch (sgl_option) { + case SGL_MAX_SEG: + seg_idx = 0; + bytes_per_seg = src_len / MAX_NB_SIGMENTS + 1; + left = src_len; + + if (bytes_per_seg > (MBUF_DATAPAYLOAD_SIZE / MAX_NB_SIGMENTS)) + return -ENOMEM; + + while (left) { + uint32_t cp_len = RTE_MIN(left, bytes_per_seg); + memcpy(data->seg_buf[seg_idx].seg, src, cp_len); + data->seg_buf[seg_idx].seg_len = cp_len; + obj->vec[obj_idx][seg_idx].iov_base = + (void *)data->seg_buf[seg_idx].seg; + obj->vec[obj_idx][seg_idx].iov_len = cp_len; + src += cp_len; + left -= cp_len; + seg_idx++; + } + + if (left) + return -ENOMEM; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = seg_idx; + + break; + case SGL_ONE_SEG: + memcpy(data->seg_buf[0].seg, src, src_len); + data->seg_buf[0].seg_len = src_len; + obj->vec[obj_idx][0].iov_base = + (void *)data->seg_buf[0].seg; + obj->vec[obj_idx][0].iov_len = src_len; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = 1; + break; + default: + return -1; + } + + if (test_data->algo == RTE_CRYPTO_AEAD_AES_CCM) { + memcpy(data->iv + 1, test_data->iv.data, test_data->iv.len); + memcpy(data->aad + 18, 
test_data->aad.data, test_data->aad.len); + } else { + memcpy(data->iv, test_data->iv.data, test_data->iv.len); + memcpy(data->aad, test_data->aad.data, test_data->aad.len); + } + + if (is_unit_test) { + debug_hexdump(stdout, "iv:", test_data->iv.data, + test_data->iv.len); + debug_hexdump(stdout, "aad:", test_data->aad.data, + test_data->aad.len); + } + + obj->iv[obj_idx] = (void *)data->iv; + obj->digest[obj_idx] = (void *)data->digest; + obj->aad[obj_idx] = (void *)data->aad; + + return 0; +} + +#define CPU_CRYPTO_ERR_EXP_CT "expect ciphertext:" +#define CPU_CRYPTO_ERR_GEN_CT "gen ciphertext:" +#define CPU_CRYPTO_ERR_EXP_PT "expect plaintext:" +#define CPU_CRYPTO_ERR_GEN_PT "gen plaintext:" + +static int +check_aead_result(struct cpu_crypto_test_case *tcase, + enum rte_crypto_aead_operation op, + const struct aead_test_data *tdata) +{ + const char *err_msg1, *err_msg2; + const uint8_t *src_pt_ct; + const uint8_t *tmp_src; + uint32_t src_len; + uint32_t left; + uint32_t i = 0; + int ret; + + if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { + err_msg1 = CPU_CRYPTO_ERR_EXP_CT; + err_msg2 = CPU_CRYPTO_ERR_GEN_CT; + + src_pt_ct = tdata->ciphertext.data; + src_len = tdata->ciphertext.len; + + ret = memcmp(tcase->digest, tdata->auth_tag.data, + tdata->auth_tag.len); + if (ret != 0) { + debug_hexdump(stdout, "expect digest:", + tdata->auth_tag.data, + tdata->auth_tag.len); + debug_hexdump(stdout, "gen digest:", + tcase->digest, + tdata->auth_tag.len); + return -1; + } + } else { + src_pt_ct = tdata->plaintext.data; + src_len = tdata->plaintext.len; + err_msg1 = CPU_CRYPTO_ERR_EXP_PT; + err_msg2 = CPU_CRYPTO_ERR_GEN_PT; + } + + tmp_src = src_pt_ct; + left = src_len; + + while (left && i < MAX_NB_SIGMENTS) { + ret = memcmp(tcase->seg_buf[i].seg, tmp_src, + tcase->seg_buf[i].seg_len); + if (ret != 0) + goto sgl_err_dump; + tmp_src += tcase->seg_buf[i].seg_len; + left -= tcase->seg_buf[i].seg_len; + i++; + } + + if (left) { + ret = -ENOMEM; + goto sgl_err_dump; + } + + return 0; + 
+sgl_err_dump: + left = src_len; + i = 0; + + debug_hexdump(stdout, err_msg1, + tdata->ciphertext.data, + tdata->ciphertext.len); + + while (left && i < MAX_NB_SIGMENTS) { + debug_hexdump(stdout, err_msg2, + tcase->seg_buf[i].seg, + tcase->seg_buf[i].seg_len); + left -= tcase->seg_buf[i].seg_len; + i++; + } + return ret; +} + +static inline void +run_test(struct rte_security_ctx *ctx, struct rte_security_session *sess, + struct cpu_crypto_test_obj *obj, uint32_t n) +{ + rte_security_process_cpu_crypto_bulk(ctx, sess, obj->sec_buf, + obj->iv, obj->aad, obj->digest, obj->status, n); +} + +static int +cpu_crypto_test_aead(const struct aead_test_data *tdata, + enum rte_crypto_aead_operation dir, + enum buffer_assemble_option sgl_option) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + int ret; + + ut_params->sess = create_aead_session(ts_params->ctx, + ts_params->session_priv_mpool, + dir, + tdata, + 1); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(1); + if (ret) + return ret; + + tcase = ut_params->test_datas[0]; + ret = assemble_aead_buf(tcase, obj, 0, dir, tdata, sgl_option, 1); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + + run_test(ts_params->ctx, ut_params->sess, obj, 1); + + ret = check_status(obj, 1); + if (ret < 0) + return ret; + + ret = check_aead_result(tcase, dir, tdata); + if (ret < 0) + return ret; + + return 0; +} + +/* test-vector/sgl-option */ +#define all_gcm_unit_test_cases(type) \ + TEST_EXPAND(gcm_test_case_1, type) \ + TEST_EXPAND(gcm_test_case_2, type) \ + TEST_EXPAND(gcm_test_case_3, type) \ + TEST_EXPAND(gcm_test_case_4, type) \ + TEST_EXPAND(gcm_test_case_5, type) \ + TEST_EXPAND(gcm_test_case_6, type) \ + TEST_EXPAND(gcm_test_case_7, type) \ + TEST_EXPAND(gcm_test_case_8, type) \ + 
TEST_EXPAND(gcm_test_case_192_1, type) \ + TEST_EXPAND(gcm_test_case_192_2, type) \ + TEST_EXPAND(gcm_test_case_192_3, type) \ + TEST_EXPAND(gcm_test_case_192_4, type) \ + TEST_EXPAND(gcm_test_case_192_5, type) \ + TEST_EXPAND(gcm_test_case_192_6, type) \ + TEST_EXPAND(gcm_test_case_192_7, type) \ + TEST_EXPAND(gcm_test_case_256_1, type) \ + TEST_EXPAND(gcm_test_case_256_2, type) \ + TEST_EXPAND(gcm_test_case_256_3, type) \ + TEST_EXPAND(gcm_test_case_256_4, type) \ + TEST_EXPAND(gcm_test_case_256_5, type) \ + TEST_EXPAND(gcm_test_case_256_6, type) \ + TEST_EXPAND(gcm_test_case_256_7, type) + + +#define TEST_EXPAND(t, o) \ +static int \ +cpu_crypto_aead_enc_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_ENCRYPT, o); \ +} \ +static int \ +cpu_crypto_aead_dec_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_DECRYPT, o); \ +} \ + +all_gcm_unit_test_cases(SGL_ONE_SEG) +all_gcm_unit_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesgcm_testsuite = { + .suite_name = "Security CPU Crypto AESNI-GCM Unit Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_enc_test_##t##_##o), \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_dec_test_##t##_##o), \ + + all_gcm_unit_test_cases(SGL_ONE_SEG) + all_gcm_unit_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_gcm(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + + return unit_test_suite_runner(&security_cpu_crypto_aesgcm_testsuite); +} + +REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, + test_security_cpu_crypto_aesni_gcm); From patchwork Fri Sep 6 13:13:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 
1.0 Content-Transfer-Encoding: 7bit
From: Fan Zhang
To: dev@dpdk.org
Date: Fri, 6 Sep 2019 14:13:24 +0100
Message-Id: <20190906131330.40185-5-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH 04/10] app/test: add security cpu crypto perftest

Since the crypto perf application does not support rte_security, this patch adds a simple GCM CPU crypto performance test to the crypto unit test application.
The test includes different key and data sizes test with single buffer and SGL buffer test items and will display the throughput as well as cycle count performance information. Signed-off-by: Fan Zhang --- app/test/test_security_cpu_crypto.c | 201 ++++++++++++++++++++++++++++++++++++ 1 file changed, 201 insertions(+) diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c index d345922b2..ca9a8dae6 100644 --- a/app/test/test_security_cpu_crypto.c +++ b/app/test/test_security_cpu_crypto.c @@ -23,6 +23,7 @@ #define CPU_CRYPTO_TEST_MAX_AAD_LENGTH 16 #define MAX_NB_SIGMENTS 4 +#define CACHE_WARM_ITER 2048 enum buffer_assemble_option { SGL_MAX_SEG, @@ -560,5 +561,205 @@ test_security_cpu_crypto_aesni_gcm(void) return unit_test_suite_runner(&security_cpu_crypto_aesgcm_testsuite); } + +static inline void +gen_rand(uint8_t *data, uint32_t len) +{ + uint32_t i; + + for (i = 0; i < len; i++) + data[i] = (uint8_t)rte_rand(); +} + +static inline void +switch_aead_enc_to_dec(struct aead_test_data *tdata, + struct cpu_crypto_test_case *tcase, + enum buffer_assemble_option sgl_option) +{ + uint32_t i; + uint8_t *dst = tdata->ciphertext.data; + + switch (sgl_option) { + case SGL_ONE_SEG: + memcpy(dst, tcase->seg_buf[0].seg, tcase->seg_buf[0].seg_len); + tdata->ciphertext.len = tcase->seg_buf[0].seg_len; + break; + case SGL_MAX_SEG: + tdata->ciphertext.len = 0; + for (i = 0; i < MAX_NB_SIGMENTS; i++) { + memcpy(dst, tcase->seg_buf[i].seg, + tcase->seg_buf[i].seg_len); + tdata->ciphertext.len += tcase->seg_buf[i].seg_len; + } + break; + } + + memcpy(tdata->auth_tag.data, tcase->digest, tdata->auth_tag.len); +} + +static int +cpu_crypto_test_aead_perf(enum buffer_assemble_option sgl_option, + uint32_t key_sz) +{ + struct aead_test_data tdata = {0}; + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct 
cpu_crypto_test_case *tcase; + uint64_t hz = rte_get_tsc_hz(), time_start, time_now; + double rate, cycles_per_buf; + uint32_t test_data_szs[] = {64, 128, 256, 512, 1024, 2048}; + uint32_t i, j; + uint8_t aad[16]; + int ret; + + tdata.key.len = key_sz; + gen_rand(tdata.key.data, tdata.key.len); + tdata.algo = RTE_CRYPTO_AEAD_AES_GCM; + tdata.aad.data = aad; + + ut_params->sess = create_aead_session(ts_params->ctx, + ts_params->session_priv_mpool, + RTE_CRYPTO_AEAD_OP_DECRYPT, + &tdata, + 0); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(MAX_NUM_OPS_INFLIGHT); + if (ret) + return ret; + + for (i = 0; i < RTE_DIM(test_data_szs); i++) { + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tdata.plaintext.len = test_data_szs[i]; + gen_rand(tdata.plaintext.data, + tdata.plaintext.len); + + tdata.aad.len = 12; + gen_rand(tdata.aad.data, tdata.aad.len); + + tdata.auth_tag.len = 16; + + tdata.iv.len = 16; + gen_rand(tdata.iv.data, tdata.iv.len); + + tcase = ut_params->test_datas[j]; + ret = assemble_aead_buf(tcase, obj, j, + RTE_CRYPTO_AEAD_OP_ENCRYPT, + &tdata, sgl_option, 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + /* warm up cache */ + for (j = 0; j < CACHE_WARM_ITER; j++) + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_start = rte_rdtsc(); + + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_now = rte_rdtsc(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("AES-GCM-%u(%4uB) Enc %03.3fMpps (%03.3fGbps) ", + key_sz * 8, test_data_szs[i], rate, + rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tcase = ut_params->test_datas[j]; + + switch_aead_enc_to_dec(&tdata, tcase, sgl_option); + ret = 
assemble_aead_buf(tcase, obj, j, + RTE_CRYPTO_AEAD_OP_DECRYPT, + &tdata, sgl_option, 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + time_start = rte_get_timer_cycles(); + + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_now = rte_get_timer_cycles(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("AES-GCM-%u(%4uB) Dec %03.3fMpps (%03.3fGbps) ", + key_sz * 8, test_data_szs[i], rate, + rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + } + + return 0; +} + +/* test-perfix/key-size/sgl-type */ +#define all_gcm_perf_test_cases(type) \ + TEST_EXPAND(_128, 16, type) \ + TEST_EXPAND(_192, 24, type) \ + TEST_EXPAND(_256, 32, type) + +#define TEST_EXPAND(a, b, c) \ +static int \ +cpu_crypto_gcm_perf##a##_##c(void) \ +{ \ + return cpu_crypto_test_aead_perf(c, b); \ +} \ + +all_gcm_perf_test_cases(SGL_ONE_SEG) +all_gcm_perf_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesgcm_perf_testsuite = { + .suite_name = "Security CPU Crypto AESNI-GCM Perf Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(a, b, c) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_gcm_perf##a##_##c), \ + + all_gcm_perf_test_cases(SGL_ONE_SEG) + all_gcm_perf_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_gcm_perf(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + + return unit_test_suite_runner( + &security_cpu_crypto_aesgcm_perf_testsuite); +} + REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, test_security_cpu_crypto_aesni_gcm); + 
+REGISTER_TEST_COMMAND(security_aesni_gcm_perftest, + test_security_cpu_crypto_aesni_gcm_perf);

From patchwork Fri Sep 6 13:13:25 2019
From: Fan Zhang
To: dev@dpdk.org
Date: Fri, 6 Sep 2019 14:13:25 +0100
Message-Id: <20190906131330.40185-6-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH 05/10] crypto/aesni_mb: add rte_security handler

This patch adds rte_security support to the AESNI-MB PMD.
The PMD now initialize security context instance, create/delete PMD specific security sessions, and process crypto workloads in synchronous mode. Signed-off-by: Fan Zhang --- drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 291 ++++++++++++++++++++- drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 91 ++++++- drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 21 +- 3 files changed, 398 insertions(+), 5 deletions(-) diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c index b495a9679..68767c04e 100644 --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c @@ -8,6 +8,8 @@ #include #include #include +#include +#include #include #include #include @@ -789,6 +791,167 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session, (UINT64_MAX - u_src + u_dst + 1); } +union sec_userdata_field { + int status; + struct { + uint16_t is_gen_digest; + uint16_t digest_len; + }; +}; + +struct sec_udata_digest_field { + uint32_t is_digest_gen; + uint32_t digest_len; +}; + +static inline int +set_mb_job_params_sec(JOB_AES_HMAC *job, struct aesni_mb_sec_session *sec_sess, + void *buf, uint32_t buf_len, void *iv, void *aad, void *digest, + int *status, uint8_t *digest_idx) +{ + struct aesni_mb_session *session = &sec_sess->sess; + uint32_t cipher_offset = sec_sess->cipher_offset; + void *user_digest = NULL; + union sec_userdata_field udata; + + if (unlikely(cipher_offset > buf_len)) + return -EINVAL; + + /* Set crypto operation */ + job->chain_order = session->chain_order; + + /* Set cipher parameters */ + job->cipher_direction = session->cipher.direction; + job->cipher_mode = session->cipher.mode; + + job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes; + + /* Set authentication parameters */ + job->hash_alg = session->auth.algo; + job->iv = iv; + + switch (job->hash_alg) { + case AES_XCBC: + job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded; + job->u.XCBC._k2 
= session->auth.xcbc.k2; + job->u.XCBC._k3 = session->auth.xcbc.k3; + + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + break; + + case AES_CCM: + job->u.CCM.aad = (uint8_t *)aad + 18; + job->u.CCM.aad_len_in_bytes = session->aead.aad_len; + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + job->iv++; + break; + + case AES_CMAC: + job->u.CMAC._key_expanded = session->auth.cmac.expkey; + job->u.CMAC._skey1 = session->auth.cmac.skey1; + job->u.CMAC._skey2 = session->auth.cmac.skey2; + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + break; + + case AES_GMAC: + if (session->cipher.mode == GCM) { + job->u.GCM.aad = aad; + job->u.GCM.aad_len_in_bytes = session->aead.aad_len; + } else { + /* For GMAC */ + job->u.GCM.aad = aad; + job->u.GCM.aad_len_in_bytes = buf_len; + job->cipher_mode = GCM; + } + job->aes_enc_key_expanded = &session->cipher.gcm_key; + job->aes_dec_key_expanded = &session->cipher.gcm_key; + break; + + default: + job->u.HMAC._hashed_auth_key_xor_ipad = + session->auth.pads.inner; + job->u.HMAC._hashed_auth_key_xor_opad = + session->auth.pads.outer; + + if (job->cipher_mode == DES3) { + job->aes_enc_key_expanded = + session->cipher.exp_3des_keys.ks_ptr; + job->aes_dec_key_expanded = + session->cipher.exp_3des_keys.ks_ptr; + } else { + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + } + } + + /* Set digest output location */ + if (job->hash_alg != NULL_HASH && + session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) { + job->auth_tag_output = sec_sess->temp_digests[*digest_idx]; + *digest_idx = (*digest_idx + 1) % MAX_JOBS; + + udata.is_gen_digest = 0; + 
udata.digest_len = session->auth.req_digest_len; + user_digest = (void *)digest; + } else { + udata.is_gen_digest = 1; + udata.digest_len = session->auth.req_digest_len; + + if (session->auth.req_digest_len != + session->auth.gen_digest_len) { + job->auth_tag_output = + sec_sess->temp_digests[*digest_idx]; + *digest_idx = (*digest_idx + 1) % MAX_JOBS; + + user_digest = (void *)digest; + } else + job->auth_tag_output = digest; + + /* A bit of hack here, since job structure only supports + * 2 user data fields and we need 4 params to be passed + * (status, direction, digest for verify, and length of + * digest), we set the status value as digest length + + * direction here temporarily to avoid creating longer + * buffer to store all 4 params. + */ + *status = udata.status; + } + /* + * Multi-buffer library current only support returning a truncated + * digest length as specified in the relevant IPsec RFCs + */ + + /* Set digest length */ + job->auth_tag_output_len_in_bytes = session->auth.gen_digest_len; + + /* Set IV parameters */ + job->iv_len_in_bytes = session->iv.length; + + /* Data Parameters */ + job->src = buf; + job->dst = buf; + job->cipher_start_src_offset_in_bytes = cipher_offset; + job->msg_len_to_cipher_in_bytes = buf_len - cipher_offset; + job->hash_start_src_offset_in_bytes = 0; + job->msg_len_to_hash_in_bytes = buf_len; + + job->user_data = (void *)status; + job->user_data2 = user_digest; + + return 0; +} + /** * Process a crypto operation and complete a JOB_AES_HMAC job structure for * submission to the multi buffer library for processing. 
@@ -1081,6 +1244,37 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job) return op; } +static inline void +post_process_mb_sec_job(JOB_AES_HMAC *job) +{ + void *user_digest = job->user_data2; + int *status = job->user_data; + union sec_userdata_field udata; + + switch (job->status) { + case STS_COMPLETED: + if (user_digest) { + udata.status = *status; + + if (udata.is_gen_digest) { + *status = RTE_CRYPTO_OP_STATUS_SUCCESS; + memcpy(user_digest, job->auth_tag_output, + udata.digest_len); + } else { + verify_digest(job, user_digest, + udata.digest_len, (uint8_t *)status); + + if (*status == RTE_CRYPTO_OP_STATUS_AUTH_FAILED) + *status = -1; + } + } else + *status = RTE_CRYPTO_OP_STATUS_SUCCESS; + break; + default: + *status = RTE_CRYPTO_OP_STATUS_ERROR; + } +} + /** * Process a completed JOB_AES_HMAC job and keep processing jobs until * get_completed_job return NULL @@ -1117,6 +1311,32 @@ handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job, return processed_jobs; } +static inline uint32_t +handle_completed_sec_jobs(JOB_AES_HMAC *job, MB_MGR *mb_mgr) +{ + uint32_t processed = 0; + + while (job != NULL) { + post_process_mb_sec_job(job); + job = IMB_GET_COMPLETED_JOB(mb_mgr); + processed++; + } + + return processed; +} + +static inline uint32_t +flush_mb_sec_mgr(MB_MGR *mb_mgr) +{ + JOB_AES_HMAC *job = IMB_FLUSH_JOB(mb_mgr); + uint32_t processed = 0; + + if (job) + processed = handle_completed_sec_jobs(job, mb_mgr); + + return processed; +} + static inline uint16_t flush_mb_mgr(struct aesni_mb_qp *qp, struct rte_crypto_op **ops, uint16_t nb_ops) @@ -1220,6 +1440,55 @@ aesni_mb_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, return processed_jobs; } +void +aesni_mb_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num) +{ + struct aesni_mb_sec_session *sec_sess = sess->sess_private_data; + JOB_AES_HMAC *job; + uint8_t 
digest_idx = sec_sess->digest_idx; + uint32_t i, processed = 0; + int ret; + + for (i = 0; i < num; i++) { + void *seg_buf = buf[i].vec[0].iov_base; + uint32_t buf_len = buf[i].vec[0].iov_len; + + job = IMB_GET_NEXT_JOB(sec_sess->mb_mgr); + if (unlikely(job == NULL)) { + processed += flush_mb_sec_mgr(sec_sess->mb_mgr); + + job = IMB_GET_NEXT_JOB(sec_sess->mb_mgr); + if (!job) + return; + } + + ret = set_mb_job_params_sec(job, sec_sess, seg_buf, buf_len, + iv[i], aad[i], digest[i], &status[i], + &digest_idx); + /* Submit job to multi-buffer for processing */ + if (ret) { + processed++; + status[i] = ret; + continue; + } + +#ifdef RTE_LIBRTE_PMD_AESNI_MB_DEBUG + job = IMB_SUBMIT_JOB(sec_sess->mb_mgr); +#else + job = IMB_SUBMIT_JOB_NOCHECK(sec_sess->mb_mgr); +#endif + + if (job) + processed += handle_completed_sec_jobs(job, + sec_sess->mb_mgr); + } + + while (processed < num) + processed += flush_mb_sec_mgr(sec_sess->mb_mgr); +} + static int cryptodev_aesni_mb_remove(struct rte_vdev_device *vdev); static int @@ -1229,8 +1498,10 @@ cryptodev_aesni_mb_create(const char *name, { struct rte_cryptodev *dev; struct aesni_mb_private *internals; + struct rte_security_ctx *sec_ctx; enum aesni_mb_vector_mode vector_mode; MB_MGR *mb_mgr; + char sec_name[RTE_DEV_NAME_MAX_LEN]; /* Check CPU for support for AES instruction set */ if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) { @@ -1264,7 +1535,8 @@ cryptodev_aesni_mb_create(const char *name, dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_CPU_AESNI | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | + RTE_CRYPTODEV_FF_SECURITY; mb_mgr = alloc_mb_mgr(0); @@ -1303,11 +1575,28 @@ cryptodev_aesni_mb_create(const char *name, AESNI_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s\n", imb_get_version_str()); + /* setup security operations */ + snprintf(sec_name, sizeof(sec_name) - 1, "aes_mb_sec_%u", + dev->driver_id); + sec_ctx = 
rte_zmalloc_socket(sec_name, + sizeof(struct rte_security_ctx), + RTE_CACHE_LINE_SIZE, init_params->socket_id); + if (sec_ctx == NULL) { + AESNI_MB_LOG(ERR, "memory allocation failed\n"); + goto error_exit; + } + + sec_ctx->device = (void *)dev; + sec_ctx->ops = rte_aesni_mb_pmd_security_ops; + dev->security_ctx = sec_ctx; + return 0; error_exit: if (mb_mgr) free_mb_mgr(mb_mgr); + if (sec_ctx) + rte_free(sec_ctx); rte_cryptodev_pmd_destroy(dev); diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c index 8d15b99d4..ca6cea775 100644 --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "rte_aesni_mb_pmd_private.h" @@ -732,7 +733,8 @@ aesni_mb_pmd_qp_count(struct rte_cryptodev *dev) static unsigned aesni_mb_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused) { - return sizeof(struct aesni_mb_session); + return RTE_ALIGN_CEIL(sizeof(struct aesni_mb_session), + RTE_CACHE_LINE_SIZE); } /** Configure a aesni multi-buffer session from a crypto xform chain */ @@ -810,4 +812,91 @@ struct rte_cryptodev_ops aesni_mb_pmd_ops = { .sym_session_clear = aesni_mb_pmd_sym_session_clear }; +/** Set session authentication parameters */ + +static int +aesni_mb_security_session_create(void *dev, + struct rte_security_session_conf *conf, + struct rte_security_session *sess, + struct rte_mempool *mempool) +{ + struct rte_cryptodev *cdev = dev; + struct aesni_mb_private *internals = cdev->data->dev_private; + struct aesni_mb_sec_session *sess_priv; + int ret; + + if (!conf->crypto_xform) { + AESNI_MB_LOG(ERR, "Invalid security session conf"); + return -EINVAL; + } + + if (rte_mempool_get(mempool, (void **)(&sess_priv))) { + AESNI_MB_LOG(ERR, + "Couldn't get object from session mempool"); + return -ENOMEM; + } + + sess_priv->mb_mgr = internals->mb_mgr; + if (sess_priv->mb_mgr == NULL) + return -ENOMEM; + + 
sess_priv->cipher_offset = conf->cpucrypto.cipher_offset; + + ret = aesni_mb_set_session_parameters(sess_priv->mb_mgr, + &sess_priv->sess, conf->crypto_xform); + if (ret != 0) { + AESNI_MB_LOG(ERR, "failed configure session parameters"); + + rte_mempool_put(mempool, sess_priv); + } + + sess->sess_private_data = (void *)sess_priv; + + return ret; +} + +static int +aesni_mb_security_session_destroy(void *dev __rte_unused, + struct rte_security_session *sess) +{ + struct aesni_mb_sec_session *sess_priv = + get_sec_session_private_data(sess); + + if (sess_priv) { + struct rte_mempool *sess_mp = rte_mempool_from_obj( + (void *)sess_priv); + + memset(sess, 0, sizeof(struct aesni_mb_sec_session)); + set_sec_session_private_data(sess, NULL); + + if (sess_mp == NULL) { + AESNI_MB_LOG(ERR, "failed fetch session mempool"); + return -EINVAL; + } + + rte_mempool_put(sess_mp, sess_priv); + } + + return 0; +} + +static unsigned int +aesni_mb_sec_session_get_size(__rte_unused void *device) +{ + return RTE_ALIGN_CEIL(sizeof(struct aesni_mb_sec_session), + RTE_CACHE_LINE_SIZE); +} + +static struct rte_security_ops aesni_mb_security_ops = { + .session_create = aesni_mb_security_session_create, + .session_get_size = aesni_mb_sec_session_get_size, + .session_update = NULL, + .session_stats_get = NULL, + .session_destroy = aesni_mb_security_session_destroy, + .set_pkt_metadata = NULL, + .capabilities_get = NULL, + .process_cpu_crypto_bulk = aesni_mb_sec_crypto_process_bulk, +}; + struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops; +struct rte_security_ops *rte_aesni_mb_pmd_security_ops = &aesni_mb_security_ops; diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h index b794d4bc1..d1cf416ab 100644 --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h @@ -176,7 +176,6 @@ struct aesni_mb_qp { */ } __rte_cache_aligned; -/** AES-NI multi-buffer private 
session structure */ struct aesni_mb_session { JOB_CHAIN_ORDER chain_order; struct { @@ -265,16 +264,32 @@ struct aesni_mb_session { /** AAD data length */ uint16_t aad_len; } aead; -} __rte_cache_aligned; +}; + +/** AES-NI multi-buffer private security session structure */ +struct aesni_mb_sec_session { + /**< Unique Queue Pair Name */ + struct aesni_mb_session sess; + uint8_t temp_digests[MAX_JOBS][DIGEST_LENGTH_MAX]; + uint16_t digest_idx; + uint32_t cipher_offset; + MB_MGR *mb_mgr; +}; extern int aesni_mb_set_session_parameters(const MB_MGR *mb_mgr, struct aesni_mb_session *sess, const struct rte_crypto_sym_xform *xform); +extern void +aesni_mb_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + /** device specific operations function pointer structure */ extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops; - +/** device specific operations function pointer structure for rte_security */ +extern struct rte_security_ops *rte_aesni_mb_pmd_security_ops; #endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */ From patchwork Fri Sep 6 13:13:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58868 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0815C1F3AD; Fri, 6 Sep 2019 15:13:56 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id B44911F391 for ; Fri, 6 Sep 2019 15:13:44 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: 
From: Fan Zhang
To: dev@dpdk.org
Date: Fri, 6 Sep 2019 14:13:26 +0100
Message-Id: <20190906131330.40185-7-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH 06/10] app/test: add aesni_mb security cpu crypto autotest

This patch adds a CPU crypto unit test for the AESNI-MB PMD.
Signed-off-by: Fan Zhang --- app/test/test_security_cpu_crypto.c | 367 ++++++++++++++++++++++++++++++++++++ 1 file changed, 367 insertions(+) diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c index ca9a8dae6..0ea406390 100644 --- a/app/test/test_security_cpu_crypto.c +++ b/app/test/test_security_cpu_crypto.c @@ -19,12 +19,23 @@ #include "test.h" #include "test_cryptodev.h" +#include "test_cryptodev_blockcipher.h" +#include "test_cryptodev_aes_test_vectors.h" #include "test_cryptodev_aead_test_vectors.h" +#include "test_cryptodev_des_test_vectors.h" +#include "test_cryptodev_hash_test_vectors.h" #define CPU_CRYPTO_TEST_MAX_AAD_LENGTH 16 #define MAX_NB_SIGMENTS 4 #define CACHE_WARM_ITER 2048 +#define TOP_ENC BLOCKCIPHER_TEST_OP_ENCRYPT +#define TOP_DEC BLOCKCIPHER_TEST_OP_DECRYPT +#define TOP_AUTH_GEN BLOCKCIPHER_TEST_OP_AUTH_GEN +#define TOP_AUTH_VER BLOCKCIPHER_TEST_OP_AUTH_VERIFY +#define TOP_ENC_AUTH BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN +#define TOP_AUTH_DEC BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC + enum buffer_assemble_option { SGL_MAX_SEG, SGL_ONE_SEG, @@ -516,6 +527,11 @@ cpu_crypto_test_aead(const struct aead_test_data *tdata, TEST_EXPAND(gcm_test_case_256_6, type) \ TEST_EXPAND(gcm_test_case_256_7, type) +/* test-vector/sgl-option */ +#define all_ccm_unit_test_cases \ + TEST_EXPAND(ccm_test_case_128_1, SGL_ONE_SEG) \ + TEST_EXPAND(ccm_test_case_128_2, SGL_ONE_SEG) \ + TEST_EXPAND(ccm_test_case_128_3, SGL_ONE_SEG) #define TEST_EXPAND(t, o) \ static int \ @@ -531,6 +547,7 @@ cpu_crypto_aead_dec_test_##t##_##o(void) \ all_gcm_unit_test_cases(SGL_ONE_SEG) all_gcm_unit_test_cases(SGL_MAX_SEG) +all_ccm_unit_test_cases #undef TEST_EXPAND static struct unit_test_suite security_cpu_crypto_aesgcm_testsuite = { @@ -758,8 +775,358 @@ test_security_cpu_crypto_aesni_gcm_perf(void) &security_cpu_crypto_aesgcm_perf_testsuite); } +static struct rte_security_session * +create_blockcipher_session(struct rte_security_ctx *ctx, + struct rte_mempool 
*sess_mp, + uint32_t op_mask, + const struct blockcipher_test_data *test_data, + uint32_t is_unit_test) +{ + struct rte_security_session_conf sess_conf = {0}; + struct rte_crypto_sym_xform xforms[2] = { {0} }; + struct rte_crypto_sym_xform *cipher_xform = NULL; + struct rte_crypto_sym_xform *auth_xform = NULL; + struct rte_crypto_sym_xform *xform; + + if (op_mask & BLOCKCIPHER_TEST_OP_CIPHER) { + cipher_xform = &xforms[0]; + cipher_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER; + + if (op_mask & TOP_ENC) + cipher_xform->cipher.op = + RTE_CRYPTO_CIPHER_OP_ENCRYPT; + else + cipher_xform->cipher.op = + RTE_CRYPTO_CIPHER_OP_DECRYPT; + + cipher_xform->cipher.algo = test_data->crypto_algo; + cipher_xform->cipher.key.data = test_data->cipher_key.data; + cipher_xform->cipher.key.length = test_data->cipher_key.len; + cipher_xform->cipher.iv.offset = 0; + cipher_xform->cipher.iv.length = test_data->iv.len; + + if (is_unit_test) + debug_hexdump(stdout, "cipher key:", + test_data->cipher_key.data, + test_data->cipher_key.len); + } + + if (op_mask & BLOCKCIPHER_TEST_OP_AUTH) { + auth_xform = &xforms[1]; + auth_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH; + + if (op_mask & TOP_AUTH_GEN) + auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE; + else + auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_VERIFY; + + auth_xform->auth.algo = test_data->auth_algo; + auth_xform->auth.key.length = test_data->auth_key.len; + auth_xform->auth.key.data = test_data->auth_key.data; + auth_xform->auth.digest_length = test_data->digest.len; + + if (is_unit_test) + debug_hexdump(stdout, "auth key:", + test_data->auth_key.data, + test_data->auth_key.len); + } + + if (op_mask == TOP_ENC || + op_mask == TOP_DEC) + xform = cipher_xform; + else if (op_mask == TOP_AUTH_GEN || + op_mask == TOP_AUTH_VER) + xform = auth_xform; + else if (op_mask == TOP_ENC_AUTH) { + xform = cipher_xform; + xform->next = auth_xform; + } else if (op_mask == TOP_AUTH_DEC) { + xform = auth_xform; + xform->next = cipher_xform; + } else + return 
NULL; + + if (test_data->cipher_offset < test_data->auth_offset) + return NULL; + + sess_conf.action_type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; + sess_conf.crypto_xform = xform; + sess_conf.cpucrypto.cipher_offset = test_data->cipher_offset - + test_data->auth_offset; + + return rte_security_session_create(ctx, &sess_conf, sess_mp); +} + +static inline int +assemble_blockcipher_buf(struct cpu_crypto_test_case *data, + struct cpu_crypto_test_obj *obj, + uint32_t obj_idx, + uint32_t op_mask, + const struct blockcipher_test_data *test_data, + uint32_t is_unit_test) +{ + const uint8_t *src; + uint32_t src_len; + uint32_t offset; + + if (op_mask == TOP_ENC_AUTH || + op_mask == TOP_AUTH_GEN || + op_mask == BLOCKCIPHER_TEST_OP_AUTH_VERIFY) + offset = test_data->auth_offset; + else + offset = test_data->cipher_offset; + + if (op_mask & TOP_ENC_AUTH) { + src = test_data->plaintext.data; + src_len = test_data->plaintext.len; + if (is_unit_test) + debug_hexdump(stdout, "plaintext:", src, src_len); + } else { + src = test_data->ciphertext.data; + src_len = test_data->ciphertext.len; + memcpy(data->digest, test_data->digest.data, + test_data->digest.len); + if (is_unit_test) { + debug_hexdump(stdout, "ciphertext:", src, src_len); + debug_hexdump(stdout, "digest:", test_data->digest.data, + test_data->digest.len); + } + } + + if (src_len > MBUF_DATAPAYLOAD_SIZE) + return -ENOMEM; + + memcpy(data->seg_buf[0].seg, src, src_len); + data->seg_buf[0].seg_len = src_len; + obj->vec[obj_idx][0].iov_base = + (void *)(data->seg_buf[0].seg + offset); + obj->vec[obj_idx][0].iov_len = src_len - offset; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = 1; + + memcpy(data->iv, test_data->iv.data, test_data->iv.len); + if (is_unit_test) + debug_hexdump(stdout, "iv:", test_data->iv.data, + test_data->iv.len); + + obj->iv[obj_idx] = (void *)data->iv; + obj->digest[obj_idx] = (void *)data->digest; + + return 0; +} + +static int +check_blockcipher_result(struct 
cpu_crypto_test_case *tcase, + uint32_t op_mask, + const struct blockcipher_test_data *test_data) +{ + int ret; + + if (op_mask & BLOCKCIPHER_TEST_OP_CIPHER) { + const char *err_msg1, *err_msg2; + const uint8_t *src_pt_ct; + uint32_t src_len; + + if (op_mask & TOP_ENC) { + src_pt_ct = test_data->ciphertext.data; + src_len = test_data->ciphertext.len; + err_msg1 = CPU_CRYPTO_ERR_EXP_CT; + err_msg2 = CPU_CRYPTO_ERR_GEN_CT; + } else { + src_pt_ct = test_data->plaintext.data; + src_len = test_data->plaintext.len; + err_msg1 = CPU_CRYPTO_ERR_EXP_PT; + err_msg2 = CPU_CRYPTO_ERR_GEN_PT; + } + + ret = memcmp(tcase->seg_buf[0].seg, src_pt_ct, src_len); + if (ret != 0) { + debug_hexdump(stdout, err_msg1, src_pt_ct, src_len); + debug_hexdump(stdout, err_msg2, + tcase->seg_buf[0].seg, + test_data->ciphertext.len); + return -1; + } + } + + if (op_mask & TOP_AUTH_GEN) { + ret = memcmp(tcase->digest, test_data->digest.data, + test_data->digest.len); + if (ret != 0) { + debug_hexdump(stdout, "expect digest:", + test_data->digest.data, + test_data->digest.len); + debug_hexdump(stdout, "gen digest:", + tcase->digest, + test_data->digest.len); + return -1; + } + } + + return 0; +} + +static int +cpu_crypto_test_blockcipher(const struct blockcipher_test_data *tdata, + uint32_t op_mask) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + int ret; + + ut_params->sess = create_blockcipher_session(ts_params->ctx, + ts_params->session_priv_mpool, + op_mask, + tdata, + 1); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(1); + if (ret) + return ret; + + tcase = ut_params->test_datas[0]; + ret = assemble_blockcipher_buf(tcase, obj, 0, op_mask, tdata, 1); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + + run_test(ts_params->ctx, ut_params->sess, obj, 1); + + 
ret = check_status(obj, 1); + if (ret < 0) + return ret; + + ret = check_blockcipher_result(tcase, op_mask, tdata); + if (ret < 0) + return ret; + + return 0; +} + +/* Macro to save code for defining BlockCipher test cases */ +/* test-vector-name/op */ +#define all_blockcipher_test_cases \ + TEST_EXPAND(aes_test_data_1, TOP_ENC) \ + TEST_EXPAND(aes_test_data_1, TOP_DEC) \ + TEST_EXPAND(aes_test_data_1, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_1, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_2, TOP_ENC) \ + TEST_EXPAND(aes_test_data_2, TOP_DEC) \ + TEST_EXPAND(aes_test_data_2, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_2, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_3, TOP_ENC) \ + TEST_EXPAND(aes_test_data_3, TOP_DEC) \ + TEST_EXPAND(aes_test_data_3, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_3, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_4, TOP_ENC) \ + TEST_EXPAND(aes_test_data_4, TOP_DEC) \ + TEST_EXPAND(aes_test_data_4, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_4, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_5, TOP_ENC) \ + TEST_EXPAND(aes_test_data_5, TOP_DEC) \ + TEST_EXPAND(aes_test_data_5, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_5, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_6, TOP_ENC) \ + TEST_EXPAND(aes_test_data_6, TOP_DEC) \ + TEST_EXPAND(aes_test_data_6, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_6, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_7, TOP_ENC) \ + TEST_EXPAND(aes_test_data_7, TOP_DEC) \ + TEST_EXPAND(aes_test_data_7, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_7, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_8, TOP_ENC) \ + TEST_EXPAND(aes_test_data_8, TOP_DEC) \ + TEST_EXPAND(aes_test_data_8, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_8, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_9, TOP_ENC) \ + TEST_EXPAND(aes_test_data_9, TOP_DEC) \ + TEST_EXPAND(aes_test_data_9, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_9, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_10, TOP_ENC) \ + TEST_EXPAND(aes_test_data_10, TOP_DEC) \ + 
TEST_EXPAND(aes_test_data_11, TOP_ENC) \ + TEST_EXPAND(aes_test_data_11, TOP_DEC) \ + TEST_EXPAND(aes_test_data_12, TOP_ENC) \ + TEST_EXPAND(aes_test_data_12, TOP_DEC) \ + TEST_EXPAND(aes_test_data_12, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_12, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_13, TOP_ENC) \ + TEST_EXPAND(aes_test_data_13, TOP_DEC) \ + TEST_EXPAND(aes_test_data_13, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_13, TOP_AUTH_DEC) \ + TEST_EXPAND(des_test_data_1, TOP_ENC) \ + TEST_EXPAND(des_test_data_1, TOP_DEC) \ + TEST_EXPAND(des_test_data_2, TOP_ENC) \ + TEST_EXPAND(des_test_data_2, TOP_DEC) \ + TEST_EXPAND(des_test_data_3, TOP_ENC) \ + TEST_EXPAND(des_test_data_3, TOP_DEC) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_ENC) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_DEC) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_ENC_AUTH) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_AUTH_DEC) \ + TEST_EXPAND(triple_des64cbc_test_vector, TOP_ENC) \ + TEST_EXPAND(triple_des64cbc_test_vector, TOP_DEC) \ + TEST_EXPAND(triple_des128cbc_test_vector, TOP_ENC) \ + TEST_EXPAND(triple_des128cbc_test_vector, TOP_DEC) \ + TEST_EXPAND(triple_des192cbc_test_vector, TOP_ENC) \ + TEST_EXPAND(triple_des192cbc_test_vector, TOP_DEC) \ + +#define TEST_EXPAND(t, o) \ +static int \ +cpu_crypto_blockcipher_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_blockcipher(&t, o); \ +} + +all_blockcipher_test_cases +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesni_mb_testsuite = { + .suite_name = "Security CPU Crypto AESNI-MB Unit Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_enc_test_##t##_##o), \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_dec_test_##t##_##o), \ + + all_gcm_unit_test_cases(SGL_ONE_SEG) + all_ccm_unit_test_cases +#undef TEST_EXPAND + 
+#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_blockcipher_test_##t##_##o), \ + + all_blockcipher_test_cases +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_mb(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)); + + return unit_test_suite_runner(&security_cpu_crypto_aesni_mb_testsuite); +} + REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, test_security_cpu_crypto_aesni_gcm); REGISTER_TEST_COMMAND(security_aesni_gcm_perftest, test_security_cpu_crypto_aesni_gcm_perf); + +REGISTER_TEST_COMMAND(security_aesni_mb_autotest, + test_security_cpu_crypto_aesni_mb); From patchwork Fri Sep 6 13:13:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58869 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 117A11F3B4; Fri, 6 Sep 2019 15:14:00 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id 32D751F392 for ; Fri, 6 Sep 2019 15:13:46 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:45 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,473,1559545200"; d="scan'208";a="213140773" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2019 06:13:44 -0700 From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Fri, 6 Sep 2019 14:13:27 +0100 Message-Id: 
<20190906131330.40185-8-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190906131330.40185-1-roy.fan.zhang@intel.com> References: <20190903154046.55992-1-roy.fan.zhang@intel.com> <20190906131330.40185-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH 07/10] app/test: add aesni_mb security cpu crypto perftest Since the crypto perf application does not support rte_security, this patch adds a simple AES-CBC-SHA1-HMAC CPU crypto performance test to the crypto unit test application. The test covers a range of key and data sizes using single-buffer test items, and reports throughput as well as cycle-count figures. Signed-off-by: Fan Zhang --- app/test/test_security_cpu_crypto.c | 194 ++++++++++++++++++++++++++++++++++++ 1 file changed, 194 insertions(+) diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c index 0ea406390..6e012672e 100644 --- a/app/test/test_security_cpu_crypto.c +++ b/app/test/test_security_cpu_crypto.c @@ -1122,6 +1122,197 @@ test_security_cpu_crypto_aesni_mb(void) return unit_test_suite_runner(&security_cpu_crypto_aesni_mb_testsuite); } +static inline void +switch_blockcipher_enc_to_dec(struct blockcipher_test_data *tdata, + struct cpu_crypto_test_case *tcase, uint8_t *dst) +{ + memcpy(dst, tcase->seg_buf[0].seg, tcase->seg_buf[0].seg_len); + tdata->ciphertext.len = tcase->seg_buf[0].seg_len; + memcpy(tdata->digest.data, tcase->digest, tdata->digest.len); +} + +static int +cpu_crypto_test_blockcipher_perf( + const enum rte_crypto_cipher_algorithm cipher_algo, + uint32_t cipher_key_sz, + const enum rte_crypto_auth_algorithm auth_algo, + uint32_t auth_key_sz, uint32_t digest_sz, + uint32_t op_mask) +{ + struct blockcipher_test_data tdata = {0}; +
uint8_t plaintext[3000], ciphertext[3000]; + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + uint64_t hz = rte_get_tsc_hz(), time_start, time_now; + double rate, cycles_per_buf; + uint32_t test_data_szs[] = {64, 128, 256, 512, 1024, 2048}; + uint32_t i, j; + uint32_t op_mask_opp = 0; + int ret; + + if (op_mask & BLOCKCIPHER_TEST_OP_CIPHER) + op_mask_opp |= (~op_mask & BLOCKCIPHER_TEST_OP_CIPHER); + if (op_mask & BLOCKCIPHER_TEST_OP_AUTH) + op_mask_opp |= (~op_mask & BLOCKCIPHER_TEST_OP_AUTH); + + tdata.plaintext.data = plaintext; + tdata.ciphertext.data = ciphertext; + + tdata.cipher_key.len = cipher_key_sz; + tdata.auth_key.len = auth_key_sz; + + gen_rand(tdata.cipher_key.data, cipher_key_sz / 8); + gen_rand(tdata.auth_key.data, auth_key_sz / 8); + + tdata.crypto_algo = cipher_algo; + tdata.auth_algo = auth_algo; + + tdata.digest.len = digest_sz; + + ut_params->sess = create_blockcipher_session(ts_params->ctx, + ts_params->session_priv_mpool, + op_mask, + &tdata, + 0); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(MAX_NUM_OPS_INFLIGHT); + if (ret) + return ret; + + for (i = 0; i < RTE_DIM(test_data_szs); i++) { + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tdata.plaintext.len = test_data_szs[i]; + gen_rand(plaintext, tdata.plaintext.len); + + tdata.iv.len = 16; + gen_rand(tdata.iv.data, tdata.iv.len); + + tcase = ut_params->test_datas[j]; + ret = assemble_blockcipher_buf(tcase, obj, j, + op_mask, + &tdata, + 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + /* warm up cache */ + for (j = 0; j < CACHE_WARM_ITER; j++) + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_start = rte_rdtsc(); + + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_now = rte_rdtsc(); 
+ + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("%s-%u-%s(%4uB) Enc %03.3fMpps (%03.3fGbps) ", + rte_crypto_cipher_algorithm_strings[cipher_algo], + cipher_key_sz * 8, + rte_crypto_auth_algorithm_strings[auth_algo], + test_data_szs[i], + rate, rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, cycles_per_buf / test_data_szs[i]); + + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tcase = ut_params->test_datas[j]; + + switch_blockcipher_enc_to_dec(&tdata, tcase, + ciphertext); + ret = assemble_blockcipher_buf(tcase, obj, j, + op_mask_opp, + &tdata, + 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + time_start = rte_get_timer_cycles(); + + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_now = rte_get_timer_cycles(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("%s-%u-%s(%4uB) Dec %03.3fMpps (%03.3fGbps) ", + rte_crypto_cipher_algorithm_strings[cipher_algo], + cipher_key_sz * 8, + rte_crypto_auth_algorithm_strings[auth_algo], + test_data_szs[i], + rate, rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + } + + return 0; +} + +/* cipher-algo/cipher-key-len/auth-algo/auth-key-len/digest-len/op */ +#define all_block_cipher_perf_test_cases \ + TEST_EXPAND(_AES_CBC, 128, _NULL, 0, 0, TOP_ENC) \ + TEST_EXPAND(_NULL, 0, _SHA1_HMAC, 160, 20, TOP_AUTH_GEN) \ + TEST_EXPAND(_AES_CBC, 128, _SHA1_HMAC, 160, 20, TOP_ENC_AUTH) + +#define TEST_EXPAND(a, b, c, d, e, f) \ +static int \ +cpu_crypto_blockcipher_perf##a##_##b##c##_##f(void) \ +{ \ + return cpu_crypto_test_blockcipher_perf(RTE_CRYPTO_CIPHER##a, \ + b / 8, RTE_CRYPTO_AUTH##c, d / 8, e, f); \ +} \ + 
+all_block_cipher_perf_test_cases +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesni_mb_perf_testsuite = { + .suite_name = "Security CPU Crypto AESNI-MB Perf Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(a, b, c, d, e, f) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_blockcipher_perf##a##_##b##c##_##f), \ + + all_block_cipher_perf_test_cases +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_mb_perf(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)); + + return unit_test_suite_runner( + &security_cpu_crypto_aesni_mb_perf_testsuite); +} + + REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, test_security_cpu_crypto_aesni_gcm); @@ -1130,3 +1321,6 @@ REGISTER_TEST_COMMAND(security_aesni_gcm_perftest, REGISTER_TEST_COMMAND(security_aesni_mb_autotest, test_security_cpu_crypto_aesni_mb); + +REGISTER_TEST_COMMAND(security_aesni_mb_perftest, + test_security_cpu_crypto_aesni_mb_perf); From patchwork Fri Sep 6 13:13:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58870 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 43D041F3BC; Fri, 6 Sep 2019 15:14:04 +0200 (CEST) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by dpdk.org (Postfix) with ESMTP id A5E291F39F for ; Fri, 6 Sep 2019 15:13:49 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:47 -0700 X-ExtLoop1: 1 X-IronPort-AV: 
E=Sophos;i="5.64,473,1559545200"; d="scan'208";a="213140778" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2019 06:13:45 -0700 From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Fri, 6 Sep 2019 14:13:28 +0100 Message-Id: <20190906131330.40185-9-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190906131330.40185-1-roy.fan.zhang@intel.com> References: <20190903154046.55992-1-roy.fan.zhang@intel.com> <20190906131330.40185-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH 08/10] ipsec: add rte_security cpu_crypto action support This patch updates the ipsec library to handle the newly introduced RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO action.
Signed-off-by: Fan Zhang --- lib/librte_ipsec/esp_inb.c | 174 +++++++++++++++++++++++++- lib/librte_ipsec/esp_outb.c | 290 +++++++++++++++++++++++++++++++++++++++++++- lib/librte_ipsec/sa.c | 53 ++++++-- lib/librte_ipsec/sa.h | 29 +++++ lib/librte_ipsec/ses.c | 4 +- 5 files changed, 539 insertions(+), 11 deletions(-) diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c index 8e3ecbc64..6077dcb1e 100644 --- a/lib/librte_ipsec/esp_inb.c +++ b/lib/librte_ipsec/esp_inb.c @@ -105,6 +105,73 @@ inb_cop_prepare(struct rte_crypto_op *cop, } } +static inline int +inb_sync_crypto_proc_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + const union sym_op_data *icv, uint32_t pofs, uint32_t plen, + struct rte_security_vec *buf, struct iovec *cur_vec, + void *iv, void **aad, void **digest) +{ + struct rte_mbuf *ms; + struct iovec *vec = cur_vec; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint64_t *ivp; + uint32_t algo, left, off = 0, n_seg = 0; + + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct rte_esp_hdr)); + algo = sa->algo_type; + + switch (algo) { + case ALGO_TYPE_AES_GCM: + gcm = (struct aead_gcm_iv *)iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + *aad = icv->va + sa->icv_len; + off = sa->ctp.cipher.offset + pofs; + break; + case ALGO_TYPE_AES_CBC: + case ALGO_TYPE_3DES_CBC: + off = sa->ctp.auth.offset + pofs; + break; + case ALGO_TYPE_AES_CTR: + off = sa->ctp.auth.offset + pofs; + ctr = (struct aesctr_cnt_blk *)iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + case ALGO_TYPE_NULL: + break; + } + + *digest = icv->va; + + left = plen - sa->ctp.cipher.length; + + ms = mbuf_get_seg_ofs(mb, &off); + if (!ms) + return -1; + + while (n_seg < RTE_LIBRTE_IP_FRAG_MAX_FRAG && left && ms) { + uint32_t len = RTE_MIN(left, ms->data_len - off); + + vec->iov_base = rte_pktmbuf_mtod_offset(ms, void *, off); + vec->iov_len = len; + + left -= len; + vec++; + n_seg++; + ms = ms->next; + off = 0; + } + + if (left) + 
return -1; + + buf->vec = cur_vec; + buf->num = n_seg; + + return n_seg; +} + /* * Helper function for prepare() to deal with situation when * ICV is spread by two segments. Tries to move ICV completely into the @@ -512,7 +579,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], return k; } - /* * *process* function for tunnel packets */ @@ -625,6 +691,112 @@ esp_inb_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], return n; } +/* + * process packets using sync crypto engine + */ +static uint16_t +esp_inb_sync_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num, uint8_t sqh_len, + esp_inb_process_t process) +{ + int32_t rc; + uint32_t i, k, hl, n, p; + struct rte_ipsec_sa *sa; + struct replay_sqn *rsn; + union sym_op_data icv; + uint32_t sqn[num]; + uint32_t dr[num]; + struct rte_security_vec buf[num]; + struct iovec vec[RTE_LIBRTE_IP_FRAG_MAX_FRAG * num]; + uint32_t vec_idx = 0; + uint8_t ivs[num][IPSEC_MAX_IV_SIZE]; + void *iv[num]; + void *aad[num]; + void *digest[num]; + int status[num]; + + sa = ss->sa; + rsn = rsn_acquire(sa); + + k = 0; + for (i = 0; i != num; i++) { + hl = mb[i]->l2_len + mb[i]->l3_len; + rc = inb_pkt_prepare(sa, rsn, mb[i], hl, &icv); + if (rc >= 0) { + iv[k] = (void *)ivs[k]; + rc = inb_sync_crypto_proc_prepare(sa, mb[i], &icv, hl, + rc, &buf[k], &vec[vec_idx], iv[k], + &aad[k], &digest[k]); + if (rc < 0) { + dr[i - k] = i; + continue; + } + + vec_idx += rc; + k++; + } else + dr[i - k] = i; + } + + /* copy not prepared mbufs beyond good ones */ + if (k != num) { + rte_errno = EBADMSG; + + if (unlikely(k == 0)) + return 0; + + move_bad_mbufs(mb, dr, num, num - k); + } + + /* process the packets */ + n = 0; + rte_security_process_cpu_crypto_bulk(ss->security.ctx, + ss->security.ses, buf, iv, aad, digest, status, + k); + /* move failed process packets to dr */ + for (i = 0; i < k; i++) { + if (status[i]) { + dr[n++] = i; + rte_errno = EBADMSG; + } + } + + /* move bad 
packets to the back */ + if (n) + move_bad_mbufs(mb, dr, k, n); + + /* process packets */ + p = process(sa, mb, sqn, dr, k - n, sqh_len); + + if (p != k - n && p != 0) + move_bad_mbufs(mb, dr, k - n, k - n - p); + + if (p != num) + rte_errno = EBADMSG; + + return p; +} + +uint16_t +esp_inb_tun_sync_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + struct rte_ipsec_sa *sa = ss->sa; + + return esp_inb_sync_crypto_pkt_process(ss, mb, num, sa->sqh_len, + tun_process); +} + +uint16_t +esp_inb_trs_sync_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + struct rte_ipsec_sa *sa = ss->sa; + + return esp_inb_sync_crypto_pkt_process(ss, mb, num, sa->sqh_len, + trs_process); +} + /* * process group of ESP inbound tunnel packets. */ diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c index 55799a867..097cb663f 100644 --- a/lib/librte_ipsec/esp_outb.c +++ b/lib/librte_ipsec/esp_outb.c @@ -403,6 +403,292 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], return k; } + +static inline int +outb_sync_crypto_proc_prepare(struct rte_mbuf *m, const struct rte_ipsec_sa *sa, + const uint64_t ivp[IPSEC_MAX_IV_QWORD], + const union sym_op_data *icv, uint32_t hlen, uint32_t plen, + struct rte_security_vec *buf, struct iovec *cur_vec, void *iv, + void **aad, void **digest) +{ + struct rte_mbuf *ms; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + struct iovec *vec = cur_vec; + uint32_t left, off = 0, n_seg = 0; + uint32_t algo; + + algo = sa->algo_type; + + switch (algo) { + case ALGO_TYPE_AES_GCM: + gcm = iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + *aad = (void *)(icv->va + sa->icv_len); + off = sa->ctp.cipher.offset + hlen; + break; + case ALGO_TYPE_AES_CBC: + case ALGO_TYPE_3DES_CBC: + off = sa->ctp.auth.offset + hlen; + break; + case ALGO_TYPE_AES_CTR: + ctr = iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + case 
ALGO_TYPE_NULL: + break; + } + + *digest = (void *)icv->va; + + left = sa->ctp.cipher.length + plen; + + ms = mbuf_get_seg_ofs(m, &off); + if (!ms) + return -1; + + while (n_seg < RTE_LIBRTE_IP_FRAG_MAX_FRAG && left && ms) { + uint32_t len = RTE_MIN(left, ms->data_len - off); + + vec->iov_base = rte_pktmbuf_mtod_offset(ms, void *, off); + vec->iov_len = len; + + left -= len; + vec++; + n_seg++; + ms = ms->next; + off = 0; + } + + if (left) + return -1; + + buf->vec = cur_vec; + buf->num = n_seg; + + return n_seg; +} + +/** + * Local post-process function type with the same prototype as + * rte_ipsec_sa_pkt_func's process(). + */ +typedef uint16_t (*sync_crypto_post_process)(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], + uint16_t num); +static uint16_t +esp_outb_tun_sync_crypto_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num, + sync_crypto_post_process post_process) +{ + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + struct rte_security_ctx *ctx; + struct rte_security_session *rss; + union sym_op_data icv; + struct rte_security_vec buf[num]; + struct iovec vec[RTE_LIBRTE_IP_FRAG_MAX_FRAG * num]; + uint32_t vec_idx = 0; + void *aad[num]; + void *digest[num]; + void *iv[num]; + uint8_t ivs[num][IPSEC_MAX_IV_SIZE]; + uint64_t ivp[IPSEC_MAX_IV_QWORD]; + int status[num]; + uint32_t dr[num]; + uint32_t i, n, k; + int32_t rc; + + sa = ss->sa; + ctx = ss->security.ctx; + rss = ss->security.ses; + + k = 0; + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + for (i = 0; i != n; i++) { + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(ivp, sqc); + + /* try to update the packet itself */ + rc = outb_tun_pkt_prepare(sa, sqc, ivp, mb[i], &icv, + sa->sqh_len); + + /* success, setup crypto op */ + if (rc >= 0) { + outb_pkt_xprepare(sa, sqc, &icv); + + iv[k] = (void *)ivs[k]; + rc = outb_sync_crypto_proc_prepare(mb[i], sa, ivp, &icv, + 0, rc, &buf[k],
&vec[vec_idx], iv[k], + &aad[k], &digest[k]); + if (rc < 0) { + dr[i - k] = i; + rte_errno = -rc; + continue; + } + + vec_idx += rc; + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + if (unlikely(k == 0)) { + rte_errno = EBADMSG; + return 0; + } + + /* process the packets */ + n = 0; + rte_security_process_cpu_crypto_bulk(ctx, rss, buf, iv, aad, digest, + status, k); + /* move failed process packets to dr */ + for (i = 0; i < k; i++) { + if (status[i]) + dr[n++] = i; + } + + if (n) + move_bad_mbufs(mb, dr, k, n); + + return post_process(ss, mb, k - n); +} + +static uint16_t +esp_outb_trs_sync_crypto_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num, + sync_crypto_post_process post_process) + +{ + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + struct rte_security_ctx *ctx; + struct rte_security_session *rss; + union sym_op_data icv; + struct rte_security_vec buf[num]; + struct iovec vec[RTE_LIBRTE_IP_FRAG_MAX_FRAG * num]; + uint32_t vec_idx = 0; + void *aad[num]; + void *digest[num]; + uint8_t ivs[num][IPSEC_MAX_IV_SIZE]; + void *iv[num]; + int status[num]; + uint64_t ivp[IPSEC_MAX_IV_QWORD]; + uint32_t dr[num]; + uint32_t i, n, k; + uint32_t l2, l3; + int32_t rc; + + sa = ss->sa; + ctx = ss->security.ctx; + rss = ss->security.ses; + + k = 0; + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + for (i = 0; i != n; i++) { + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(ivp, sqc); + + /* try to update the packet itself */ + rc = outb_trs_pkt_prepare(sa, sqc, ivp, mb[i], l2, l3, &icv, + sa->sqh_len); + + /* success, setup crypto op */ + if (rc >= 0) { + outb_pkt_xprepare(sa, sqc, &icv); + + iv[k] = (void *)ivs[k]; + + rc = outb_sync_crypto_proc_prepare(mb[i], sa, ivp,
&icv, + l2 + l3, rc, &buf[k], &vec[vec_idx], + iv[k], &aad[k], &digest[k]); + if (rc < 0) { + dr[i - k] = i; + rte_errno = -rc; + continue; + } + + vec_idx += rc; + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + /* process the packets */ + n = 0; + rte_security_process_cpu_crypto_bulk(ctx, rss, buf, iv, aad, digest, + status, k); + /* move failed process packets to dr */ + for (i = 0; i < k; i++) { + if (status[i]) + dr[n++] = i; + } + + if (n) + move_bad_mbufs(mb, dr, k, n); + + return post_process(ss, mb, k - n); +} + +uint16_t +esp_outb_tun_sync_crpyto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_outb_tun_sync_crypto_process(ss, mb, num, + esp_outb_sqh_process); +} + +uint16_t +esp_outb_tun_sync_crpyto_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_outb_tun_sync_crypto_process(ss, mb, num, + esp_outb_pkt_flag_process); +} + +uint16_t +esp_outb_trs_sync_crpyto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_outb_trs_sync_crypto_process(ss, mb, num, + esp_outb_sqh_process); +} + +uint16_t +esp_outb_trs_sync_crpyto_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_outb_trs_sync_crypto_process(ss, mb, num, + esp_outb_pkt_flag_process); +} + /* * process outbound packets for SA with ESN support, * for algorithms that require SQN.hibits to be implictly included @@ -410,8 +696,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], * In that case we have to move ICV bytes back to their proper place. 
*/ uint16_t -esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) +esp_outb_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) { uint32_t i, k, icv_len, *icv; struct rte_mbuf *ml; diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index 23d394b46..31ffbce2c 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -544,9 +544,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss, * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled */ -static uint16_t -pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) +uint16_t +esp_outb_pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) { uint32_t i, k; uint32_t dr[num]; @@ -599,12 +599,48 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa, case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): pf->prepare = esp_outb_tun_prepare; pf->process = (sa->sqh_len != 0) ? - esp_outb_sqh_process : pkt_flag_process; + esp_outb_sqh_process : esp_outb_pkt_flag_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): pf->prepare = esp_outb_trs_prepare; pf->process = (sa->sqh_len != 0) ? 
- esp_outb_sqh_process : pkt_flag_process; + esp_outb_sqh_process : esp_outb_pkt_flag_process; + break; + default: + rc = -ENOTSUP; + } + + return rc; +} + +static int +lksd_sync_crypto_pkt_func_select(const struct rte_ipsec_sa *sa, + struct rte_ipsec_sa_pkt_func *pf) +{ + int32_t rc; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + rc = 0; + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->process = esp_inb_tun_sync_crypto_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): + pf->process = esp_inb_trs_sync_crypto_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->process = (sa->sqh_len != 0) ? + esp_outb_tun_sync_crpyto_sqh_process : + esp_outb_tun_sync_crpyto_flag_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): + pf->process = (sa->sqh_len != 0) ? 
+ esp_outb_trs_sync_crpyto_sqh_process : + esp_outb_trs_sync_crpyto_flag_process; break; default: rc = -ENOTSUP; @@ -672,13 +708,16 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) - pf->process = pkt_flag_process; + pf->process = esp_outb_pkt_flag_process; else pf->process = inline_proto_outb_pkt_process; break; case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: pf->prepare = lksd_proto_prepare; - pf->process = pkt_flag_process; + pf->process = esp_outb_pkt_flag_process; + break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + rc = lksd_sync_crypto_pkt_func_select(sa, pf); break; default: rc = -ENOTSUP; diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h index 51e69ad05..02c7abc60 100644 --- a/lib/librte_ipsec/sa.h +++ b/lib/librte_ipsec/sa.h @@ -156,6 +156,14 @@ uint16_t inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +esp_inb_tun_sync_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_inb_trs_sync_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + /* outbound processing */ uint16_t @@ -170,6 +178,10 @@ uint16_t esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +esp_outb_pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + uint16_t inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); @@ -182,4 +194,21 @@ uint16_t inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +esp_outb_tun_sync_crpyto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_outb_tun_sync_crpyto_flag_process(const struct rte_ipsec_session *ss, 
+ struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_outb_trs_sync_crpyto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_outb_trs_sync_crpyto_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + + #endif /* _SA_H_ */ diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c index 82c765a33..eaa8c17b7 100644 --- a/lib/librte_ipsec/ses.c +++ b/lib/librte_ipsec/ses.c @@ -19,7 +19,9 @@ session_check(struct rte_ipsec_session *ss) return -EINVAL; if ((ss->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || ss->type == - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) && + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL || + ss->type == + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && ss->security.ctx == NULL) return -EINVAL; } From patchwork Fri Sep 6 13:13:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58871 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8C63F1F3CD; Fri, 6 Sep 2019 15:14:08 +0200 (CEST) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 57BB91F3A3 for ; Fri, 6 Sep 2019 15:13:50 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,473,1559545200"; d="scan'208";a="213140789" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2019 06:13:47 -0700 From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, 
akhil.goyal@nxp.com, Fan Zhang
Date: Fri, 6 Sep 2019 14:13:29 +0100
Message-Id: <20190906131330.40185-10-roy.fan.zhang@intel.com>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20190906131330.40185-1-roy.fan.zhang@intel.com>
References: <20190903154046.55992-1-roy.fan.zhang@intel.com> <20190906131330.40185-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH 09/10] examples/ipsec-secgw: add security cpu_crypto action support
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Since the ipsec library has added cpu_crypto security action type support,
this patch updates the ipsec-secgw sample application with the new action
type "cpu-crypto". The patch also includes a number of test scripts to
verify the correctness of the implementation.

Signed-off-by: Fan Zhang
---
 examples/ipsec-secgw/ipsec.c                        | 22 ++++++++++++++++++++++
 examples/ipsec-secgw/ipsec_process.c                |  7 ++++---
 examples/ipsec-secgw/sa.c                           | 13 +++++++++++--
 examples/ipsec-secgw/test/run_test.sh               | 10 ++++++++++
 .../test/trs_3descbc_sha1_cpu_crypto_defs.sh        |  5 +++++
 .../test/trs_aescbc_sha1_cpu_crypto_defs.sh         |  5 +++++
 .../test/trs_aesctr_sha1_cpu_crypto_defs.sh         |  5 +++++
 .../ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh  |  5 +++++
 .../test/trs_aesgcm_mb_cpu_crypto_defs.sh           |  7 +++++++
 .../test/tun_3descbc_sha1_cpu_crypto_defs.sh        |  5 +++++
 .../test/tun_aescbc_sha1_cpu_crypto_defs.sh         |  5 +++++
 .../test/tun_aesctr_sha1_cpu_crypto_defs.sh         |  5 +++++
 .../ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh  |  5 +++++
 .../test/tun_aesgcm_mb_cpu_crypto_defs.sh           |  7 +++++++
 14 files changed, 101 insertions(+), 5 deletions(-)
 create mode 100644 examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh
 create mode 100644 examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh
 create mode 100644 examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh
 create
mode 100644 examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index dc85adfe5..4c39a7de6 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -105,6 +106,26 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa) "SEC Session init failed: err: %d\n", ret); return -1; } + } else if (sa->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { + struct rte_security_ctx *ctx = + (struct rte_security_ctx *) + rte_cryptodev_get_sec_ctx( + ipsec_ctx->tbl[cdev_id_qp].id); + int32_t offset = sizeof(struct rte_esp_hdr) + + sa->iv_len; + + /* Set IPsec parameters in conf */ + sess_conf.cpucrypto.cipher_offset = offset; + + set_ipsec_conf(sa, &(sess_conf.ipsec)); + sa->security_ctx = ctx; + sa->sec_session = rte_security_session_create(ctx, + &sess_conf, ipsec_ctx->session_priv_pool); + if (sa->sec_session == NULL) { + RTE_LOG(ERR, IPSEC, + "SEC Session init failed: err: %d\n", ret); + return -1; + } } else { RTE_LOG(ERR, IPSEC, "Inline not supported\n"); return -1; @@ -473,6 +494,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx, sa->sec_session, pkts[i], NULL); continue; case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO: + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: RTE_ASSERT(sa->sec_session != NULL); priv->cop.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC; priv->cop.status = 
RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c index 868f1a28d..1932b631f 100644 --- a/examples/ipsec-secgw/ipsec_process.c +++ b/examples/ipsec-secgw/ipsec_process.c @@ -101,7 +101,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx, } ss->crypto.ses = sa->crypto_session; /* setup session action type */ - } else if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) { + } else if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL || + sa->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { if (sa->sec_session == NULL) { rc = create_lookaside_session(ctx, sa); if (rc != 0) @@ -227,8 +228,8 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) /* process packets inline */ else if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || - sa->type == - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL || + sa->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { satp = rte_ipsec_sa_type(ips->sa); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index c3cf3bd1f..ba773346f 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -570,6 +570,9 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL; else if (strcmp(tokens[ti], "no-offload") == 0) rule->type = RTE_SECURITY_ACTION_TYPE_NONE; + else if (strcmp(tokens[ti], "cpu-crypto") == 0) + rule->type = + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; else { APP_CHECK(0, status, "Invalid input \"%s\"", tokens[ti]); @@ -624,10 +627,13 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (status->status < 0) return; - if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0)) + if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE && rule->type != + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && + (portid_p == 0)) printf("Missing portid option, falling back to non-offload\n"); - if (!type_p || 
!portid_p) {
+	if (!type_p || (!portid_p && rule->type !=
+			RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) {
 		rule->type = RTE_SECURITY_ACTION_TYPE_NONE;
 		rule->portid = -1;
 	}
@@ -709,6 +715,9 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound)
 	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
 		printf("lookaside-protocol-offload ");
 		break;
+	case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+		printf("cpu-crypto-accelerated ");
+		break;
 	}
 	printf("\n");
 }
diff --git a/examples/ipsec-secgw/test/run_test.sh b/examples/ipsec-secgw/test/run_test.sh
index 8055a4c04..f322aa785 100755
--- a/examples/ipsec-secgw/test/run_test.sh
+++ b/examples/ipsec-secgw/test/run_test.sh
@@ -32,15 +32,21 @@ usage()
 }
 LINUX_TEST="tun_aescbc_sha1 \
+tun_aescbc_sha1_cpu_crypto \
 tun_aescbc_sha1_esn \
 tun_aescbc_sha1_esn_atom \
 tun_aesgcm \
+tun_aesgcm_cpu_crypto \
+tun_aesgcm_mb_cpu_crypto \
 tun_aesgcm_esn \
 tun_aesgcm_esn_atom \
 trs_aescbc_sha1 \
+trs_aescbc_sha1_cpu_crypto \
 trs_aescbc_sha1_esn \
 trs_aescbc_sha1_esn_atom \
 trs_aesgcm \
+trs_aesgcm_cpu_crypto \
+trs_aesgcm_mb_cpu_crypto \
 trs_aesgcm_esn \
 trs_aesgcm_esn_atom \
 tun_aescbc_sha1_old \
@@ -49,17 +55,21 @@ trs_aescbc_sha1_old \
 trs_aesgcm_old \
 tun_aesctr_sha1 \
 tun_aesctr_sha1_old \
+tun_aesctr_sha1_cpu_crypto \
 tun_aesctr_sha1_esn \
 tun_aesctr_sha1_esn_atom \
 trs_aesctr_sha1 \
+trs_aesctr_sha1_cpu_crypto \
 trs_aesctr_sha1_old \
 trs_aesctr_sha1_esn \
 trs_aesctr_sha1_esn_atom \
 tun_3descbc_sha1 \
+tun_3descbc_sha1_cpu_crypto \
 tun_3descbc_sha1_old \
 tun_3descbc_sha1_esn \
 tun_3descbc_sha1_esn_atom \
 trs_3descbc_sha1 \
+trs_3descbc_sha1_cpu_crypto \
 trs_3descbc_sha1_old \
 trs_3descbc_sha1_esn \
 trs_3descbc_sha1_esn_atom"
diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh
new file mode 100644
index 000000000..a864a8886
--- /dev/null
+++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh
@@ -0,0 +1,5 @@
+#! /bin/bash
+
+.
${DIR}/trs_3descbc_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..b515cd9f8 --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/trs_aescbc_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..745a2a02b --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/trs_aesctr_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh new file mode 100644 index 000000000..8917122da --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/trs_aesgcm_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh new file mode 100644 index 000000000..26943321f --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh @@ -0,0 +1,7 @@ +#! /bin/bash + +. ${DIR}/trs_aesgcm_defs.sh + +CRYPTO_DEV=${CRYPTO_DEV:-'--vdev="crypto_aesni_mb0"'} + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..747141f62 --- /dev/null +++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. 
${DIR}/tun_3descbc_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..56076fa50 --- /dev/null +++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/tun_aescbc_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..3af680533 --- /dev/null +++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/tun_aesctr_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh new file mode 100644 index 000000000..5bf1c0ae5 --- /dev/null +++ b/examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/tun_aesgcm_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh new file mode 100644 index 000000000..039b8095e --- /dev/null +++ b/examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh @@ -0,0 +1,7 @@ +#! /bin/bash + +. 
${DIR}/tun_aesgcm_defs.sh + +CRYPTO_DEV=${CRYPTO_DEV:-'--vdev="crypto_aesni_mb0"'} + +SGW_CFG_XPRM='type cpu-crypto' From patchwork Fri Sep 6 13:13:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 58872 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1F6BE1F3D9; Fri, 6 Sep 2019 15:14:11 +0200 (CEST) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 2048D1F3A3 for ; Fri, 6 Sep 2019 15:13:51 +0200 (CEST) X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 06 Sep 2019 06:13:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.64,473,1559545200"; d="scan'208";a="213140808" Received: from silpixa00398673.ir.intel.com (HELO silpixa00398673.ger.corp.intel.com) ([10.237.223.136]) by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2019 06:13:49 -0700 From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Fri, 6 Sep 2019 14:13:30 +0100 Message-Id: <20190906131330.40185-11-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20190906131330.40185-1-roy.fan.zhang@intel.com> References: <20190903154046.55992-1-roy.fan.zhang@intel.com> <20190906131330.40185-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH 10/10] doc: update security cpu process description X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch updates programmer's guide and release note for 
the newly added security CPU crypto process description.

Signed-off-by: Fan Zhang
---
 doc/guides/cryptodevs/aesni_gcm.rst    |   6 ++
 doc/guides/cryptodevs/aesni_mb.rst     |   7 +++
 doc/guides/prog_guide/rte_security.rst | 112 ++++++++++++++++++++++++++++++++-
 doc/guides/rel_notes/release_19_11.rst |   7 +++
 4 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst
index 9a8bc9323..31297fabd 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -9,6 +9,12 @@ The AES-NI GCM PMD (**librte_pmd_aesni_gcm**) provides poll mode crypto driver
 support for utilizing Intel multi buffer library (see AES-NI Multi-buffer PMD
 documentation to learn more about it, including installation).

+The AES-NI GCM PMD also supports rte_security with security session create
+and the ``rte_security_process_cpu_crypto_bulk`` function call to process
+symmetric crypto synchronously with all algorithms specified below. In this
+way it supports scatter-gather buffers (the ``num`` field of
+``rte_security_vec`` can be greater than ``1``). Please refer to the
+``rte_security`` programmer's guide for more detail.
+
 Features
 --------
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 1eff2b073..1a3ddd850 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -12,6 +12,13 @@ support for utilizing Intel multi buffer library, see the white paper

 The AES-NI MB PMD has current only been tested on Fedora 21 64-bit with gcc.

+The AES-NI MB PMD also supports rte_security with security session create
+and the ``rte_security_process_cpu_crypto_bulk`` function call to process
+symmetric crypto synchronously with all algorithms specified below. However,
+it does not support scatter-gather buffers, so the ``num`` value in
+``rte_security_vec`` can only be ``1``. Please refer to the ``rte_security``
+programmer's guide for more detail.
+
 Features
 --------
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index 7d0734a37..861619202 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -296,6 +296,56 @@ Just like IPsec, in case of PDCP also header addition/deletion, cipher/
 de-cipher, integrity protection/verification is done based on the action
 type chosen.

+
+Synchronous CPU Crypto
+~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+This action type allows a burst of symmetric crypto workloads using the same
+algorithm, key, and direction to be processed synchronously by CPU cycles.
+
+The packet is sent to the crypto device for symmetric crypto
+processing. The device will encrypt or decrypt the buffer based on the key(s)
+and algorithm(s) specified and preprocessed in the security session. Unlike
+the inline or lookaside modes, when the function exits the user can expect
+that the buffers have either been processed successfully, or have an error
+number assigned to the corresponding index of the status array.
+
+E.g. in case of IPsec, the application will use CPU cycles to process both
+stack and crypto workload synchronously.
+
+.. code-block:: console
+
+         Egress Data Path
+                |
+       +--------|--------+
+       |  egress IPsec   |
+       |        |        |
+       | +------V------+ |
+       | | SADB lookup | |
+       | +------|------+ |
+       | +------V------+ |
+       | |    Desc     | |
+       | +------|------+ |
+       +--------V--------+
+                |
+       +--------V--------+
+       |    L2 Stack     |
+       +-----------------+
+       |                 |
+       |   Synchronous   | <------ Using CPU instructions
+       |  Crypto Process |
+       |                 |
+       +--------V--------+
+       |  L2 Stack Post  | <------ Add tunnel, ESP header etc.
+       +--------|--------+
+                |
+       +--------|--------+
+       |       NIC       |
+       +--------|--------+
+                V
+
+
 Device Features and Capabilities
 ---------------------------------
@@ -491,6 +541,7 @@ Security Session configuration structure is defined as ``rte_security_session_co
         struct rte_security_ipsec_xform ipsec;
         struct rte_security_macsec_xform macsec;
         struct rte_security_pdcp_xform pdcp;
+        struct rte_security_cpu_crypto_xform cpu_crypto;
     };
     /**< Configuration parameters for security session */
     struct rte_crypto_sym_xform *crypto_xform;
@@ -515,9 +566,12 @@ Offload.
     RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
     /**< All security protocol processing is performed inline during
      * transmission */
-    RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+    RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
     /**< All security protocol processing including crypto is performed
      * on a lookaside accelerator */
+    RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+    /**< Crypto processing for security protocol is processed by CPU
+     * synchronously */
 };

The ``rte_security_session_protocol`` is defined as
@@ -587,6 +641,10 @@ PDCP related configuration parameters are defined in ``rte_security_pdcp_xform``
         uint32_t hfn_threshold;
     };

+For the CPU Crypto processing action, the application should attach the
+initialized ``xform`` to the security session configuration to specify the
+algorithm, key, direction, and other necessary fields required to perform the
+crypto operation.
+
 Security API
 ~~~~~~~~~~~~
@@ -650,3 +708,55 @@ it is only valid to have a single flow to map to that security session.
         +-------+          +--------+      +-----+
         |  Eth  | -> ... ->|  ESP   |  ->  | END |
         +-------+          +--------+      +-----+
+
+
+Process bulk crypto workload using CPU instructions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The inline and lookaside modes depend on external HW to complete the
+workload. Alternatively, the user has the option to use rte_security to
+process symmetric crypto synchronously with CPU instructions.
+
+When creating the security session the user needs to fill the
+``rte_security_session_conf`` parameter with the ``action_type`` field set to
+``RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO``, and point ``crypto_xform`` to a
+properly initialized cryptodev xform. The user then passes the
+``rte_security_session_conf`` instance to ``rte_security_session_create()``
+along with the security context pointer belonging to a certain SW crypto
+device. The crypto device may or may not support this action type or the
+algorithm / key sizes specified in the ``crypto_xform``, but when everything
+is supported the function will return the created security session.
+
+The user can then use this session to process the crypto workload
+synchronously. Instead of using mbuf ``next`` pointers, synchronous CPU crypto
+processing uses a special structure ``rte_security_vec`` to describe
+scatter-gather buffers.
+
+.. code-block:: c
+
+    struct rte_security_vec {
+        struct iovec *vec;
+        uint32_t num;
+    };
+
+The structure ``rte_security_vec`` stores scatter-gather buffer pointers,
+where ``vec`` points to an array of buffer segments and ``num`` indicates the
+number of segments.
+
+Please note that not all crypto devices support scatter-gather buffer
+processing; please check the ``cryptodev`` guide for more details.
+
+The API of the synchronous CPU crypto process is
+
+.. code-block:: c
+
+    void
+    rte_security_process_cpu_crypto_bulk(struct rte_security_ctx *instance,
+            struct rte_security_session *sess,
+            struct rte_security_vec buf[], void *iv[], void *aad[],
+            void *digest[], int status[], uint32_t num);
+
+This function processes ``num`` ``rte_security_vec`` buffers using the content
+stored in the ``iv`` and ``aad`` arrays. The API only supports in-place
+operation, so on success the data in ``buf`` is overwritten with the encrypted
+or decrypted values; otherwise the error number is written to the
+corresponding index of the ``status`` array.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 8490d897c..6cd21704f 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -56,6 +56,13 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================

+* **Added a synchronous CPU crypto burst API to rte_security.**
+
+  A new API, ``rte_security_process_cpu_crypto_bulk``, is introduced in the
+  security library to process crypto workloads in bulk using CPU instructions.
+  The AESNI_MB and AESNI_GCM PMDs, as well as the unit tests and the
+  ipsec-secgw sample application, are updated to support this feature.
+
 Removed Items
 -------------