From patchwork Mon Oct 7 16:28:41 2019
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 60634
From: Fan Zhang <roy.fan.zhang@intel.com>
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com
Date: Mon, 7 Oct 2019 17:28:41 +0100
Message-Id: <20191007162850.60552-2-roy.fan.zhang@intel.com>
In-Reply-To: <20191007162850.60552-1-roy.fan.zhang@intel.com>
References: <20190906131330.40185-1-roy.fan.zhang@intel.com> <20191007162850.60552-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 01/10] security: introduce CPU Crypto action type and API

This patch introduces a new RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO action type to the security library. The type represents performing the crypto operation with CPU cycles.
The patch also includes a new API to process crypto operations in bulk and the function pointers for PMDs. Signed-off-by: Fan Zhang --- lib/librte_security/rte_security.c | 11 ++++++ lib/librte_security/rte_security.h | 53 +++++++++++++++++++++++++++- lib/librte_security/rte_security_driver.h | 22 ++++++++++++ lib/librte_security/rte_security_version.map | 1 + 4 files changed, 86 insertions(+), 1 deletion(-) diff --git a/lib/librte_security/rte_security.c b/lib/librte_security/rte_security.c index bc81ce15d..cdd1ee6af 100644 --- a/lib/librte_security/rte_security.c +++ b/lib/librte_security/rte_security.c @@ -141,3 +141,14 @@ rte_security_capability_get(struct rte_security_ctx *instance, return NULL; } + +int +rte_security_process_cpu_crypto_bulk(struct rte_security_ctx *instance, + struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num) +{ + RTE_FUNC_PTR_OR_ERR_RET(*instance->ops->process_cpu_crypto_bulk, -1); + return instance->ops->process_cpu_crypto_bulk(sess, buf, iv, + aad, digest, status, num); +} diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h index aaafdfcd7..0caf5d697 100644 --- a/lib/librte_security/rte_security.h +++ b/lib/librte_security/rte_security.h @@ -18,6 +18,7 @@ extern "C" { #endif #include +#include #include #include @@ -289,6 +290,20 @@ struct rte_security_pdcp_xform { uint32_t hfn_ovrd; }; +struct rte_security_cpu_crypto_xform { + /** For cipher/authentication crypto operation the authentication may + * cover more content then the cipher. E.g., for IPSec ESP encryption + * with AES-CBC and SHA1-HMAC, the encryption happens after the ESP + * header but whole packet (apart from MAC header) is authenticated. + * The cipher_offset field is used to deduct the cipher data pointer + * from the buffer to be processed. 
+ * + * NOTE this parameter shall be ignored by AEAD algorithms, since it + * uses the same offset for cipher and authentication. + */ + int32_t cipher_offset; +}; + /** * Security session action type. */ @@ -303,10 +318,14 @@ enum rte_security_session_action_type { /**< All security protocol processing is performed inline during * transmission */ - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL + RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, /**< All security protocol processing including crypto is performed * on a lookaside accelerator */ + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO + /**< Crypto processing for security protocol is processed by CPU + * synchronously + */ }; /** Security session protocol definition */ @@ -332,6 +351,7 @@ struct rte_security_session_conf { struct rte_security_ipsec_xform ipsec; struct rte_security_macsec_xform macsec; struct rte_security_pdcp_xform pdcp; + struct rte_security_cpu_crypto_xform cpucrypto; }; /**< Configuration parameters for security session */ struct rte_crypto_sym_xform *crypto_xform; @@ -665,6 +685,37 @@ const struct rte_security_capability * rte_security_capability_get(struct rte_security_ctx *instance, struct rte_security_capability_idx *idx); +/** + * Security vector structure, contains pointer to vector array and the length + * of the array + */ +struct rte_security_vec { + struct iovec *vec; + uint32_t num; +}; + +/** + * Processing bulk crypto workload with CPU + * + * @param instance security instance. 
+ * @param sess security session + * @param buf array of buffer SGL vectors + * @param iv array of IV pointers + * @param aad array of AAD pointers + * @param digest array of digest pointers + * @param status array of status for the function to return + * @param num number of elements in each array + * @return + * - On success, 0 + * - On any failure, -1 + */ +__rte_experimental +int +rte_security_process_cpu_crypto_bulk(struct rte_security_ctx *instance, + struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + #ifdef __cplusplus } #endif diff --git a/lib/librte_security/rte_security_driver.h b/lib/librte_security/rte_security_driver.h index 1b561f852..fe940fffa 100644 --- a/lib/librte_security/rte_security_driver.h +++ b/lib/librte_security/rte_security_driver.h @@ -132,6 +132,26 @@ typedef int (*security_get_userdata_t)(void *device, typedef const struct rte_security_capability *(*security_capabilities_get_t)( void *device); +/** + * Process security operations in bulk using CPU accelerated method. + * + * @param sess Security session structure. + * @param buf Buffer to the vectors to be processed. + * @param iv IV pointers. + * @param aad AAD pointers. + * @param digest Digest pointers. + * @param status Array of status value. + * @param num Number of elements in each array. + * @return + * - On success, 0 + * - On any failure, -1 + */ + +typedef int (*security_process_cpu_crypto_bulk_t)( + struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + /** Security operations function pointer table */ struct rte_security_ops { security_session_create_t session_create; @@ -150,6 +170,8 @@ struct rte_security_ops { /**< Get userdata associated with session which processed the packet. */ security_capabilities_get_t capabilities_get; /**< Get security capabilities. 
*/ + security_process_cpu_crypto_bulk_t process_cpu_crypto_bulk; + /**< Process data in bulk. */ }; #ifdef __cplusplus diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map index 53267bf3c..2132e7a00 100644 --- a/lib/librte_security/rte_security_version.map +++ b/lib/librte_security/rte_security_version.map @@ -18,4 +18,5 @@ EXPERIMENTAL { rte_security_get_userdata; rte_security_session_stats_get; rte_security_session_update; + rte_security_process_cpu_crypto_bulk; };

From patchwork Mon Oct 7 16:28:42 2019
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 60635
From: Fan Zhang <roy.fan.zhang@intel.com>
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com
Date: Mon, 7 Oct 2019 17:28:42 +0100
Message-Id: <20191007162850.60552-3-roy.fan.zhang@intel.com>
In-Reply-To: <20191007162850.60552-1-roy.fan.zhang@intel.com>
References: <20190906131330.40185-1-roy.fan.zhang@intel.com>
<20191007162850.60552-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 02/10] crypto/aesni_gcm: add rte_security handler

This patch adds rte_security support to the AESNI-GCM PMD. The PMD now initializes a security context instance, creates/deletes PMD-specific security sessions, and processes crypto workloads in synchronous mode, with scatter-gather list buffers supported.

Signed-off-by: Fan Zhang --- drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 97 +++++++++++++++++++++++- drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 95 +++++++++++++++++++++++ drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 23 ++++++ drivers/crypto/aesni_gcm/meson.build | 2 +- 4 files changed, 215 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c index 1006a5c4d..2e91bf149 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -174,6 +175,56 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op) return sess; } +static __rte_always_inline int +process_gcm_security_sgl_buf(struct aesni_gcm_security_session *sess, + struct rte_security_vec *buf, uint8_t *iv, + uint8_t *aad, uint8_t *digest) +{ + struct aesni_gcm_session *session = &sess->sess; + uint8_t *tag; + uint32_t i; + + sess->init(&session->gdata_key, &sess->gdata_ctx, iv, aad, + (uint64_t)session->aad_length); + + for (i = 0; i < buf->num; i++) { + struct iovec *vec = &buf->vec[i]; + + sess->update(&session->gdata_key, &sess->gdata_ctx, + vec->iov_base, vec->iov_base, vec->iov_len); + } + + switch (session->op) { + case AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION: + if (session->req_digest_length
!= session->gen_digest_length) + tag = sess->temp_digest; + else + tag = digest; + + sess->finalize(&session->gdata_key, &sess->gdata_ctx, tag, + session->gen_digest_length); + + if (session->req_digest_length != session->gen_digest_length) + memcpy(digest, sess->temp_digest, + session->req_digest_length); + break; + + case AESNI_GCM_OP_AUTHENTICATED_DECRYPTION: + tag = sess->temp_digest; + + sess->finalize(&session->gdata_key, &sess->gdata_ctx, tag, + session->gen_digest_length); + + if (memcmp(tag, digest, session->req_digest_length) != 0) + return -1; + break; + default: + return -1; + } + + return 0; +} + /** * Process a crypto operation, calling * the GCM API from the multi buffer library. @@ -488,8 +539,10 @@ aesni_gcm_create(const char *name, { struct rte_cryptodev *dev; struct aesni_gcm_private *internals; + struct rte_security_ctx *sec_ctx; enum aesni_gcm_vector_mode vector_mode; MB_MGR *mb_mgr; + char sec_name[RTE_DEV_NAME_MAX_LEN]; /* Check CPU for support for AES instruction set */ if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) { @@ -524,7 +577,8 @@ aesni_gcm_create(const char *name, RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_CPU_AESNI | RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | + RTE_CRYPTODEV_FF_SECURITY; mb_mgr = alloc_mb_mgr(0); if (mb_mgr == NULL) @@ -587,6 +641,21 @@ aesni_gcm_create(const char *name, internals->max_nb_queue_pairs = init_params->max_nb_queue_pairs; + /* setup security operations */ + snprintf(sec_name, sizeof(sec_name) - 1, "aes_gcm_sec_%u", + dev->driver_id); + sec_ctx = rte_zmalloc_socket(sec_name, + sizeof(struct rte_security_ctx), + RTE_CACHE_LINE_SIZE, init_params->socket_id); + if (sec_ctx == NULL) { + AESNI_GCM_LOG(ERR, "memory allocation failed\n"); + goto error_exit; + } + + sec_ctx->device = (void *)dev; + sec_ctx->ops = rte_aesni_gcm_pmd_security_ops; + dev->security_ctx = sec_ctx; + #if IMB_VERSION_NUM >= IMB_VERSION(0, 50, 0) 
AESNI_GCM_LOG(INFO, "IPSec Multi-buffer library version used: %s\n", imb_get_version_str()); @@ -641,6 +710,8 @@ aesni_gcm_remove(struct rte_vdev_device *vdev) if (cryptodev == NULL) return -ENODEV; + rte_free(cryptodev->security_ctx); + internals = cryptodev->data->dev_private; free_mb_mgr(internals->mb_mgr); @@ -648,6 +719,30 @@ aesni_gcm_remove(struct rte_vdev_device *vdev) return rte_cryptodev_pmd_destroy(cryptodev); } +int +aesni_gcm_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num) +{ + struct aesni_gcm_security_session *session = + get_sec_session_private_data(sess); + uint32_t i; + int errcnt = 0; + + if (unlikely(!session)) + return -num; + + for (i = 0; i < num; i++) { + status[i] = process_gcm_security_sgl_buf(session, &buf[i], + (uint8_t *)iv[i], (uint8_t *)aad[i], + (uint8_t *)digest[i]); + if (unlikely(status[i])) + errcnt -= 1; + } + + return errcnt; +} + static struct rte_vdev_driver aesni_gcm_pmd_drv = { .probe = aesni_gcm_probe, .remove = aesni_gcm_remove diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c index 2f66c7c58..cc71dbd60 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c @@ -7,6 +7,7 @@ #include #include #include +#include #include "aesni_gcm_pmd_private.h" @@ -316,6 +317,85 @@ aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev, } } +static int +aesni_gcm_security_session_create(void *dev, + struct rte_security_session_conf *conf, + struct rte_security_session *sess, + struct rte_mempool *mempool) +{ + struct rte_cryptodev *cdev = dev; + struct aesni_gcm_private *internals = cdev->data->dev_private; + struct aesni_gcm_security_session *sess_priv; + int ret; + + if (!conf->crypto_xform) { + AESNI_GCM_LOG(ERR, "Invalid security session conf"); + return -EINVAL; + } + + if (conf->crypto_xform->type == 
RTE_CRYPTO_SYM_XFORM_AUTH) { + AESNI_GCM_LOG(ERR, "GMAC is not supported in security session"); + return -EINVAL; + } + + + if (rte_mempool_get(mempool, (void **)(&sess_priv))) { + AESNI_GCM_LOG(ERR, + "Couldn't get object from session mempool"); + return -ENOMEM; + } + + ret = aesni_gcm_set_session_parameters(internals->ops, + &sess_priv->sess, conf->crypto_xform); + if (ret != 0) { + AESNI_GCM_LOG(ERR, "Failed configure session parameters"); + + /* Return session to mempool */ + rte_mempool_put(mempool, (void *)sess_priv); + return ret; + } + + sess_priv->pre = internals->ops[sess_priv->sess.key].pre; + sess_priv->init = internals->ops[sess_priv->sess.key].init; + if (sess_priv->sess.op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) { + sess_priv->update = + internals->ops[sess_priv->sess.key].update_enc; + sess_priv->finalize = + internals->ops[sess_priv->sess.key].finalize_enc; + } else { + sess_priv->update = + internals->ops[sess_priv->sess.key].update_dec; + sess_priv->finalize = + internals->ops[sess_priv->sess.key].finalize_dec; + } + + sess->sess_private_data = sess_priv; + + return 0; +} + +static int +aesni_gcm_security_session_destroy(void *dev __rte_unused, + struct rte_security_session *sess) +{ + void *sess_priv = get_sec_session_private_data(sess); + + if (sess_priv) { + struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv); + + memset(sess, 0, sizeof(struct aesni_gcm_security_session)); + set_sec_session_private_data(sess, NULL); + rte_mempool_put(sess_mp, sess_priv); + } + return 0; +} + +static unsigned int +aesni_gcm_sec_session_get_size(__rte_unused void *device) +{ + return sizeof(struct aesni_gcm_security_session); +} + struct rte_cryptodev_ops aesni_gcm_pmd_ops = { .dev_configure = aesni_gcm_pmd_config, .dev_start = aesni_gcm_pmd_start, @@ -336,4 +416,19 @@ struct rte_cryptodev_ops aesni_gcm_pmd_ops = { .sym_session_clear = aesni_gcm_pmd_sym_session_clear }; +static struct rte_security_ops aesni_gcm_security_ops = { + .session_create = 
aesni_gcm_security_session_create, + .session_get_size = aesni_gcm_sec_session_get_size, + .session_update = NULL, + .session_stats_get = NULL, + .session_destroy = aesni_gcm_security_session_destroy, + .set_pkt_metadata = NULL, + .capabilities_get = NULL, + .process_cpu_crypto_bulk = + aesni_gcm_sec_crypto_process_bulk, +}; + struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops = &aesni_gcm_pmd_ops; + +struct rte_security_ops *rte_aesni_gcm_pmd_security_ops = + &aesni_gcm_security_ops; diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h index 56b29e013..ed3f6eb2e 100644 --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h @@ -114,5 +114,28 @@ aesni_gcm_set_session_parameters(const struct aesni_gcm_ops *ops, * Device specific operations function pointer structure */ extern struct rte_cryptodev_ops *rte_aesni_gcm_pmd_ops; +/** + * Security session structure. + */ +struct aesni_gcm_security_session { + /** Temp digest for decryption */ + uint8_t temp_digest[DIGEST_LENGTH_MAX]; + /** GCM operations */ + aesni_gcm_pre_t pre; + aesni_gcm_init_t init; + aesni_gcm_update_t update; + aesni_gcm_finalize_t finalize; + /** AESNI-GCM session */ + struct aesni_gcm_session sess; + /** AESNI-GCM context */ + struct gcm_context_data gdata_ctx; +}; + +extern int +aesni_gcm_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + +extern struct rte_security_ops *rte_aesni_gcm_pmd_security_ops; #endif /* _RTE_AESNI_GCM_PMD_PRIVATE_H_ */ diff --git a/drivers/crypto/aesni_gcm/meson.build b/drivers/crypto/aesni_gcm/meson.build index 3a6e332dc..f6e160bb3 100644 --- a/drivers/crypto/aesni_gcm/meson.build +++ b/drivers/crypto/aesni_gcm/meson.build @@ -22,4 +22,4 @@ endif allow_experimental_apis = true sources = files('aesni_gcm_pmd.c', 'aesni_gcm_pmd_ops.c') -deps += 
['bus_vdev'] +deps += ['bus_vdev', 'security']

From patchwork Mon Oct 7 16:28:43 2019
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 60636
From: Fan Zhang <roy.fan.zhang@intel.com>
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com
Date: Mon, 7 Oct 2019 17:28:43 +0100
Message-Id: <20191007162850.60552-4-roy.fan.zhang@intel.com>
In-Reply-To: <20191007162850.60552-1-roy.fan.zhang@intel.com>
References: <20190906131330.40185-1-roy.fan.zhang@intel.com> <20191007162850.60552-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 03/10] app/test: add security cpu crypto autotest

This patch adds a CPU crypto unit test for the AESNI-GCM PMD.
Signed-off-by: Fan Zhang --- app/test/Makefile | 1 + app/test/meson.build | 1 + app/test/test_security_cpu_crypto.c | 564 ++++++++++++++++++++++++++++++++++++ 3 files changed, 566 insertions(+) create mode 100644 app/test/test_security_cpu_crypto.c diff --git a/app/test/Makefile b/app/test/Makefile index df7f77f44..0caff561c 100644 --- a/app/test/Makefile +++ b/app/test/Makefile @@ -197,6 +197,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_asym.c SRCS-$(CONFIG_RTE_LIBRTE_SECURITY) += test_cryptodev_security_pdcp.c +SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_security_cpu_crypto.c SRCS-$(CONFIG_RTE_LIBRTE_METRICS) += test_metrics.c diff --git a/app/test/meson.build b/app/test/meson.build index 2c23c6347..0d096c564 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -104,6 +104,7 @@ test_sources = files('commands.c', 'test_ring_perf.c', 'test_rwlock.c', 'test_sched.c', + 'test_security_cpu_crypto.c', 'test_service_cores.c', 'test_spinlock.c', 'test_stack.c', diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c new file mode 100644 index 000000000..d345922b2 --- /dev/null +++ b/app/test/test_security_cpu_crypto.c @@ -0,0 +1,564 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019 Intel Corporation + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include +#include +#include + +#include "test.h" +#include "test_cryptodev.h" +#include "test_cryptodev_aead_test_vectors.h" + +#define CPU_CRYPTO_TEST_MAX_AAD_LENGTH 16 +#define MAX_NB_SIGMENTS 4 + +enum buffer_assemble_option { + SGL_MAX_SEG, + SGL_ONE_SEG, +}; + +struct cpu_crypto_test_case { + struct { + uint8_t seg[MBUF_DATAPAYLOAD_SIZE]; + uint32_t seg_len; + } seg_buf[MAX_NB_SIGMENTS]; + uint8_t iv[MAXIMUM_IV_LENGTH]; + uint8_t aad[CPU_CRYPTO_TEST_MAX_AAD_LENGTH]; + uint8_t 
digest[DIGEST_BYTE_LENGTH_SHA512]; +} __rte_cache_aligned; + +struct cpu_crypto_test_obj { + struct iovec vec[MAX_NUM_OPS_INFLIGHT][MAX_NB_SIGMENTS]; + struct rte_security_vec sec_buf[MAX_NUM_OPS_INFLIGHT]; + void *iv[MAX_NUM_OPS_INFLIGHT]; + void *digest[MAX_NUM_OPS_INFLIGHT]; + void *aad[MAX_NUM_OPS_INFLIGHT]; + int status[MAX_NUM_OPS_INFLIGHT]; +}; + +struct cpu_crypto_testsuite_params { + struct rte_mempool *buf_pool; + struct rte_mempool *session_priv_mpool; + struct rte_security_ctx *ctx; +}; + +struct cpu_crypto_unittest_params { + struct rte_security_session *sess; + void *test_datas[MAX_NUM_OPS_INFLIGHT]; + struct cpu_crypto_test_obj test_obj; + uint32_t nb_bufs; +}; + +static struct cpu_crypto_testsuite_params testsuite_params = { NULL }; +static struct cpu_crypto_unittest_params unittest_params; + +static int gbl_driver_id; + +static int +testsuite_setup(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct rte_cryptodev_info info; + uint32_t i; + uint32_t nb_devs; + uint32_t sess_sz; + int ret; + + memset(ts_params, 0, sizeof(*ts_params)); + + ts_params->buf_pool = rte_mempool_lookup("CPU_CRYPTO_MBUFPOOL"); + if (ts_params->buf_pool == NULL) { + /* Not already created so create */ + ts_params->buf_pool = rte_pktmbuf_pool_create( + "CRYPTO_MBUFPOOL", + NUM_MBUFS, MBUF_CACHE_SIZE, 0, + sizeof(struct cpu_crypto_test_case), + rte_socket_id()); + if (ts_params->buf_pool == NULL) { + RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n"); + return TEST_FAILED; + } + } + + /* Create an AESNI MB device if required */ + if (gbl_driver_id == rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) { + nb_devs = rte_cryptodev_device_count_by_driver( + rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))); + if (nb_devs < 1) { + ret = rte_vdev_init( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL); + + TEST_ASSERT(ret == 0, + "Failed to create instance of" + " pmd : %s", + 
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)); + } + } + + /* Create an AESNI GCM device if required */ + if (gbl_driver_id == rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) { + nb_devs = rte_cryptodev_device_count_by_driver( + rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))); + if (nb_devs < 1) { + TEST_ASSERT_SUCCESS(rte_vdev_init( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL), + "Failed to create instance of" + " pmd : %s", + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + } + } + + nb_devs = rte_cryptodev_count(); + if (nb_devs < 1) { + RTE_LOG(ERR, USER1, "No crypto devices found?\n"); + return TEST_FAILED; + } + + /* Get security context */ + for (i = 0; i < nb_devs; i++) { + rte_cryptodev_info_get(i, &info); + if (info.driver_id != gbl_driver_id) + continue; + + ts_params->ctx = rte_cryptodev_get_sec_ctx(i); + if (!ts_params->ctx) { + RTE_LOG(ERR, USER1, "Rte_security is not supported\n"); + return TEST_FAILED; + } + } + + sess_sz = rte_security_session_get_size(ts_params->ctx); + ts_params->session_priv_mpool = rte_mempool_create( + "cpu_crypto_test_sess_mp", 2, sess_sz, 0, 0, + NULL, NULL, NULL, NULL, + SOCKET_ID_ANY, 0); + if (!ts_params->session_priv_mpool) { + RTE_LOG(ERR, USER1, "Not enough memory\n"); + return TEST_FAILED; + } + + return TEST_SUCCESS; +} + +static void +testsuite_teardown(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + + if (ts_params->buf_pool) + rte_mempool_free(ts_params->buf_pool); + + if (ts_params->session_priv_mpool) + rte_mempool_free(ts_params->session_priv_mpool); +} + +static int +ut_setup(void) +{ + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + + memset(ut_params, 0, sizeof(*ut_params)); + return TEST_SUCCESS; +} + +static void +ut_teardown(void) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + + if (ut_params->sess) + 
rte_security_session_destroy(ts_params->ctx, ut_params->sess); + + if (ut_params->nb_bufs) { + uint32_t i; + + for (i = 0; i < ut_params->nb_bufs; i++) + memset(ut_params->test_datas[i], 0, + sizeof(struct cpu_crypto_test_case)); + + rte_mempool_put_bulk(ts_params->buf_pool, ut_params->test_datas, + ut_params->nb_bufs); + } +} + +static int +allocate_buf(uint32_t n) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + int ret; + + ret = rte_mempool_get_bulk(ts_params->buf_pool, ut_params->test_datas, + n); + + if (ret == 0) + ut_params->nb_bufs = n; + + return ret; +} + +static int +check_status(struct cpu_crypto_test_obj *obj, uint32_t n) +{ + uint32_t i; + + for (i = 0; i < n; i++) + if (obj->status[i] < 0) + return -1; + + return 0; +} + +static struct rte_security_session * +create_aead_session(struct rte_security_ctx *ctx, + struct rte_mempool *sess_mp, + enum rte_crypto_aead_operation op, + const struct aead_test_data *test_data, + uint32_t is_unit_test) +{ + struct rte_security_session_conf sess_conf = {0}; + struct rte_crypto_sym_xform xform = {0}; + + if (is_unit_test) + debug_hexdump(stdout, "key:", test_data->key.data, + test_data->key.len); + + /* Setup AEAD Parameters */ + xform.type = RTE_CRYPTO_SYM_XFORM_AEAD; + xform.next = NULL; + xform.aead.algo = test_data->algo; + xform.aead.op = op; + xform.aead.key.data = test_data->key.data; + xform.aead.key.length = test_data->key.len; + xform.aead.iv.offset = 0; + xform.aead.iv.length = test_data->iv.len; + xform.aead.digest_length = test_data->auth_tag.len; + xform.aead.aad_length = test_data->aad.len; + + sess_conf.action_type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; + sess_conf.crypto_xform = &xform; + + return rte_security_session_create(ctx, &sess_conf, sess_mp); +} + +static inline int +assemble_aead_buf(struct cpu_crypto_test_case *data, + struct cpu_crypto_test_obj *obj, + uint32_t obj_idx, + enum 
rte_crypto_aead_operation op, + const struct aead_test_data *test_data, + enum buffer_assemble_option sgl_option, + uint32_t is_unit_test) +{ + const uint8_t *src; + uint32_t src_len; + uint32_t seg_idx; + uint32_t bytes_per_seg; + uint32_t left; + + if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { + src = test_data->plaintext.data; + src_len = test_data->plaintext.len; + if (is_unit_test) + debug_hexdump(stdout, "plaintext:", src, src_len); + } else { + src = test_data->ciphertext.data; + src_len = test_data->ciphertext.len; + memcpy(data->digest, test_data->auth_tag.data, + test_data->auth_tag.len); + if (is_unit_test) { + debug_hexdump(stdout, "ciphertext:", src, src_len); + debug_hexdump(stdout, "digest:", + test_data->auth_tag.data, + test_data->auth_tag.len); + } + } + + if (src_len > MBUF_DATAPAYLOAD_SIZE) + return -ENOMEM; + + switch (sgl_option) { + case SGL_MAX_SEG: + seg_idx = 0; + bytes_per_seg = src_len / MAX_NB_SIGMENTS + 1; + left = src_len; + + if (bytes_per_seg > (MBUF_DATAPAYLOAD_SIZE / MAX_NB_SIGMENTS)) + return -ENOMEM; + + while (left) { + uint32_t cp_len = RTE_MIN(left, bytes_per_seg); + memcpy(data->seg_buf[seg_idx].seg, src, cp_len); + data->seg_buf[seg_idx].seg_len = cp_len; + obj->vec[obj_idx][seg_idx].iov_base = + (void *)data->seg_buf[seg_idx].seg; + obj->vec[obj_idx][seg_idx].iov_len = cp_len; + src += cp_len; + left -= cp_len; + seg_idx++; + } + + if (left) + return -ENOMEM; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = seg_idx; + + break; + case SGL_ONE_SEG: + memcpy(data->seg_buf[0].seg, src, src_len); + data->seg_buf[0].seg_len = src_len; + obj->vec[obj_idx][0].iov_base = + (void *)data->seg_buf[0].seg; + obj->vec[obj_idx][0].iov_len = src_len; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = 1; + break; + default: + return -1; + } + + if (test_data->algo == RTE_CRYPTO_AEAD_AES_CCM) { + memcpy(data->iv + 1, test_data->iv.data, test_data->iv.len); + memcpy(data->aad + 18, 
test_data->aad.data, test_data->aad.len); + } else { + memcpy(data->iv, test_data->iv.data, test_data->iv.len); + memcpy(data->aad, test_data->aad.data, test_data->aad.len); + } + + if (is_unit_test) { + debug_hexdump(stdout, "iv:", test_data->iv.data, + test_data->iv.len); + debug_hexdump(stdout, "aad:", test_data->aad.data, + test_data->aad.len); + } + + obj->iv[obj_idx] = (void *)data->iv; + obj->digest[obj_idx] = (void *)data->digest; + obj->aad[obj_idx] = (void *)data->aad; + + return 0; +} + +#define CPU_CRYPTO_ERR_EXP_CT "expect ciphertext:" +#define CPU_CRYPTO_ERR_GEN_CT "gen ciphertext:" +#define CPU_CRYPTO_ERR_EXP_PT "expect plaintext:" +#define CPU_CRYPTO_ERR_GEN_PT "gen plaintext:" + +static int +check_aead_result(struct cpu_crypto_test_case *tcase, + enum rte_crypto_aead_operation op, + const struct aead_test_data *tdata) +{ + const char *err_msg1, *err_msg2; + const uint8_t *src_pt_ct; + const uint8_t *tmp_src; + uint32_t src_len; + uint32_t left; + uint32_t i = 0; + int ret; + + if (op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { + err_msg1 = CPU_CRYPTO_ERR_EXP_CT; + err_msg2 = CPU_CRYPTO_ERR_GEN_CT; + + src_pt_ct = tdata->ciphertext.data; + src_len = tdata->ciphertext.len; + + ret = memcmp(tcase->digest, tdata->auth_tag.data, + tdata->auth_tag.len); + if (ret != 0) { + debug_hexdump(stdout, "expect digest:", + tdata->auth_tag.data, + tdata->auth_tag.len); + debug_hexdump(stdout, "gen digest:", + tcase->digest, + tdata->auth_tag.len); + return -1; + } + } else { + src_pt_ct = tdata->plaintext.data; + src_len = tdata->plaintext.len; + err_msg1 = CPU_CRYPTO_ERR_EXP_PT; + err_msg2 = CPU_CRYPTO_ERR_GEN_PT; + } + + tmp_src = src_pt_ct; + left = src_len; + + while (left && i < MAX_NB_SIGMENTS) { + ret = memcmp(tcase->seg_buf[i].seg, tmp_src, + tcase->seg_buf[i].seg_len); + if (ret != 0) + goto sgl_err_dump; + tmp_src += tcase->seg_buf[i].seg_len; + left -= tcase->seg_buf[i].seg_len; + i++; + } + + if (left) { + ret = -ENOMEM; + goto sgl_err_dump; + } + + return 0; + 
+sgl_err_dump: + left = src_len; + i = 0; + + debug_hexdump(stdout, err_msg1, + tdata->ciphertext.data, + tdata->ciphertext.len); + + while (left && i < MAX_NB_SIGMENTS) { + debug_hexdump(stdout, err_msg2, + tcase->seg_buf[i].seg, + tcase->seg_buf[i].seg_len); + left -= tcase->seg_buf[i].seg_len; + i++; + } + return ret; +} + +static inline void +run_test(struct rte_security_ctx *ctx, struct rte_security_session *sess, + struct cpu_crypto_test_obj *obj, uint32_t n) +{ + rte_security_process_cpu_crypto_bulk(ctx, sess, obj->sec_buf, + obj->iv, obj->aad, obj->digest, obj->status, n); +} + +static int +cpu_crypto_test_aead(const struct aead_test_data *tdata, + enum rte_crypto_aead_operation dir, + enum buffer_assemble_option sgl_option) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + int ret; + + ut_params->sess = create_aead_session(ts_params->ctx, + ts_params->session_priv_mpool, + dir, + tdata, + 1); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(1); + if (ret) + return ret; + + tcase = ut_params->test_datas[0]; + ret = assemble_aead_buf(tcase, obj, 0, dir, tdata, sgl_option, 1); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + + run_test(ts_params->ctx, ut_params->sess, obj, 1); + + ret = check_status(obj, 1); + if (ret < 0) + return ret; + + ret = check_aead_result(tcase, dir, tdata); + if (ret < 0) + return ret; + + return 0; +} + +/* test-vector/sgl-option */ +#define all_gcm_unit_test_cases(type) \ + TEST_EXPAND(gcm_test_case_1, type) \ + TEST_EXPAND(gcm_test_case_2, type) \ + TEST_EXPAND(gcm_test_case_3, type) \ + TEST_EXPAND(gcm_test_case_4, type) \ + TEST_EXPAND(gcm_test_case_5, type) \ + TEST_EXPAND(gcm_test_case_6, type) \ + TEST_EXPAND(gcm_test_case_7, type) \ + TEST_EXPAND(gcm_test_case_8, type) \ + 
TEST_EXPAND(gcm_test_case_192_1, type) \ + TEST_EXPAND(gcm_test_case_192_2, type) \ + TEST_EXPAND(gcm_test_case_192_3, type) \ + TEST_EXPAND(gcm_test_case_192_4, type) \ + TEST_EXPAND(gcm_test_case_192_5, type) \ + TEST_EXPAND(gcm_test_case_192_6, type) \ + TEST_EXPAND(gcm_test_case_192_7, type) \ + TEST_EXPAND(gcm_test_case_256_1, type) \ + TEST_EXPAND(gcm_test_case_256_2, type) \ + TEST_EXPAND(gcm_test_case_256_3, type) \ + TEST_EXPAND(gcm_test_case_256_4, type) \ + TEST_EXPAND(gcm_test_case_256_5, type) \ + TEST_EXPAND(gcm_test_case_256_6, type) \ + TEST_EXPAND(gcm_test_case_256_7, type) + + +#define TEST_EXPAND(t, o) \ +static int \ +cpu_crypto_aead_enc_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_ENCRYPT, o); \ +} \ +static int \ +cpu_crypto_aead_dec_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_aead(&t, RTE_CRYPTO_AEAD_OP_DECRYPT, o); \ +} \ + +all_gcm_unit_test_cases(SGL_ONE_SEG) +all_gcm_unit_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesgcm_testsuite = { + .suite_name = "Security CPU Crypto AESNI-GCM Unit Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_enc_test_##t##_##o), \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_dec_test_##t##_##o), \ + + all_gcm_unit_test_cases(SGL_ONE_SEG) + all_gcm_unit_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_gcm(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + + return unit_test_suite_runner(&security_cpu_crypto_aesgcm_testsuite); +} + +REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, + test_security_cpu_crypto_aesni_gcm); From patchwork Mon Oct 7 16:28:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 
1.0
From: Fan Zhang
Date: Mon, 7 Oct 2019 17:28:44 +0100
Message-Id: <20191007162850.60552-5-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 04/10] app/test: add security cpu crypto perftest

Since the crypto perf application does not support rte_security, this patch adds a simple GCM CPU crypto performance test to the crypto unit test application.
The test includes different key and data sizes test with single buffer and SGL buffer test items and will display the throughput as well as cycle count performance information. Signed-off-by: Fan Zhang --- app/test/test_security_cpu_crypto.c | 201 ++++++++++++++++++++++++++++++++++++ 1 file changed, 201 insertions(+) diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c index d345922b2..ca9a8dae6 100644 --- a/app/test/test_security_cpu_crypto.c +++ b/app/test/test_security_cpu_crypto.c @@ -23,6 +23,7 @@ #define CPU_CRYPTO_TEST_MAX_AAD_LENGTH 16 #define MAX_NB_SIGMENTS 4 +#define CACHE_WARM_ITER 2048 enum buffer_assemble_option { SGL_MAX_SEG, @@ -560,5 +561,205 @@ test_security_cpu_crypto_aesni_gcm(void) return unit_test_suite_runner(&security_cpu_crypto_aesgcm_testsuite); } + +static inline void +gen_rand(uint8_t *data, uint32_t len) +{ + uint32_t i; + + for (i = 0; i < len; i++) + data[i] = (uint8_t)rte_rand(); +} + +static inline void +switch_aead_enc_to_dec(struct aead_test_data *tdata, + struct cpu_crypto_test_case *tcase, + enum buffer_assemble_option sgl_option) +{ + uint32_t i; + uint8_t *dst = tdata->ciphertext.data; + + switch (sgl_option) { + case SGL_ONE_SEG: + memcpy(dst, tcase->seg_buf[0].seg, tcase->seg_buf[0].seg_len); + tdata->ciphertext.len = tcase->seg_buf[0].seg_len; + break; + case SGL_MAX_SEG: + tdata->ciphertext.len = 0; + for (i = 0; i < MAX_NB_SIGMENTS; i++) { + memcpy(dst, tcase->seg_buf[i].seg, + tcase->seg_buf[i].seg_len); + tdata->ciphertext.len += tcase->seg_buf[i].seg_len; + } + break; + } + + memcpy(tdata->auth_tag.data, tcase->digest, tdata->auth_tag.len); +} + +static int +cpu_crypto_test_aead_perf(enum buffer_assemble_option sgl_option, + uint32_t key_sz) +{ + struct aead_test_data tdata = {0}; + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct 
cpu_crypto_test_case *tcase; + uint64_t hz = rte_get_tsc_hz(), time_start, time_now; + double rate, cycles_per_buf; + uint32_t test_data_szs[] = {64, 128, 256, 512, 1024, 2048}; + uint32_t i, j; + uint8_t aad[16]; + int ret; + + tdata.key.len = key_sz; + gen_rand(tdata.key.data, tdata.key.len); + tdata.algo = RTE_CRYPTO_AEAD_AES_GCM; + tdata.aad.data = aad; + + ut_params->sess = create_aead_session(ts_params->ctx, + ts_params->session_priv_mpool, + RTE_CRYPTO_AEAD_OP_DECRYPT, + &tdata, + 0); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(MAX_NUM_OPS_INFLIGHT); + if (ret) + return ret; + + for (i = 0; i < RTE_DIM(test_data_szs); i++) { + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tdata.plaintext.len = test_data_szs[i]; + gen_rand(tdata.plaintext.data, + tdata.plaintext.len); + + tdata.aad.len = 12; + gen_rand(tdata.aad.data, tdata.aad.len); + + tdata.auth_tag.len = 16; + + tdata.iv.len = 16; + gen_rand(tdata.iv.data, tdata.iv.len); + + tcase = ut_params->test_datas[j]; + ret = assemble_aead_buf(tcase, obj, j, + RTE_CRYPTO_AEAD_OP_ENCRYPT, + &tdata, sgl_option, 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + /* warm up cache */ + for (j = 0; j < CACHE_WARM_ITER; j++) + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_start = rte_rdtsc(); + + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_now = rte_rdtsc(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("AES-GCM-%u(%4uB) Enc %03.3fMpps (%03.3fGbps) ", + key_sz * 8, test_data_szs[i], rate, + rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tcase = ut_params->test_datas[j]; + + switch_aead_enc_to_dec(&tdata, tcase, sgl_option); + ret = 
assemble_aead_buf(tcase, obj, j, + RTE_CRYPTO_AEAD_OP_DECRYPT, + &tdata, sgl_option, 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + time_start = rte_get_timer_cycles(); + + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_now = rte_get_timer_cycles(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("AES-GCM-%u(%4uB) Dec %03.3fMpps (%03.3fGbps) ", + key_sz * 8, test_data_szs[i], rate, + rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + } + + return 0; +} + +/* test-perfix/key-size/sgl-type */ +#define all_gcm_perf_test_cases(type) \ + TEST_EXPAND(_128, 16, type) \ + TEST_EXPAND(_192, 24, type) \ + TEST_EXPAND(_256, 32, type) + +#define TEST_EXPAND(a, b, c) \ +static int \ +cpu_crypto_gcm_perf##a##_##c(void) \ +{ \ + return cpu_crypto_test_aead_perf(c, b); \ +} \ + +all_gcm_perf_test_cases(SGL_ONE_SEG) +all_gcm_perf_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesgcm_perf_testsuite = { + .suite_name = "Security CPU Crypto AESNI-GCM Perf Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(a, b, c) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_gcm_perf##a##_##c), \ + + all_gcm_perf_test_cases(SGL_ONE_SEG) + all_gcm_perf_test_cases(SGL_MAX_SEG) +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_gcm_perf(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)); + + return unit_test_suite_runner( + &security_cpu_crypto_aesgcm_perf_testsuite); +} + REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, test_security_cpu_crypto_aesni_gcm); + 
+REGISTER_TEST_COMMAND(security_aesni_gcm_perftest, + test_security_cpu_crypto_aesni_gcm_perf);

From patchwork Mon Oct 7 16:28:45 2019
From: Fan Zhang
Date: Mon, 7 Oct 2019 17:28:45 +0100
Message-Id: <20191007162850.60552-6-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 05/10] crypto/aesni_mb: add rte_security handler

This patch adds rte_security support to the AESNI-MB PMD.
The PMD now initialize security context instance, create/delete PMD specific security sessions, and process crypto workloads in synchronous mode. Signed-off-by: Fan Zhang --- drivers/crypto/aesni_mb/meson.build | 2 +- drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 368 +++++++++++++++++++-- drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 92 +++++- drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 21 +- 4 files changed, 453 insertions(+), 30 deletions(-) diff --git a/drivers/crypto/aesni_mb/meson.build b/drivers/crypto/aesni_mb/meson.build index 3e1687416..e7b585168 100644 --- a/drivers/crypto/aesni_mb/meson.build +++ b/drivers/crypto/aesni_mb/meson.build @@ -23,4 +23,4 @@ endif sources = files('rte_aesni_mb_pmd.c', 'rte_aesni_mb_pmd_ops.c') allow_experimental_apis = true -deps += ['bus_vdev'] +deps += ['bus_vdev', 'security'] diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c index ce1144b95..a4cd518b7 100644 --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c @@ -8,6 +8,8 @@ #include #include #include +#include +#include #include #include #include @@ -19,6 +21,9 @@ #define HMAC_MAX_BLOCK_SIZE 128 static uint8_t cryptodev_driver_id; +static enum aesni_mb_vector_mode vector_mode; +/**< CPU vector instruction set mode */ + typedef void (*hash_one_block_t)(const void *data, void *digest); typedef void (*aes_keyexp_t)(const void *key, void *enc_exp_keys, void *dec_exp_keys); @@ -808,6 +813,164 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session, (UINT64_MAX - u_src + u_dst + 1); } +union sec_userdata_field { + int status; + struct { + uint16_t is_gen_digest; + uint16_t digest_len; + }; +}; + +struct sec_udata_digest_field { + uint32_t is_digest_gen; + uint32_t digest_len; +}; + +static inline int +set_mb_job_params_sec(JOB_AES_HMAC *job, struct aesni_mb_sec_session *sec_sess, + void *buf, uint32_t buf_len, void *iv, void *aad, void *digest, + int 
*status, uint8_t *digest_idx) +{ + struct aesni_mb_session *session = &sec_sess->sess; + uint32_t cipher_offset = sec_sess->cipher_offset; + union sec_userdata_field udata; + + if (unlikely(cipher_offset > buf_len)) + return -EINVAL; + + /* Set crypto operation */ + job->chain_order = session->chain_order; + + /* Set cipher parameters */ + job->cipher_direction = session->cipher.direction; + job->cipher_mode = session->cipher.mode; + + job->aes_key_len_in_bytes = session->cipher.key_length_in_bytes; + + /* Set authentication parameters */ + job->hash_alg = session->auth.algo; + job->iv = iv; + + switch (job->hash_alg) { + case AES_XCBC: + job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded; + job->u.XCBC._k2 = session->auth.xcbc.k2; + job->u.XCBC._k3 = session->auth.xcbc.k3; + + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + break; + + case AES_CCM: + job->u.CCM.aad = (uint8_t *)aad + 18; + job->u.CCM.aad_len_in_bytes = session->aead.aad_len; + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + job->iv++; + break; + + case AES_CMAC: + job->u.CMAC._key_expanded = session->auth.cmac.expkey; + job->u.CMAC._skey1 = session->auth.cmac.skey1; + job->u.CMAC._skey2 = session->auth.cmac.skey2; + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + break; + + case AES_GMAC: + if (session->cipher.mode == GCM) { + job->u.GCM.aad = aad; + job->u.GCM.aad_len_in_bytes = session->aead.aad_len; + } else { + /* For GMAC */ + job->u.GCM.aad = aad; + job->u.GCM.aad_len_in_bytes = buf_len; + job->cipher_mode = GCM; + } + job->aes_enc_key_expanded = &session->cipher.gcm_key; + job->aes_dec_key_expanded = &session->cipher.gcm_key; + break; + + default: + job->u.HMAC._hashed_auth_key_xor_ipad = 
+ session->auth.pads.inner; + job->u.HMAC._hashed_auth_key_xor_opad = + session->auth.pads.outer; + + if (job->cipher_mode == DES3) { + job->aes_enc_key_expanded = + session->cipher.exp_3des_keys.ks_ptr; + job->aes_dec_key_expanded = + session->cipher.exp_3des_keys.ks_ptr; + } else { + job->aes_enc_key_expanded = + session->cipher.expanded_aes_keys.encode; + job->aes_dec_key_expanded = + session->cipher.expanded_aes_keys.decode; + } + } + + /* Set digest output location */ + if (job->hash_alg != NULL_HASH && + session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) { + job->auth_tag_output = sec_sess->temp_digests[*digest_idx]; + *digest_idx = (*digest_idx + 1) % MAX_JOBS; + + udata.is_gen_digest = 0; + udata.digest_len = session->auth.req_digest_len; + } else { + udata.is_gen_digest = 1; + udata.digest_len = session->auth.req_digest_len; + + if (session->auth.req_digest_len != + session->auth.gen_digest_len) { + job->auth_tag_output = + sec_sess->temp_digests[*digest_idx]; + *digest_idx = (*digest_idx + 1) % MAX_JOBS; + } else + job->auth_tag_output = digest; + } + + /* A bit of hack here, since job structure only supports + * 2 user data fields and we need 4 params to be passed + * (status, direction, digest for verify, and length of + * digest), we set the status value as digest length + + * direction here temporarily to avoid creating longer + * buffer to store all 4 params. 
+ */ + *status = udata.status; + + /* + * Multi-buffer library current only support returning a truncated + * digest length as specified in the relevant IPsec RFCs + */ + + /* Set digest length */ + job->auth_tag_output_len_in_bytes = session->auth.gen_digest_len; + + /* Set IV parameters */ + job->iv_len_in_bytes = session->iv.length; + + /* Data Parameters */ + job->src = buf; + job->dst = (uint8_t *)buf + cipher_offset; + job->cipher_start_src_offset_in_bytes = cipher_offset; + job->msg_len_to_cipher_in_bytes = buf_len - cipher_offset; + job->hash_start_src_offset_in_bytes = 0; + job->msg_len_to_hash_in_bytes = buf_len; + + job->user_data = (void *)status; + job->user_data2 = digest; + + return 0; +} + /** * Process a crypto operation and complete a JOB_AES_HMAC job structure for * submission to the multi buffer library for processing. @@ -1100,6 +1263,35 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job) return op; } +static inline void +post_process_mb_sec_job(JOB_AES_HMAC *job) +{ + void *user_digest = job->user_data2; + int *status = job->user_data; + + switch (job->status) { + case STS_COMPLETED: + if (user_digest) { + union sec_userdata_field udata; + + udata.status = *status; + if (udata.is_gen_digest) { + *status = RTE_CRYPTO_OP_STATUS_SUCCESS; + memcpy(user_digest, job->auth_tag_output, + udata.digest_len); + } else { + *status = (memcmp(job->auth_tag_output, + user_digest, udata.digest_len) != 0) ? 
+ -1 : 0; + } + } else + *status = RTE_CRYPTO_OP_STATUS_SUCCESS; + break; + default: + *status = RTE_CRYPTO_OP_STATUS_ERROR; + } +} + /** * Process a completed JOB_AES_HMAC job and keep processing jobs until * get_completed_job return NULL @@ -1136,6 +1328,32 @@ handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job, return processed_jobs; } +static inline uint32_t +handle_completed_sec_jobs(JOB_AES_HMAC *job, MB_MGR *mb_mgr) +{ + uint32_t processed = 0; + + while (job != NULL) { + post_process_mb_sec_job(job); + job = IMB_GET_COMPLETED_JOB(mb_mgr); + processed++; + } + + return processed; +} + +static inline uint32_t +flush_mb_sec_mgr(MB_MGR *mb_mgr) +{ + JOB_AES_HMAC *job = IMB_FLUSH_JOB(mb_mgr); + uint32_t processed = 0; + + if (job) + processed = handle_completed_sec_jobs(job, mb_mgr); + + return processed; +} + static inline uint16_t flush_mb_mgr(struct aesni_mb_qp *qp, struct rte_crypto_op **ops, uint16_t nb_ops) @@ -1239,6 +1457,105 @@ aesni_mb_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, return processed_jobs; } +static MB_MGR * +alloc_init_mb_mgr(void) +{ + MB_MGR *mb_mgr = alloc_mb_mgr(0); + if (mb_mgr == NULL) + return NULL; + + switch (vector_mode) { + case RTE_AESNI_MB_SSE: + init_mb_mgr_sse(mb_mgr); + break; + case RTE_AESNI_MB_AVX: + init_mb_mgr_avx(mb_mgr); + break; + case RTE_AESNI_MB_AVX2: + init_mb_mgr_avx2(mb_mgr); + break; + case RTE_AESNI_MB_AVX512: + init_mb_mgr_avx512(mb_mgr); + break; + default: + AESNI_MB_LOG(ERR, "Unsupported vector mode %u\n", vector_mode); + free_mb_mgr(mb_mgr); + return NULL; + } + + return mb_mgr; +} + +static MB_MGR *sec_mb_mgrs[RTE_MAX_LCORE]; + +int +aesni_mb_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num) +{ + struct aesni_mb_sec_session *sec_sess = sess->sess_private_data; + JOB_AES_HMAC *job; + static MB_MGR *mb_mgr; + uint32_t lcore_id = rte_lcore_id(); + uint8_t 
digest_idx = sec_sess->digest_idx; + uint32_t i, processed = 0; + int ret = 0, errcnt = 0; + + if (unlikely(sec_mb_mgrs[lcore_id] == NULL)) { + sec_mb_mgrs[lcore_id] = alloc_init_mb_mgr(); + + if (sec_mb_mgrs[lcore_id] == NULL) { + for (i = 0; i < num; i++) + status[i] = -ENOMEM; + + return -num; + } + } + + mb_mgr = sec_mb_mgrs[lcore_id]; + + for (i = 0; i < num; i++) { + void *seg_buf = buf[i].vec[0].iov_base; + uint32_t buf_len = buf[i].vec[0].iov_len; + + job = IMB_GET_NEXT_JOB(mb_mgr); + if (unlikely(job == NULL)) { + processed += flush_mb_sec_mgr(mb_mgr); + + job = IMB_GET_NEXT_JOB(mb_mgr); + if (!job) { + errcnt -= 1; + status[i] = -ENOMEM; + } + } + + ret = set_mb_job_params_sec(job, sec_sess, seg_buf, buf_len, + iv[i], aad[i], digest[i], &status[i], + &digest_idx); + /* Submit job to multi-buffer for processing */ + if (ret) { + processed++; + status[i] = ret; + errcnt -= 1; + continue; + } + +#ifdef RTE_LIBRTE_PMD_AESNI_MB_DEBUG + job = IMB_SUBMIT_JOB(mb_mgr); +#else + job = IMB_SUBMIT_JOB_NOCHECK(mb_mgr); +#endif + + if (job) + processed += handle_completed_sec_jobs(job, mb_mgr); + } + + while (processed < num) + processed += flush_mb_sec_mgr(mb_mgr); + + return errcnt; +} + static int cryptodev_aesni_mb_remove(struct rte_vdev_device *vdev); static int @@ -1248,8 +1565,9 @@ cryptodev_aesni_mb_create(const char *name, { struct rte_cryptodev *dev; struct aesni_mb_private *internals; - enum aesni_mb_vector_mode vector_mode; + struct rte_security_ctx *sec_ctx; MB_MGR *mb_mgr; + char sec_name[RTE_DEV_NAME_MAX_LEN]; /* Check CPU for support for AES instruction set */ if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_AES)) { @@ -1283,35 +1601,14 @@ cryptodev_aesni_mb_create(const char *name, dev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | RTE_CRYPTODEV_FF_CPU_AESNI | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | + RTE_CRYPTODEV_FF_SECURITY; - mb_mgr = alloc_mb_mgr(0); + mb_mgr = 
alloc_init_mb_mgr(); if (mb_mgr == NULL) return -ENOMEM; - switch (vector_mode) { - case RTE_AESNI_MB_SSE: - dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_SSE; - init_mb_mgr_sse(mb_mgr); - break; - case RTE_AESNI_MB_AVX: - dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX; - init_mb_mgr_avx(mb_mgr); - break; - case RTE_AESNI_MB_AVX2: - dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX2; - init_mb_mgr_avx2(mb_mgr); - break; - case RTE_AESNI_MB_AVX512: - dev->feature_flags |= RTE_CRYPTODEV_FF_CPU_AVX512; - init_mb_mgr_avx512(mb_mgr); - break; - default: - AESNI_MB_LOG(ERR, "Unsupported vector mode %u\n", vector_mode); - goto error_exit; - } - /* Set vector instructions mode supported */ internals = dev->data->dev_private; @@ -1322,11 +1619,28 @@ cryptodev_aesni_mb_create(const char *name, AESNI_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s\n", imb_get_version_str()); + /* setup security operations */ + snprintf(sec_name, sizeof(sec_name) - 1, "aes_mb_sec_%u", + dev->driver_id); + sec_ctx = rte_zmalloc_socket(sec_name, + sizeof(struct rte_security_ctx), + RTE_CACHE_LINE_SIZE, init_params->socket_id); + if (sec_ctx == NULL) { + AESNI_MB_LOG(ERR, "memory allocation failed\n"); + goto error_exit; + } + + sec_ctx->device = (void *)dev; + sec_ctx->ops = rte_aesni_mb_pmd_security_ops; + dev->security_ctx = sec_ctx; + return 0; error_exit: if (mb_mgr) free_mb_mgr(mb_mgr); + if (sec_ctx) + rte_free(sec_ctx); rte_cryptodev_pmd_destroy(dev); @@ -1367,6 +1681,7 @@ cryptodev_aesni_mb_remove(struct rte_vdev_device *vdev) struct rte_cryptodev *cryptodev; struct aesni_mb_private *internals; const char *name; + uint32_t i; name = rte_vdev_device_name(vdev); if (name == NULL) @@ -1379,6 +1694,9 @@ cryptodev_aesni_mb_remove(struct rte_vdev_device *vdev) internals = cryptodev->data->dev_private; free_mb_mgr(internals->mb_mgr); + for (i = 0; i < RTE_MAX_LCORE; i++) + if (sec_mb_mgrs[i]) + free_mb_mgr(sec_mb_mgrs[i]); return rte_cryptodev_pmd_destroy(cryptodev); } diff --git 
a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c index 8d15b99d4..f47df2d57 100644 --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "rte_aesni_mb_pmd_private.h" @@ -732,7 +733,8 @@ aesni_mb_pmd_qp_count(struct rte_cryptodev *dev) static unsigned aesni_mb_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused) { - return sizeof(struct aesni_mb_session); + return RTE_ALIGN_CEIL(sizeof(struct aesni_mb_session), + RTE_CACHE_LINE_SIZE); } /** Configure a aesni multi-buffer session from a crypto xform chain */ @@ -810,4 +812,92 @@ struct rte_cryptodev_ops aesni_mb_pmd_ops = { .sym_session_clear = aesni_mb_pmd_sym_session_clear }; +/** Set session authentication parameters */ + +static int +aesni_mb_security_session_create(void *dev, + struct rte_security_session_conf *conf, + struct rte_security_session *sess, + struct rte_mempool *mempool) +{ + struct rte_cryptodev *cdev = dev; + struct aesni_mb_private *internals = cdev->data->dev_private; + struct aesni_mb_sec_session *sess_priv; + int ret; + + if (!conf->crypto_xform) { + AESNI_MB_LOG(ERR, "Invalid security session conf"); + return -EINVAL; + } + + if (conf->cpucrypto.cipher_offset < 0) { + AESNI_MB_LOG(ERR, "Invalid security session conf"); + return -EINVAL; + } + + if (rte_mempool_get(mempool, (void **)(&sess_priv))) { + AESNI_MB_LOG(ERR, + "Couldn't get object from session mempool"); + return -ENOMEM; + } + + sess_priv->cipher_offset = conf->cpucrypto.cipher_offset; + + ret = aesni_mb_set_session_parameters(internals->mb_mgr, + &sess_priv->sess, conf->crypto_xform); + if (ret != 0) { + AESNI_MB_LOG(ERR, "failed configure session parameters"); + + rte_mempool_put(mempool, sess_priv); + } + + sess->sess_private_data = (void *)sess_priv; + + return ret; +} + +static int +aesni_mb_security_session_destroy(void *dev __rte_unused, + struct 
rte_security_session *sess) +{ + struct aesni_mb_sec_session *sess_priv = + get_sec_session_private_data(sess); + + if (sess_priv) { + struct rte_mempool *sess_mp = rte_mempool_from_obj( + (void *)sess_priv); + + memset(sess, 0, sizeof(struct aesni_mb_sec_session)); + set_sec_session_private_data(sess, NULL); + + if (sess_mp == NULL) { + AESNI_MB_LOG(ERR, "failed fetch session mempool"); + return -EINVAL; + } + + rte_mempool_put(sess_mp, sess_priv); + } + + return 0; +} + +static unsigned int +aesni_mb_sec_session_get_size(__rte_unused void *device) +{ + return RTE_ALIGN_CEIL(sizeof(struct aesni_mb_sec_session), + RTE_CACHE_LINE_SIZE); +} + +static struct rte_security_ops aesni_mb_security_ops = { + .session_create = aesni_mb_security_session_create, + .session_get_size = aesni_mb_sec_session_get_size, + .session_update = NULL, + .session_stats_get = NULL, + .session_destroy = aesni_mb_security_session_destroy, + .set_pkt_metadata = NULL, + .capabilities_get = NULL, + .process_cpu_crypto_bulk = aesni_mb_sec_crypto_process_bulk, +}; + struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops; +struct rte_security_ops *rte_aesni_mb_pmd_security_ops = &aesni_mb_security_ops; diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h index b794d4bc1..64b58ca8e 100644 --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h @@ -176,7 +176,6 @@ struct aesni_mb_qp { */ } __rte_cache_aligned; -/** AES-NI multi-buffer private session structure */ struct aesni_mb_session { JOB_CHAIN_ORDER chain_order; struct { @@ -265,16 +264,32 @@ struct aesni_mb_session { /** AAD data length */ uint16_t aad_len; } aead; -} __rte_cache_aligned; +}; + +/** AES-NI multi-buffer private security session structure */ +struct aesni_mb_sec_session { + /**< Unique Queue Pair Name */ + struct aesni_mb_session sess; + uint8_t temp_digests[MAX_JOBS][DIGEST_LENGTH_MAX]; + uint16_t 
digest_idx; + uint32_t cipher_offset; + MB_MGR *mb_mgr; +}; extern int aesni_mb_set_session_parameters(const MB_MGR *mb_mgr, struct aesni_mb_session *sess, const struct rte_crypto_sym_xform *xform); +extern int +aesni_mb_sec_crypto_process_bulk(struct rte_security_session *sess, + struct rte_security_vec buf[], void *iv[], void *aad[], + void *digest[], int status[], uint32_t num); + /** device specific operations function pointer structure */ extern struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops; - +/** device specific operations function pointer structure for rte_security */ +extern struct rte_security_ops *rte_aesni_mb_pmd_security_ops; #endif /* _RTE_AESNI_MB_PMD_PRIVATE_H_ */

From patchwork Mon Oct 7 16:28:46 2019
From: Fan Zhang
Date: Mon, 7 Oct 2019 17:28:46 +0100
Message-Id: <20191007162850.60552-7-roy.fan.zhang@intel.com>
X-Mailer: git-send-email 2.14.5 In-Reply-To: <20191007162850.60552-1-roy.fan.zhang@intel.com> References: <20190906131330.40185-1-roy.fan.zhang@intel.com> <20191007162850.60552-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 06/10] app/test: add aesni_mb security cpu crypto autotest This patch adds a CPU crypto unit test for the AESNI_MB PMD. Signed-off-by: Fan Zhang --- app/test/test_security_cpu_crypto.c | 371 +++++++++++++++++++++++++++++++++++- 1 file changed, 369 insertions(+), 2 deletions(-) diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c index ca9a8dae6..a9853a0c0 100644 --- a/app/test/test_security_cpu_crypto.c +++ b/app/test/test_security_cpu_crypto.c @@ -19,12 +19,23 @@ #include "test.h" #include "test_cryptodev.h" +#include "test_cryptodev_blockcipher.h" +#include "test_cryptodev_aes_test_vectors.h" #include "test_cryptodev_aead_test_vectors.h" +#include "test_cryptodev_des_test_vectors.h" +#include "test_cryptodev_hash_test_vectors.h" #define CPU_CRYPTO_TEST_MAX_AAD_LENGTH 16 #define MAX_NB_SIGMENTS 4 #define CACHE_WARM_ITER 2048 +#define TOP_ENC BLOCKCIPHER_TEST_OP_ENCRYPT +#define TOP_DEC BLOCKCIPHER_TEST_OP_DECRYPT +#define TOP_AUTH_GEN BLOCKCIPHER_TEST_OP_AUTH_GEN +#define TOP_AUTH_VER BLOCKCIPHER_TEST_OP_AUTH_VERIFY +#define TOP_ENC_AUTH BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN +#define TOP_AUTH_DEC BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC + enum buffer_assemble_option { SGL_MAX_SEG, SGL_ONE_SEG, @@ -35,8 +46,8 @@ struct cpu_crypto_test_case { uint8_t seg[MBUF_DATAPAYLOAD_SIZE]; uint32_t seg_len; } seg_buf[MAX_NB_SIGMENTS]; - uint8_t iv[MAXIMUM_IV_LENGTH]; - uint8_t aad[CPU_CRYPTO_TEST_MAX_AAD_LENGTH]; + uint8_t iv[MAXIMUM_IV_LENGTH * 2]; + uint8_t aad[CPU_CRYPTO_TEST_MAX_AAD_LENGTH * 4]; uint8_t
digest[DIGEST_BYTE_LENGTH_SHA512]; } __rte_cache_aligned; @@ -516,6 +527,11 @@ cpu_crypto_test_aead(const struct aead_test_data *tdata, TEST_EXPAND(gcm_test_case_256_6, type) \ TEST_EXPAND(gcm_test_case_256_7, type) +/* test-vector/sgl-option */ +#define all_ccm_unit_test_cases \ + TEST_EXPAND(ccm_test_case_128_1, SGL_ONE_SEG) \ + TEST_EXPAND(ccm_test_case_128_2, SGL_ONE_SEG) \ + TEST_EXPAND(ccm_test_case_128_3, SGL_ONE_SEG) #define TEST_EXPAND(t, o) \ static int \ @@ -531,6 +547,7 @@ cpu_crypto_aead_dec_test_##t##_##o(void) \ all_gcm_unit_test_cases(SGL_ONE_SEG) all_gcm_unit_test_cases(SGL_MAX_SEG) +all_ccm_unit_test_cases #undef TEST_EXPAND static struct unit_test_suite security_cpu_crypto_aesgcm_testsuite = { @@ -758,8 +775,358 @@ test_security_cpu_crypto_aesni_gcm_perf(void) &security_cpu_crypto_aesgcm_perf_testsuite); } +static struct rte_security_session * +create_blockcipher_session(struct rte_security_ctx *ctx, + struct rte_mempool *sess_mp, + uint32_t op_mask, + const struct blockcipher_test_data *test_data, + uint32_t is_unit_test) +{ + struct rte_security_session_conf sess_conf = {0}; + struct rte_crypto_sym_xform xforms[2] = { {0} }; + struct rte_crypto_sym_xform *cipher_xform = NULL; + struct rte_crypto_sym_xform *auth_xform = NULL; + struct rte_crypto_sym_xform *xform; + + if (op_mask & BLOCKCIPHER_TEST_OP_CIPHER) { + cipher_xform = &xforms[0]; + cipher_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER; + + if (op_mask & TOP_ENC) + cipher_xform->cipher.op = + RTE_CRYPTO_CIPHER_OP_ENCRYPT; + else + cipher_xform->cipher.op = + RTE_CRYPTO_CIPHER_OP_DECRYPT; + + cipher_xform->cipher.algo = test_data->crypto_algo; + cipher_xform->cipher.key.data = test_data->cipher_key.data; + cipher_xform->cipher.key.length = test_data->cipher_key.len; + cipher_xform->cipher.iv.offset = 0; + cipher_xform->cipher.iv.length = test_data->iv.len; + + if (is_unit_test) + debug_hexdump(stdout, "cipher key:", + test_data->cipher_key.data, + test_data->cipher_key.len); + } + + if 
(op_mask & BLOCKCIPHER_TEST_OP_AUTH) { + auth_xform = &xforms[1]; + auth_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH; + + if (op_mask & TOP_AUTH_GEN) + auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_GENERATE; + else + auth_xform->auth.op = RTE_CRYPTO_AUTH_OP_VERIFY; + + auth_xform->auth.algo = test_data->auth_algo; + auth_xform->auth.key.length = test_data->auth_key.len; + auth_xform->auth.key.data = test_data->auth_key.data; + auth_xform->auth.digest_length = test_data->digest.len; + + if (is_unit_test) + debug_hexdump(stdout, "auth key:", + test_data->auth_key.data, + test_data->auth_key.len); + } + + if (op_mask == TOP_ENC || + op_mask == TOP_DEC) + xform = cipher_xform; + else if (op_mask == TOP_AUTH_GEN || + op_mask == TOP_AUTH_VER) + xform = auth_xform; + else if (op_mask == TOP_ENC_AUTH) { + xform = cipher_xform; + xform->next = auth_xform; + } else if (op_mask == TOP_AUTH_DEC) { + xform = auth_xform; + xform->next = cipher_xform; + } else + return NULL; + + if (test_data->cipher_offset < test_data->auth_offset) + return NULL; + + sess_conf.action_type = RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; + sess_conf.crypto_xform = xform; + sess_conf.cpucrypto.cipher_offset = test_data->cipher_offset - + test_data->auth_offset; + + return rte_security_session_create(ctx, &sess_conf, sess_mp); +} + +static inline int +assemble_blockcipher_buf(struct cpu_crypto_test_case *data, + struct cpu_crypto_test_obj *obj, + uint32_t obj_idx, + uint32_t op_mask, + const struct blockcipher_test_data *test_data, + uint32_t is_unit_test) +{ + const uint8_t *src; + uint32_t src_len; + uint32_t offset; + + if (op_mask == TOP_ENC_AUTH || + op_mask == TOP_AUTH_GEN || + op_mask == BLOCKCIPHER_TEST_OP_AUTH_VERIFY) + offset = test_data->auth_offset; + else + offset = test_data->cipher_offset; + + if (op_mask & TOP_ENC_AUTH) { + src = test_data->plaintext.data; + src_len = test_data->plaintext.len; + if (is_unit_test) + debug_hexdump(stdout, "plaintext:", src, src_len); + } else { + src = 
test_data->ciphertext.data; + src_len = test_data->ciphertext.len; + memcpy(data->digest, test_data->digest.data, + test_data->digest.len); + if (is_unit_test) { + debug_hexdump(stdout, "ciphertext:", src, src_len); + debug_hexdump(stdout, "digest:", test_data->digest.data, + test_data->digest.len); + } + } + + if (src_len > MBUF_DATAPAYLOAD_SIZE) + return -ENOMEM; + + memcpy(data->seg_buf[0].seg, src, src_len); + data->seg_buf[0].seg_len = src_len; + obj->vec[obj_idx][0].iov_base = + (void *)(data->seg_buf[0].seg + offset); + obj->vec[obj_idx][0].iov_len = src_len - offset; + + obj->sec_buf[obj_idx].vec = obj->vec[obj_idx]; + obj->sec_buf[obj_idx].num = 1; + + memcpy(data->iv, test_data->iv.data, test_data->iv.len); + if (is_unit_test) + debug_hexdump(stdout, "iv:", test_data->iv.data, + test_data->iv.len); + + obj->iv[obj_idx] = (void *)data->iv; + obj->digest[obj_idx] = (void *)data->digest; + + return 0; +} + +static int +check_blockcipher_result(struct cpu_crypto_test_case *tcase, + uint32_t op_mask, + const struct blockcipher_test_data *test_data) +{ + int ret; + + if (op_mask & BLOCKCIPHER_TEST_OP_CIPHER) { + const char *err_msg1, *err_msg2; + const uint8_t *src_pt_ct; + uint32_t src_len; + + if (op_mask & TOP_ENC) { + src_pt_ct = test_data->ciphertext.data; + src_len = test_data->ciphertext.len; + err_msg1 = CPU_CRYPTO_ERR_EXP_CT; + err_msg2 = CPU_CRYPTO_ERR_GEN_CT; + } else { + src_pt_ct = test_data->plaintext.data; + src_len = test_data->plaintext.len; + err_msg1 = CPU_CRYPTO_ERR_EXP_PT; + err_msg2 = CPU_CRYPTO_ERR_GEN_PT; + } + + ret = memcmp(tcase->seg_buf[0].seg, src_pt_ct, src_len); + if (ret != 0) { + debug_hexdump(stdout, err_msg1, src_pt_ct, src_len); + debug_hexdump(stdout, err_msg2, + tcase->seg_buf[0].seg, + test_data->ciphertext.len); + return -1; + } + } + + if (op_mask & TOP_AUTH_GEN) { + ret = memcmp(tcase->digest, test_data->digest.data, + test_data->digest.len); + if (ret != 0) { + debug_hexdump(stdout, "expect digest:", + 
test_data->digest.data, + test_data->digest.len); + debug_hexdump(stdout, "gen digest:", + tcase->digest, + test_data->digest.len); + return -1; + } + } + + return 0; +} + +static int +cpu_crypto_test_blockcipher(const struct blockcipher_test_data *tdata, + uint32_t op_mask) +{ + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + int ret; + + ut_params->sess = create_blockcipher_session(ts_params->ctx, + ts_params->session_priv_mpool, + op_mask, + tdata, + 1); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(1); + if (ret) + return ret; + + tcase = ut_params->test_datas[0]; + ret = assemble_blockcipher_buf(tcase, obj, 0, op_mask, tdata, 1); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + + run_test(ts_params->ctx, ut_params->sess, obj, 1); + + ret = check_status(obj, 1); + if (ret < 0) + return ret; + + ret = check_blockcipher_result(tcase, op_mask, tdata); + if (ret < 0) + return ret; + + return 0; +} + +/* Macro to save code for defining BlockCipher test cases */ +/* test-vector-name/op */ +#define all_blockcipher_test_cases \ + TEST_EXPAND(aes_test_data_1, TOP_ENC) \ + TEST_EXPAND(aes_test_data_1, TOP_DEC) \ + TEST_EXPAND(aes_test_data_1, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_1, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_2, TOP_ENC) \ + TEST_EXPAND(aes_test_data_2, TOP_DEC) \ + TEST_EXPAND(aes_test_data_2, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_2, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_3, TOP_ENC) \ + TEST_EXPAND(aes_test_data_3, TOP_DEC) \ + TEST_EXPAND(aes_test_data_3, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_3, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_4, TOP_ENC) \ + TEST_EXPAND(aes_test_data_4, TOP_DEC) \ + TEST_EXPAND(aes_test_data_4, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_4, TOP_AUTH_DEC) \ + 
TEST_EXPAND(aes_test_data_5, TOP_ENC) \ + TEST_EXPAND(aes_test_data_5, TOP_DEC) \ + TEST_EXPAND(aes_test_data_5, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_5, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_6, TOP_ENC) \ + TEST_EXPAND(aes_test_data_6, TOP_DEC) \ + TEST_EXPAND(aes_test_data_6, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_6, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_7, TOP_ENC) \ + TEST_EXPAND(aes_test_data_7, TOP_DEC) \ + TEST_EXPAND(aes_test_data_7, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_7, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_8, TOP_ENC) \ + TEST_EXPAND(aes_test_data_8, TOP_DEC) \ + TEST_EXPAND(aes_test_data_8, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_8, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_9, TOP_ENC) \ + TEST_EXPAND(aes_test_data_9, TOP_DEC) \ + TEST_EXPAND(aes_test_data_9, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_9, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_10, TOP_ENC) \ + TEST_EXPAND(aes_test_data_10, TOP_DEC) \ + TEST_EXPAND(aes_test_data_11, TOP_ENC) \ + TEST_EXPAND(aes_test_data_11, TOP_DEC) \ + TEST_EXPAND(aes_test_data_12, TOP_ENC) \ + TEST_EXPAND(aes_test_data_12, TOP_DEC) \ + TEST_EXPAND(aes_test_data_12, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_12, TOP_AUTH_DEC) \ + TEST_EXPAND(aes_test_data_13, TOP_ENC) \ + TEST_EXPAND(aes_test_data_13, TOP_DEC) \ + TEST_EXPAND(aes_test_data_13, TOP_ENC_AUTH) \ + TEST_EXPAND(aes_test_data_13, TOP_AUTH_DEC) \ + TEST_EXPAND(des_test_data_1, TOP_ENC) \ + TEST_EXPAND(des_test_data_1, TOP_DEC) \ + TEST_EXPAND(des_test_data_2, TOP_ENC) \ + TEST_EXPAND(des_test_data_2, TOP_DEC) \ + TEST_EXPAND(des_test_data_3, TOP_ENC) \ + TEST_EXPAND(des_test_data_3, TOP_DEC) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_ENC) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_DEC) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_ENC_AUTH) \ + TEST_EXPAND(triple_des128cbc_hmac_sha1_test_vector, TOP_AUTH_DEC) \ + TEST_EXPAND(triple_des64cbc_test_vector, TOP_ENC) \ 
+ TEST_EXPAND(triple_des64cbc_test_vector, TOP_DEC) \ + TEST_EXPAND(triple_des128cbc_test_vector, TOP_ENC) \ + TEST_EXPAND(triple_des128cbc_test_vector, TOP_DEC) \ + TEST_EXPAND(triple_des192cbc_test_vector, TOP_ENC) \ + TEST_EXPAND(triple_des192cbc_test_vector, TOP_DEC) \ + +#define TEST_EXPAND(t, o) \ +static int \ +cpu_crypto_blockcipher_test_##t##_##o(void) \ +{ \ + return cpu_crypto_test_blockcipher(&t, o); \ +} + +all_blockcipher_test_cases +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesni_mb_testsuite = { + .suite_name = "Security CPU Crypto AESNI-MB Unit Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_enc_test_##t##_##o), \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_aead_dec_test_##t##_##o), \ + + all_gcm_unit_test_cases(SGL_ONE_SEG) + all_ccm_unit_test_cases +#undef TEST_EXPAND + +#define TEST_EXPAND(t, o) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_blockcipher_test_##t##_##o), \ + + all_blockcipher_test_cases +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_mb(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)); + + return unit_test_suite_runner(&security_cpu_crypto_aesni_mb_testsuite); +} + REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, test_security_cpu_crypto_aesni_gcm); REGISTER_TEST_COMMAND(security_aesni_gcm_perftest, test_security_cpu_crypto_aesni_gcm_perf); + +REGISTER_TEST_COMMAND(security_aesni_mb_autotest, + test_security_cpu_crypto_aesni_mb); From patchwork Mon Oct 7 16:28:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 60640 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: 
patchwork@dpdk.org From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Mon, 7 Oct 2019 17:28:47 +0100 Message-Id: <20191007162850.60552-8-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20191007162850.60552-1-roy.fan.zhang@intel.com> References: <20190906131330.40185-1-roy.fan.zhang@intel.com> <20191007162850.60552-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 07/10] app/test: add aesni_mb security cpu crypto perftest Since the crypto perf application does not support rte_security, this patch adds a simple AES-CBC-SHA1-HMAC CPU crypto performance test to the crypto unit test application. The test covers different key and data sizes with single-buffer test items, and reports throughput as well as cycle-count performance information.
Signed-off-by: Fan Zhang --- app/test/test_security_cpu_crypto.c | 194 ++++++++++++++++++++++++++++++++++++ 1 file changed, 194 insertions(+) diff --git a/app/test/test_security_cpu_crypto.c b/app/test/test_security_cpu_crypto.c index a9853a0c0..c3689d138 100644 --- a/app/test/test_security_cpu_crypto.c +++ b/app/test/test_security_cpu_crypto.c @@ -1122,6 +1122,197 @@ test_security_cpu_crypto_aesni_mb(void) return unit_test_suite_runner(&security_cpu_crypto_aesni_mb_testsuite); } +static inline void +switch_blockcipher_enc_to_dec(struct blockcipher_test_data *tdata, + struct cpu_crypto_test_case *tcase, uint8_t *dst) +{ + memcpy(dst, tcase->seg_buf[0].seg, tcase->seg_buf[0].seg_len); + tdata->ciphertext.len = tcase->seg_buf[0].seg_len; + memcpy(tdata->digest.data, tcase->digest, tdata->digest.len); +} + +static int +cpu_crypto_test_blockcipher_perf( + const enum rte_crypto_cipher_algorithm cipher_algo, + uint32_t cipher_key_sz, + const enum rte_crypto_auth_algorithm auth_algo, + uint32_t auth_key_sz, uint32_t digest_sz, + uint32_t op_mask) +{ + struct blockcipher_test_data tdata = {0}; + uint8_t plaintext[3000], ciphertext[3000]; + struct cpu_crypto_testsuite_params *ts_params = &testsuite_params; + struct cpu_crypto_unittest_params *ut_params = &unittest_params; + struct cpu_crypto_test_obj *obj = &ut_params->test_obj; + struct cpu_crypto_test_case *tcase; + uint64_t hz = rte_get_tsc_hz(), time_start, time_now; + double rate, cycles_per_buf; + uint32_t test_data_szs[] = {64, 128, 256, 512, 1024, 2048}; + uint32_t i, j; + uint32_t op_mask_opp = 0; + int ret; + + if (op_mask & BLOCKCIPHER_TEST_OP_CIPHER) + op_mask_opp |= (~op_mask & BLOCKCIPHER_TEST_OP_CIPHER); + if (op_mask & BLOCKCIPHER_TEST_OP_AUTH) + op_mask_opp |= (~op_mask & BLOCKCIPHER_TEST_OP_AUTH); + + tdata.plaintext.data = plaintext; + tdata.ciphertext.data = ciphertext; + + tdata.cipher_key.len = cipher_key_sz; + tdata.auth_key.len = auth_key_sz; + + gen_rand(tdata.cipher_key.data, cipher_key_sz / 8); + 
gen_rand(tdata.auth_key.data, auth_key_sz / 8); + + tdata.crypto_algo = cipher_algo; + tdata.auth_algo = auth_algo; + + tdata.digest.len = digest_sz; + + ut_params->sess = create_blockcipher_session(ts_params->ctx, + ts_params->session_priv_mpool, + op_mask, + &tdata, + 0); + if (!ut_params->sess) + return -1; + + ret = allocate_buf(MAX_NUM_OPS_INFLIGHT); + if (ret) + return ret; + + for (i = 0; i < RTE_DIM(test_data_szs); i++) { + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tdata.plaintext.len = test_data_szs[i]; + gen_rand(plaintext, tdata.plaintext.len); + + tdata.iv.len = 16; + gen_rand(tdata.iv.data, tdata.iv.len); + + tcase = ut_params->test_datas[j]; + ret = assemble_blockcipher_buf(tcase, obj, j, + op_mask, + &tdata, + 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + /* warm up cache */ + for (j = 0; j < CACHE_WARM_ITER; j++) + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_start = rte_rdtsc(); + + run_test(ts_params->ctx, ut_params->sess, obj, + MAX_NUM_OPS_INFLIGHT); + + time_now = rte_rdtsc(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("%s-%u-%s(%4uB) Enc %03.3fMpps (%03.3fGbps) ", + rte_crypto_cipher_algorithm_strings[cipher_algo], + cipher_key_sz * 8, + rte_crypto_auth_algorithm_strings[auth_algo], + test_data_szs[i], + rate, rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, cycles_per_buf / test_data_szs[i]); + + for (j = 0; j < MAX_NUM_OPS_INFLIGHT; j++) { + tcase = ut_params->test_datas[j]; + + switch_blockcipher_enc_to_dec(&tdata, tcase, + ciphertext); + ret = assemble_blockcipher_buf(tcase, obj, j, + op_mask_opp, + &tdata, + 0); + if (ret < 0) { + printf("Test is not supported by the driver\n"); + return ret; + } + } + + time_start = rte_get_timer_cycles(); + + run_test(ts_params->ctx, ut_params->sess, obj, + 
MAX_NUM_OPS_INFLIGHT); + + time_now = rte_get_timer_cycles(); + + rate = time_now - time_start; + cycles_per_buf = rate / MAX_NUM_OPS_INFLIGHT; + + rate = ((hz / cycles_per_buf)) / 1000000; + + printf("%s-%u-%s(%4uB) Dec %03.3fMpps (%03.3fGbps) ", + rte_crypto_cipher_algorithm_strings[cipher_algo], + cipher_key_sz * 8, + rte_crypto_auth_algorithm_strings[auth_algo], + test_data_szs[i], + rate, rate * test_data_szs[i] * 8 / 1000); + printf("cycles per buf %03.3f per byte %03.3f\n", + cycles_per_buf, + cycles_per_buf / test_data_szs[i]); + } + + return 0; +} + +/* cipher-algo/cipher-key-len/auth-algo/auth-key-len/digest-len/op */ +#define all_block_cipher_perf_test_cases \ + TEST_EXPAND(_AES_CBC, 128, _NULL, 0, 0, TOP_ENC) \ + TEST_EXPAND(_NULL, 0, _SHA1_HMAC, 160, 20, TOP_AUTH_GEN) \ + TEST_EXPAND(_AES_CBC, 128, _SHA1_HMAC, 160, 20, TOP_ENC_AUTH) + +#define TEST_EXPAND(a, b, c, d, e, f) \ +static int \ +cpu_crypto_blockcipher_perf##a##_##b##c##_##f(void) \ +{ \ + return cpu_crypto_test_blockcipher_perf(RTE_CRYPTO_CIPHER##a, \ + b / 8, RTE_CRYPTO_AUTH##c, d / 8, e, f); \ +} \ + +all_block_cipher_perf_test_cases +#undef TEST_EXPAND + +static struct unit_test_suite security_cpu_crypto_aesni_mb_perf_testsuite = { + .suite_name = "Security CPU Crypto AESNI-MB Perf Test Suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { +#define TEST_EXPAND(a, b, c, d, e, f) \ + TEST_CASE_ST(ut_setup, ut_teardown, \ + cpu_crypto_blockcipher_perf##a##_##b##c##_##f), \ + + all_block_cipher_perf_test_cases +#undef TEST_EXPAND + + TEST_CASES_END() /**< NULL terminate unit test array */ + }, +}; + +static int +test_security_cpu_crypto_aesni_mb_perf(void) +{ + gbl_driver_id = rte_cryptodev_driver_id_get( + RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)); + + return unit_test_suite_runner( + &security_cpu_crypto_aesni_mb_perf_testsuite); +} + + REGISTER_TEST_COMMAND(security_aesni_gcm_autotest, test_security_cpu_crypto_aesni_gcm); @@ -1130,3 +1321,6 @@ 
REGISTER_TEST_COMMAND(security_aesni_gcm_perftest, REGISTER_TEST_COMMAND(security_aesni_mb_autotest, test_security_cpu_crypto_aesni_mb); + +REGISTER_TEST_COMMAND(security_aesni_mb_perftest, + test_security_cpu_crypto_aesni_mb_perf); From patchwork Mon Oct 7 16:28:48 2019 X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 60641 X-Patchwork-Delegate: gakhil@marvell.com From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang Date: Mon, 7 Oct 2019 17:28:48 +0100 Message-Id: <20191007162850.60552-9-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20191007162850.60552-1-roy.fan.zhang@intel.com> References: <20190906131330.40185-1-roy.fan.zhang@intel.com> <20191007162850.60552-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 08/10] ipsec: add rte_security cpu_crypto action support
List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch updates the ipsec library to handle the newly introduced RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO action. Signed-off-by: Fan Zhang --- lib/librte_ipsec/crypto.h | 24 +++ lib/librte_ipsec/esp_inb.c | 200 ++++++++++++++++++++++-- lib/librte_ipsec/esp_outb.c | 369 +++++++++++++++++++++++++++++++++++++++++--- lib/librte_ipsec/sa.c | 53 ++++++- lib/librte_ipsec/sa.h | 29 ++++ lib/librte_ipsec/ses.c | 4 +- 6 files changed, 643 insertions(+), 36 deletions(-) diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h index f8fbf8d4f..901c8c7de 100644 --- a/lib/librte_ipsec/crypto.h +++ b/lib/librte_ipsec/crypto.h @@ -179,4 +179,28 @@ lksd_none_cop_prepare(struct rte_crypto_op *cop, __rte_crypto_sym_op_attach_sym_session(sop, cs); } +typedef void* (*_set_icv_f)(void *val, struct rte_mbuf *ml, uint32_t icv_off); + +static inline void * +set_icv_va_pa(void *val, struct rte_mbuf *ml, uint32_t icv_off) +{ + union sym_op_data *icv = val; + + icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_off); + icv->pa = rte_pktmbuf_iova_offset(ml, icv_off); + + return icv->va; +} + +static inline void * +set_icv_va(__rte_unused void *val, __rte_unused struct rte_mbuf *ml, + __rte_unused uint32_t icv_off) +{ + void **icv_va = val; + + *icv_va = rte_pktmbuf_mtod_offset(ml, void *, icv_off); + + return *icv_va; +} + #endif /* _CRYPTO_H_ */ diff --git a/lib/librte_ipsec/esp_inb.c b/lib/librte_ipsec/esp_inb.c index 8e3ecbc64..c4476e819 100644 --- a/lib/librte_ipsec/esp_inb.c +++ b/lib/librte_ipsec/esp_inb.c @@ -105,6 +105,78 @@ inb_cop_prepare(struct rte_crypto_op *cop, } } +static inline int +inb_cpu_crypto_proc_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb, + uint32_t pofs, uint32_t plen, + struct rte_security_vec *buf, struct iovec *cur_vec, + void *iv) +{ + struct rte_mbuf *ms; + struct iovec *vec = cur_vec; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + uint64_t *ivp; + uint32_t algo; + 
uint32_t left; + uint32_t off = 0, n_seg = 0; + + ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *, + pofs + sizeof(struct rte_esp_hdr)); + algo = sa->algo_type; + + switch (algo) { + case ALGO_TYPE_AES_GCM: + gcm = (struct aead_gcm_iv *)iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + off = sa->ctp.cipher.offset + pofs; + left = plen - sa->ctp.cipher.length; + break; + case ALGO_TYPE_AES_CBC: + case ALGO_TYPE_3DES_CBC: + copy_iv(iv, ivp, sa->iv_len); + off = sa->ctp.auth.offset + pofs; + left = plen - sa->ctp.auth.length; + break; + case ALGO_TYPE_AES_CTR: + copy_iv(iv, ivp, sa->iv_len); + off = sa->ctp.auth.offset + pofs; + left = plen - sa->ctp.auth.length; + ctr = (struct aesctr_cnt_blk *)iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + case ALGO_TYPE_NULL: + left = plen - sa->ctp.cipher.length; + break; + default: + return -EINVAL; + } + + ms = mbuf_get_seg_ofs(mb, &off); + if (!ms) + return -1; + + while (n_seg < RTE_LIBRTE_IP_FRAG_MAX_FRAG && left && ms) { + uint32_t len = RTE_MIN(left, ms->data_len - off); + + vec->iov_base = rte_pktmbuf_mtod_offset(ms, void *, off); + vec->iov_len = len; + + left -= len; + vec++; + n_seg++; + ms = ms->next; + off = 0; + } + + if (left) + return -1; + + buf->vec = cur_vec; + buf->num = n_seg; + + return n_seg; +} + /* * Helper function for prepare() to deal with situation when * ICV is spread by two segments. Tries to move ICV completely into the @@ -139,20 +211,21 @@ move_icv(struct rte_mbuf *ml, uint32_t ofs) */ static inline void inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, - const union sym_op_data *icv) + uint8_t *icv_va, void *aad_buf, uint32_t aad_off) { struct aead_gcm_aad *aad; /* insert SQN.hi between ESP trailer and ICV */ if (sa->sqh_len != 0) - insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len); + insert_sqh(sqn_hi32(sqc), icv_va, sa->icv_len); /* * fill AAD fields, if any (aad fields are placed after icv), * right now we support only one AEAD algorithm: AES-GCM. 
*/ if (sa->aad_len != 0) { - aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aad = aad_buf ? aad_buf : + (struct aead_gcm_aad *)(icv_va + aad_off); aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); } } @@ -162,13 +235,15 @@ inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, */ static inline int32_t inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, - struct rte_mbuf *mb, uint32_t hlen, union sym_op_data *icv) + struct rte_mbuf *mb, uint32_t hlen, _set_icv_f set_icv, void *icv_val, + void *aad_buf) { int32_t rc; uint64_t sqn; uint32_t clen, icv_len, icv_ofs, plen; struct rte_mbuf *ml; struct rte_esp_hdr *esph; + void *icv_va; esph = rte_pktmbuf_mtod_offset(mb, struct rte_esp_hdr *, hlen); @@ -226,8 +301,8 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml)) return -ENOSPC; - icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs); - icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs); + icv_va = set_icv(icv_val, ml, icv_ofs); + inb_pkt_xprepare(sa, sqn, icv_va, aad_buf, sa->icv_len); /* * if esn is used then high-order 32 bits are also used in ICV @@ -238,7 +313,6 @@ inb_pkt_prepare(const struct rte_ipsec_sa *sa, const struct replay_sqn *rsn, mb->pkt_len += sa->sqh_len; ml->data_len += sa->sqh_len; - inb_pkt_xprepare(sa, sqn, icv); return plen; } @@ -265,7 +339,8 @@ esp_inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], for (i = 0; i != num; i++) { hl = mb[i]->l2_len + mb[i]->l3_len; - rc = inb_pkt_prepare(sa, rsn, mb[i], hl, &icv); + rc = inb_pkt_prepare(sa, rsn, mb[i], hl, set_icv_va_pa, + (void *)&icv, NULL); if (rc >= 0) { lksd_none_cop_prepare(cop[k], cs, mb[i]); inb_cop_prepare(cop[k], sa, mb[i], &icv, hl, rc); @@ -512,7 +587,6 @@ tun_process(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], return k; } - /* * *process* function for tunnel packets */ @@ -625,6 +699,114 @@ esp_inb_pkt_process(struct 
rte_ipsec_sa *sa, struct rte_mbuf *mb[], return n; } +/* + * process packets using sync crypto engine + */ +static uint16_t +esp_inb_cpu_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num, + esp_inb_process_t process) +{ + int32_t rc; + uint32_t i, hl, n, p; + struct rte_ipsec_sa *sa; + struct replay_sqn *rsn; + void *icv_va; + uint32_t sqn[num]; + uint32_t dr[num]; + uint8_t sqh_len; + + /* cpu crypto specific variables */ + struct rte_security_vec buf[num]; + struct iovec vec[RTE_LIBRTE_IP_FRAG_MAX_FRAG * num]; + uint32_t vec_idx = 0; + uint64_t iv_buf[num][IPSEC_MAX_IV_QWORD]; + void *iv[num]; + int status[num]; + uint8_t *aad_buf[num][sizeof(struct aead_gcm_aad)]; + void *aad[num]; + void *digest[num]; + uint32_t k; + + sa = ss->sa; + rsn = rsn_acquire(sa); + sqh_len = sa->sqh_len; + + k = 0; + for (i = 0; i != num; i++) { + hl = mb[i]->l2_len + mb[i]->l3_len; + rc = inb_pkt_prepare(sa, rsn, mb[i], hl, set_icv_va, + (void *)&icv_va, (void *)aad_buf[k]); + if (rc >= 0) { + iv[k] = (void *)iv_buf[k]; + aad[k] = (void *)aad_buf[k]; + digest[k] = (void *)icv_va; + + rc = inb_cpu_crypto_proc_prepare(sa, mb[i], hl, + rc, &buf[k], &vec[vec_idx], iv[k]); + if (rc < 0) { + dr[i - k] = i; + continue; + } + + vec_idx += rc; + k++; + } else + dr[i - k] = i; + } + + /* copy not prepared mbufs beyond good ones */ + if (k != num) { + rte_errno = EBADMSG; + + if (unlikely(k == 0)) + return 0; + + move_bad_mbufs(mb, dr, num, num - k); + } + + /* process the packets */ + n = 0; + rc = rte_security_process_cpu_crypto_bulk(ss->security.ctx, + ss->security.ses, buf, iv, aad, digest, status, k); + /* move failed process packets to dr */ + for (i = 0; i < k; i++) { + if (status[i]) { + dr[n++] = i; + rte_errno = EBADMSG; + } + } + + /* move bad packets to the back */ + if (n) + move_bad_mbufs(mb, dr, k, n); + + /* process packets */ + p = process(sa, mb, sqn, dr, k - n, sqh_len); + + if (p != k - n && p != 0) + move_bad_mbufs(mb, dr, k - n, k 
- n - p); + + if (p != num) + rte_errno = EBADMSG; + + return p; +} + +uint16_t +esp_inb_tun_cpu_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_inb_cpu_crypto_pkt_process(ss, mb, num, tun_process); +} + +uint16_t +esp_inb_trs_cpu_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_inb_cpu_crypto_pkt_process(ss, mb, num, trs_process); +} + /* * process group of ESP inbound tunnel packets. */ diff --git a/lib/librte_ipsec/esp_outb.c b/lib/librte_ipsec/esp_outb.c index 55799a867..ecfc4cd3f 100644 --- a/lib/librte_ipsec/esp_outb.c +++ b/lib/librte_ipsec/esp_outb.c @@ -104,7 +104,7 @@ outb_cop_prepare(struct rte_crypto_op *cop, static inline int32_t outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, - union sym_op_data *icv, uint8_t sqh_len) + _set_icv_f set_icv, void *icv_val, uint8_t sqh_len) { uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen; struct rte_mbuf *ml; @@ -177,8 +177,8 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, espt->pad_len = pdlen; espt->next_proto = sa->proto; - icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); - icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + /* set icv va/pa value(s) */ + set_icv(icv_val, ml, pdofs); return clen; } @@ -189,14 +189,14 @@ outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, */ static inline void outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, - const union sym_op_data *icv) + uint8_t *icv_va, void *aad_buf) { uint32_t *psqh; struct aead_gcm_aad *aad; /* insert SQN.hi between ESP trailer and ICV */ if (sa->sqh_len != 0) { - psqh = (uint32_t *)(icv->va - sa->sqh_len); + psqh = (uint32_t *)(icv_va - sa->sqh_len); psqh[0] = sqn_hi32(sqc); } @@ -205,7 +205,7 @@ outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc, * right now we support only one AEAD algorithm: AES-GCM 
. */ if (sa->aad_len != 0) { - aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len); + aad = aad_buf; aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa)); } } @@ -242,11 +242,12 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], gen_iv(iv, sqc); /* try to update the packet itself */ - rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, - sa->sqh_len); + rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], set_icv_va_pa, + (void *)&icv, sa->sqh_len); /* success, setup crypto op */ if (rc >= 0) { - outb_pkt_xprepare(sa, sqc, &icv); + outb_pkt_xprepare(sa, sqc, icv.va, + (void *)(icv.va + sa->icv_len)); lksd_none_cop_prepare(cop[k], cs, mb[i]); outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc); k++; @@ -270,7 +271,7 @@ esp_outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], static inline int32_t outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb, - uint32_t l2len, uint32_t l3len, union sym_op_data *icv, + uint32_t l2len, uint32_t l3len, _set_icv_f set_icv, void *icv_val, uint8_t sqh_len) { uint8_t np; @@ -340,8 +341,7 @@ outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc, espt->pad_len = pdlen; espt->next_proto = np; - icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs); - icv->pa = rte_pktmbuf_iova_offset(ml, pdofs); + set_icv(icv_val, ml, pdofs); return clen; } @@ -381,11 +381,12 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], gen_iv(iv, sqc); /* try to update the packet itself */ - rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, &icv, - sa->sqh_len); + rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], l2, l3, + set_icv_va_pa, (void *)&icv, sa->sqh_len); /* success, setup crypto op */ if (rc >= 0) { - outb_pkt_xprepare(sa, sqc, &icv); + outb_pkt_xprepare(sa, sqc, icv.va, + (void *)(icv.va + sa->icv_len)); lksd_none_cop_prepare(cop[k], cs, mb[i]); outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc); k++; @@ 
-403,6 +404,335 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], return k; } + +static inline int +outb_cpu_crypto_proc_prepare(struct rte_mbuf *m, const struct rte_ipsec_sa *sa, + uint32_t hlen, uint32_t plen, + struct rte_security_vec *buf, struct iovec *cur_vec, void *iv) +{ + struct rte_mbuf *ms; + uint64_t *ivp = iv; + struct aead_gcm_iv *gcm; + struct aesctr_cnt_blk *ctr; + struct iovec *vec = cur_vec; + uint32_t left; + uint32_t off = 0; + uint32_t n_seg = 0; + uint32_t algo; + + algo = sa->algo_type; + + switch (algo) { + case ALGO_TYPE_AES_GCM: + gcm = iv; + aead_gcm_iv_fill(gcm, ivp[0], sa->salt); + off = sa->ctp.cipher.offset + hlen; + left = sa->ctp.cipher.length + plen; + break; + case ALGO_TYPE_AES_CBC: + case ALGO_TYPE_3DES_CBC: + off = sa->ctp.auth.offset + hlen; + left = sa->ctp.auth.length + plen; + break; + case ALGO_TYPE_AES_CTR: + off = sa->ctp.auth.offset + hlen; + left = sa->ctp.auth.length + plen; + ctr = iv; + aes_ctr_cnt_blk_fill(ctr, ivp[0], sa->salt); + break; + case ALGO_TYPE_NULL: + left = sa->ctp.cipher.length + plen; + break; + default: + return -EINVAL; + } + + ms = mbuf_get_seg_ofs(m, &off); + if (!ms) + return -1; + + while (n_seg < m->nb_segs && left && ms) { + uint32_t len = RTE_MIN(left, ms->data_len - off); + + vec->iov_base = rte_pktmbuf_mtod_offset(ms, void *, off); + vec->iov_len = len; + + left -= len; + vec++; + n_seg++; + ms = ms->next; + off = 0; + } + + if (left) + return -1; + + buf->vec = cur_vec; + buf->num = n_seg; + + return n_seg; +} + +static uint16_t +esp_outb_tun_cpu_crypto_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + struct rte_security_ctx *ctx; + struct rte_security_session *rss; + void *icv_va; + uint32_t dr[num]; + uint32_t i, n; + int32_t rc; + + /* cpu crypto specific variables */ + struct rte_security_vec buf[num]; + struct iovec vec[RTE_LIBRTE_IP_FRAG_MAX_FRAG * 
num]; + uint32_t vec_idx = 0; + uint64_t iv_buf[num][IPSEC_MAX_IV_QWORD]; + void *iv[num]; + int status[num]; + uint8_t *aad_buf[num][sizeof(struct aead_gcm_aad)]; + void *aad[num]; + void *digest[num]; + uint32_t k; + + sa = ss->sa; + ctx = ss->security.ctx; + rss = ss->security.ses; + + k = 0; + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + for (i = 0; i != n; i++) { + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv_buf[k], sqc); + + /* try to update the packet itself */ + rc = outb_tun_pkt_prepare(sa, sqc, iv_buf[k], mb[i], set_icv_va, + (void *)&icv_va, sa->sqh_len); + + /* success, setup crypto op */ + if (rc >= 0) { + iv[k] = (void *)iv_buf[k]; + aad[k] = (void *)aad_buf[k]; + digest[k] = (void *)icv_va; + + outb_pkt_xprepare(sa, sqc, icv_va, aad[k]); + + rc = outb_cpu_crypto_proc_prepare(mb[i], sa, + 0, rc, &buf[k], &vec[vec_idx], iv[k]); + if (rc < 0) { + dr[i - k] = i; + rte_errno = -rc; + continue; + } + + vec_idx += rc; + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + if (unlikely(k == 0)) { + rte_errno = EBADMSG; + return 0; + } + + /* process the packets */ + n = 0; + rc = rte_security_process_cpu_crypto_bulk(ctx, rss, buf, iv, aad, + digest, status, k); + /* move failed process packets to dr */ + if (rc < 0) + for (i = 0; i < k; i++) { + if (status[i]) + dr[n++] = i; + } + + if (n) + move_bad_mbufs(mb, dr, k, n); + + return k - n; +} + +static uint16_t +esp_outb_trs_cpu_crypto_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) + +{ + uint64_t sqn; + rte_be64_t sqc; + struct rte_ipsec_sa *sa; + struct rte_security_ctx *ctx; + struct rte_security_session *rss; + void *icv_va; + uint32_t dr[num]; + uint32_t i, n; + uint32_t l2, l3; + int32_t rc; + + /* cpu crypto specific variables */ + struct rte_security_vec
buf[num]; + struct iovec vec[RTE_LIBRTE_IP_FRAG_MAX_FRAG * num]; + uint32_t vec_idx = 0; + uint64_t iv_buf[num][IPSEC_MAX_IV_QWORD]; + void *iv[num]; + int status[num]; + uint8_t *aad_buf[num][sizeof(struct aead_gcm_aad)]; + void *aad[num]; + void *digest[num]; + uint32_t k; + + sa = ss->sa; + ctx = ss->security.ctx; + rss = ss->security.ses; + + k = 0; + n = num; + sqn = esn_outb_update_sqn(sa, &n); + if (n != num) + rte_errno = EOVERFLOW; + + for (i = 0; i != n; i++) { + l2 = mb[i]->l2_len; + l3 = mb[i]->l3_len; + + sqc = rte_cpu_to_be_64(sqn + i); + gen_iv(iv_buf[k], sqc); + + /* try to update the packet itself */ + rc = outb_trs_pkt_prepare(sa, sqc, iv_buf[k], mb[i], l2, l3, + set_icv_va, (void *)&icv_va, sa->sqh_len); + + /* success, setup crypto op */ + if (rc >= 0) { + iv[k] = (void *)iv_buf[k]; + aad[k] = (void *)aad_buf[k]; + digest[k] = (void *)icv_va; + + outb_pkt_xprepare(sa, sqc, icv_va, aad[k]); + + rc = outb_cpu_crypto_proc_prepare(mb[i], sa, + l2 + l3, rc, &buf[k], &vec[vec_idx], + iv[k]); + if (rc < 0) { + dr[i - k] = i; + rte_errno = -rc; + continue; + } + + vec_idx += rc; + k++; + /* failure, put packet into the death-row */ + } else { + dr[i - k] = i; + rte_errno = -rc; + } + } + + /* copy not prepared mbufs beyond good ones */ + if (k != n && k != 0) + move_bad_mbufs(mb, dr, n, n - k); + + /* process the packets */ + n = 0; + rc = rte_security_process_cpu_crypto_bulk(ctx, rss, buf, iv, aad, + digest, status, k); + /* move failed process packets to dr */ + if (rc < 0) + for (i = 0; i < k; i++) { + if (status[i]) + dr[n++] = i; + } + + if (n) + move_bad_mbufs(mb, dr, k, n); + + return k - n; +} + +uint16_t +esp_outb_tun_cpu_crypto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + struct rte_ipsec_sa *sa = ss->sa; + uint32_t icv_len; + void *icv; + uint16_t n; + uint16_t i; + + n = esp_outb_tun_cpu_crypto_process(ss, mb, num); + + icv_len = sa->icv_len; + + for (i = 0; i < n; i++) { + struct rte_mbuf *ml = 
rte_pktmbuf_lastseg(mb[i]); + + mb[i]->pkt_len -= sa->sqh_len; + ml->data_len -= sa->sqh_len; + + icv = rte_pktmbuf_mtod_offset(ml, void *, + ml->data_len - icv_len); + remove_sqh(icv, sa->icv_len); + } + + return n; +} + +uint16_t +esp_outb_tun_cpu_crypto_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_outb_tun_cpu_crypto_process(ss, mb, num); +} + +uint16_t +esp_outb_trs_cpu_crypto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + struct rte_ipsec_sa *sa = ss->sa; + uint32_t icv_len; + void *icv; + uint16_t n; + uint16_t i; + + n = esp_outb_trs_cpu_crypto_process(ss, mb, num); + icv_len = sa->icv_len; + + for (i = 0; i < n; i++) { + struct rte_mbuf *ml = rte_pktmbuf_lastseg(mb[i]); + + mb[i]->pkt_len -= sa->sqh_len; + ml->data_len -= sa->sqh_len; + + icv = rte_pktmbuf_mtod_offset(ml, void *, + ml->data_len - icv_len); + remove_sqh(icv, sa->icv_len); + } + + return n; +} + +uint16_t +esp_outb_trs_cpu_crypto_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) +{ + return esp_outb_trs_cpu_crypto_process(ss, mb, num); +} + /* * process outbound packets for SA with ESN support, * for algorithms that require SQN.hibits to be implictly included @@ -410,8 +740,8 @@ esp_outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], * In that case we have to move ICV bytes back to their proper place. 
*/ uint16_t -esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) +esp_outb_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) { uint32_t i, k, icv_len, *icv; struct rte_mbuf *ml; @@ -498,7 +828,8 @@ inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, gen_iv(iv, sqc); /* try to update the packet itself */ - rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv, 0); + rc = outb_tun_pkt_prepare(sa, sqc, iv, mb[i], set_icv_va_pa, + (void *)&icv, 0); k += (rc >= 0); @@ -552,7 +883,7 @@ inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss, /* try to update the packet itself */ rc = outb_trs_pkt_prepare(sa, sqc, iv, mb[i], - l2, l3, &icv, 0); + l2, l3, set_icv_va_pa, (void *)&icv, 0); k += (rc >= 0); diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c index 23d394b46..b8d55a1c7 100644 --- a/lib/librte_ipsec/sa.c +++ b/lib/librte_ipsec/sa.c @@ -544,9 +544,9 @@ lksd_proto_prepare(const struct rte_ipsec_session *ss, * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled */ -static uint16_t -pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], - uint16_t num) +uint16_t +esp_outb_pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num) { uint32_t i, k; uint32_t dr[num]; @@ -599,12 +599,48 @@ lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa, case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): pf->prepare = esp_outb_tun_prepare; pf->process = (sa->sqh_len != 0) ? - esp_outb_sqh_process : pkt_flag_process; + esp_outb_sqh_process : esp_outb_pkt_flag_process; break; case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): pf->prepare = esp_outb_trs_prepare; pf->process = (sa->sqh_len != 0) ? 
- esp_outb_sqh_process : pkt_flag_process; + esp_outb_sqh_process : esp_outb_pkt_flag_process; + break; + default: + rc = -ENOTSUP; + } + + return rc; +} + +static int +cpu_crypto_pkt_func_select(const struct rte_ipsec_sa *sa, + struct rte_ipsec_sa_pkt_func *pf) +{ + int32_t rc; + + static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK | + RTE_IPSEC_SATP_MODE_MASK; + + rc = 0; + switch (sa->type & msk) { + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->process = esp_inb_tun_cpu_crypto_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS): + pf->process = esp_inb_trs_cpu_crypto_pkt_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4): + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6): + pf->process = (sa->sqh_len != 0) ? + esp_outb_tun_cpu_crypto_sqh_process : + esp_outb_tun_cpu_crypto_flag_process; + break; + case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS): + pf->process = (sa->sqh_len != 0) ? 
+ esp_outb_trs_cpu_crypto_sqh_process : + esp_outb_trs_cpu_crypto_flag_process; break; default: rc = -ENOTSUP; @@ -672,13 +708,16 @@ ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss, case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL: if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) - pf->process = pkt_flag_process; + pf->process = esp_outb_pkt_flag_process; else pf->process = inline_proto_outb_pkt_process; break; case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: pf->prepare = lksd_proto_prepare; - pf->process = pkt_flag_process; + pf->process = esp_outb_pkt_flag_process; + break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + rc = cpu_crypto_pkt_func_select(sa, pf); break; default: rc = -ENOTSUP; diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h index 51e69ad05..770d36b8b 100644 --- a/lib/librte_ipsec/sa.h +++ b/lib/librte_ipsec/sa.h @@ -156,6 +156,14 @@ uint16_t inline_inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +esp_inb_tun_cpu_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_inb_trs_cpu_crypto_pkt_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + /* outbound processing */ uint16_t @@ -170,6 +178,10 @@ uint16_t esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +esp_outb_pkt_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + uint16_t inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); @@ -182,4 +194,21 @@ uint16_t inline_proto_outb_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num); +uint16_t +esp_outb_tun_cpu_crypto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_outb_tun_cpu_crypto_flag_process(const struct rte_ipsec_session *ss, + struct 
rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_outb_trs_cpu_crypto_sqh_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + +uint16_t +esp_outb_trs_cpu_crypto_flag_process(const struct rte_ipsec_session *ss, + struct rte_mbuf *mb[], uint16_t num); + + #endif /* _SA_H_ */ diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c index 82c765a33..eaa8c17b7 100644 --- a/lib/librte_ipsec/ses.c +++ b/lib/librte_ipsec/ses.c @@ -19,7 +19,9 @@ session_check(struct rte_ipsec_session *ss) return -EINVAL; if ((ss->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || ss->type == - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) && + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL || + ss->type == + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && ss->security.ctx == NULL) return -EINVAL; } From patchwork Mon Oct 7 16:28:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 60642 X-Patchwork-Delegate: gakhil@marvell.com From: Fan Zhang To: dev@dpdk.org Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com,
akhil.goyal@nxp.com, Fan Zhang Date: Mon, 7 Oct 2019 17:28:49 +0100 Message-Id: <20191007162850.60552-10-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20191007162850.60552-1-roy.fan.zhang@intel.com> References: <20190906131330.40185-1-roy.fan.zhang@intel.com> <20191007162850.60552-1-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 09/10] examples/ipsec-secgw: add security cpu_crypto action support Since the ipsec library has gained cpu_crypto security action type support, this patch updates the ipsec-secgw sample application with the new action type "cpu-crypto". The patch also includes a number of test scripts to verify the correctness of the implementation. Signed-off-by: Fan Zhang --- examples/ipsec-secgw/ipsec.c | 35 ++++++++++++++++++++++ examples/ipsec-secgw/ipsec_process.c | 7 +++-- examples/ipsec-secgw/sa.c | 13 ++++++-- examples/ipsec-secgw/test/run_test.sh | 10 +++++++ .../test/trs_3descbc_sha1_common_defs.sh | 8 ++--- .../test/trs_3descbc_sha1_cpu_crypto_defs.sh | 5 ++++ .../test/trs_aescbc_sha1_common_defs.sh | 8 ++--- .../test/trs_aescbc_sha1_cpu_crypto_defs.sh | 5 ++++ .../test/trs_aesctr_sha1_common_defs.sh | 8 ++--- .../test/trs_aesctr_sha1_cpu_crypto_defs.sh | 5 ++++ .../ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh | 5 ++++ .../test/trs_aesgcm_mb_cpu_crypto_defs.sh | 7 +++++ .../test/tun_3descbc_sha1_common_defs.sh | 8 ++--- .../test/tun_3descbc_sha1_cpu_crypto_defs.sh | 5 ++++ .../test/tun_aescbc_sha1_common_defs.sh | 8 ++--- .../test/tun_aescbc_sha1_cpu_crypto_defs.sh | 5 ++++ .../test/tun_aesctr_sha1_common_defs.sh | 8 ++--- .../test/tun_aesctr_sha1_cpu_crypto_defs.sh | 5 ++++ .../ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh | 5 ++++ .../test/tun_aesgcm_mb_cpu_crypto_defs.sh | 7 +++++ 20 files
changed, 138 insertions(+), 29 deletions(-) create mode 100644 examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh create mode 100644 examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index 1145ca1c0..02b9443a8 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -51,6 +52,19 @@ set_ipsec_conf(struct ipsec_sa *sa, struct rte_security_ipsec_xform *ipsec) ipsec->esn_soft_limit = IPSEC_OFFLOAD_ESN_SOFTLIMIT; } +static int32_t +compute_cipher_offset(struct ipsec_sa *sa) +{ + int32_t offset; + + if (sa->aead_algo == RTE_CRYPTO_AEAD_AES_GCM) + return 0; + + offset = (sa->iv_len + sizeof(struct rte_esp_hdr)); + + return offset; +} + int create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa) { @@ -117,6 +131,25 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa) "SEC Session init failed: err: %d\n", ret); return -1; } + } else if (sa->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { + struct rte_security_ctx *ctx = + (struct rte_security_ctx *) + rte_cryptodev_get_sec_ctx( + ipsec_ctx->tbl[cdev_id_qp].id); + + /* Set IPsec parameters in conf */ + sess_conf.cpucrypto.cipher_offset = + compute_cipher_offset(sa); + + 
set_ipsec_conf(sa, &(sess_conf.ipsec)); + sa->security_ctx = ctx; + sa->sec_session = rte_security_session_create(ctx, + &sess_conf, ipsec_ctx->session_priv_pool); + if (sa->sec_session == NULL) { + RTE_LOG(ERR, IPSEC, + "SEC Session init failed: err: %d\n", ret); + return -1; + } } else { RTE_LOG(ERR, IPSEC, "Inline not supported\n"); return -1; @@ -512,6 +545,8 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx, sa->security_ctx, sa->sec_session, pkts[i], NULL); continue; + default: + continue; } RTE_ASSERT(sa->cdev_id_qp < ipsec_ctx->nb_qps); diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c index 868f1a28d..1932b631f 100644 --- a/examples/ipsec-secgw/ipsec_process.c +++ b/examples/ipsec-secgw/ipsec_process.c @@ -101,7 +101,8 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx, } ss->crypto.ses = sa->crypto_session; /* setup session action type */ - } else if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) { + } else if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL || + sa->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { if (sa->sec_session == NULL) { rc = create_lookaside_session(ctx, sa); if (rc != 0) @@ -227,8 +228,8 @@ ipsec_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf) /* process packets inline */ else if (sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || - sa->type == - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + sa->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL || + sa->type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) { satp = rte_ipsec_sa_type(ips->sa); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index c3cf3bd1f..ba773346f 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -570,6 +570,9 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL; else if (strcmp(tokens[ti], "no-offload") == 0) rule->type = RTE_SECURITY_ACTION_TYPE_NONE; + else if 
(strcmp(tokens[ti], "cpu-crypto") == 0) + rule->type = + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO; else { APP_CHECK(0, status, "Invalid input \"%s\"", tokens[ti]); @@ -624,10 +627,13 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, if (status->status < 0) return; - if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE) && (portid_p == 0)) + if ((rule->type != RTE_SECURITY_ACTION_TYPE_NONE && rule->type != + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO) && + (portid_p == 0)) printf("Missing portid option, falling back to non-offload\n"); - if (!type_p || !portid_p) { + if (!type_p || (!portid_p && rule->type != + RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)) { rule->type = RTE_SECURITY_ACTION_TYPE_NONE; rule->portid = -1; } @@ -709,6 +715,9 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL: printf("lookaside-protocol-offload "); break; + case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO: + printf("cpu-crypto-accelerated"); + break; } printf("\n"); } diff --git a/examples/ipsec-secgw/test/run_test.sh b/examples/ipsec-secgw/test/run_test.sh index 8055a4c04..bcaf91715 100755 --- a/examples/ipsec-secgw/test/run_test.sh +++ b/examples/ipsec-secgw/test/run_test.sh @@ -32,15 +32,21 @@ usage() } LINUX_TEST="tun_aescbc_sha1 \ +tun_aescbc_sha1_cpu_crypto \ tun_aescbc_sha1_esn \ tun_aescbc_sha1_esn_atom \ tun_aesgcm \ +tun_aesgcm_cpu_crypto \ +tun_aesgcm_mb_cpu_crypto \ tun_aesgcm_esn \ tun_aesgcm_esn_atom \ trs_aescbc_sha1 \ +trs_aescbc_sha1_cpu_crypto \ trs_aescbc_sha1_esn \ trs_aescbc_sha1_esn_atom \ trs_aesgcm \ +trs_aesgcm_cpu_crypto \ +trs_aesgcm_mb_cpu_crypto \ trs_aesgcm_esn \ trs_aesgcm_esn_atom \ tun_aescbc_sha1_old \ @@ -49,17 +55,21 @@ trs_aescbc_sha1_old \ trs_aesgcm_old \ tun_aesctr_sha1 \ tun_aesctr_sha1_old \ +tun_aesctr_sha1_cpu_crypto \ tun_aesctr_sha1_esn \ tun_aesctr_sha1_esn_atom \ trs_aesctr_sha1 \ +trs_aesctr_sha1_cpu_crypto \ trs_aesctr_sha1_old \ trs_aesctr_sha1_esn \ trs_aesctr_sha1_esn_atom \ tun_3descbc_sha1 \ 
+tun_3descbc_sha1_cpu_crypto \ tun_3descbc_sha1_old \ tun_3descbc_sha1_esn \ tun_3descbc_sha1_esn_atom \ trs_3descbc_sha1 \ +trs_3descbc_sha1_cpu_crypto \ trs_3descbc_sha1_old \ trs_3descbc_sha1_esn \ trs_3descbc_sha1_esn_atom" diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh index bb4cef6a9..eda2ddf0c 100644 --- a/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh @@ -32,14 +32,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo 3des-cbc \ @@ -47,7 +47,7 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo 3des-cbc \ @@ -55,7 +55,7 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..a864a8886 --- /dev/null +++ b/examples/ipsec-secgw/test/trs_3descbc_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. 
${DIR}/trs_3descbc_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh index e2621e0df..49b7b0713 100644 --- a/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh @@ -31,27 +31,27 @@ sa in 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..b515cd9f8 --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aescbc_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. 
${DIR}/trs_aescbc_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh index 9c213e3cc..428322307 100644 --- a/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh @@ -31,27 +31,27 @@ sa in 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #SA out rules sa out 9 cipher_algo aes-128-ctr \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode transport +mode transport ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..745a2a02b --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aesctr_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. 
${DIR}/trs_aesctr_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh new file mode 100644 index 000000000..8917122da --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aesgcm_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/trs_aesgcm_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh new file mode 100644 index 000000000..26943321f --- /dev/null +++ b/examples/ipsec-secgw/test/trs_aesgcm_mb_cpu_crypto_defs.sh @@ -0,0 +1,7 @@ +#! /bin/bash + +. ${DIR}/trs_aesgcm_defs.sh + +CRYPTO_DEV=${CRYPTO_DEV:-'--vdev="crypto_aesni_mb0"'} + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh index dd802d6be..a583ef605 100644 --- a/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh @@ -32,14 +32,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo 3des-cbc \ @@ -47,14 +47,14 @@ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key 
de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} +mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM} sa out 9 cipher_algo 3des-cbc \ cipher_key \ de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} +mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM} #Routing rules rt ipv4 dst ${REMOTE_IPV4}/32 port 0 diff --git a/examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh new file mode 100644 index 000000000..747141f62 --- /dev/null +++ b/examples/ipsec-secgw/test/tun_3descbc_sha1_cpu_crypto_defs.sh @@ -0,0 +1,5 @@ +#! /bin/bash + +. ${DIR}/tun_3descbc_sha1_defs.sh + +SGW_CFG_XPRM='type cpu-crypto' diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh index 4025da232..ac0232d2c 100644 --- a/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh +++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh @@ -31,26 +31,26 @@ sa in 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} +mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM} sa in 9 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ auth_algo sha1-hmac \ auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ -mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} +mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM} #SA out rules sa out 7 cipher_algo aes-128-cbc \ cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \ 
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-cbc \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh
new file mode 100644
index 000000000..56076fa50
--- /dev/null
+++ b/examples/ipsec-secgw/test/tun_aescbc_sha1_cpu_crypto_defs.sh
@@ -0,0 +1,5 @@
+#! /bin/bash
+
+. ${DIR}/tun_aescbc_sha1_defs.sh
+
+SGW_CFG_XPRM='type cpu-crypto'
diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
index a3ac3a698..523c396c9 100644
--- a/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
+++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh
@@ -31,26 +31,26 @@ sa in 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4}
+mode ipv4-tunnel src ${REMOTE_IPV4} dst ${LOCAL_IPV4} ${SGW_CFG_XPRM}
 
 sa in 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6}
+mode ipv6-tunnel src ${REMOTE_IPV6} dst ${LOCAL_IPV6} ${SGW_CFG_XPRM}
 
 #SA out rules
 sa out 7 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4}
+mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}
 
 sa out 9 cipher_algo aes-128-ctr \
 cipher_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
 auth_algo sha1-hmac \
 auth_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
-mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6}
+mode ipv6-tunnel src ${LOCAL_IPV6} dst ${REMOTE_IPV6} ${SGW_CFG_XPRM}
 
 #Routing rules
 rt ipv4 dst ${REMOTE_IPV4}/32 port 0
diff --git a/examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh
new file mode 100644
index 000000000..3af680533
--- /dev/null
+++ b/examples/ipsec-secgw/test/tun_aesctr_sha1_cpu_crypto_defs.sh
@@ -0,0 +1,5 @@
+#! /bin/bash
+
+. ${DIR}/tun_aesctr_sha1_defs.sh
+
+SGW_CFG_XPRM='type cpu-crypto'
diff --git a/examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh
new file mode 100644
index 000000000..5bf1c0ae5
--- /dev/null
+++ b/examples/ipsec-secgw/test/tun_aesgcm_cpu_crypto_defs.sh
@@ -0,0 +1,5 @@
+#! /bin/bash
+
+. ${DIR}/tun_aesgcm_defs.sh
+
+SGW_CFG_XPRM='type cpu-crypto'
diff --git a/examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh b/examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh
new file mode 100644
index 000000000..039b8095e
--- /dev/null
+++ b/examples/ipsec-secgw/test/tun_aesgcm_mb_cpu_crypto_defs.sh
@@ -0,0 +1,7 @@
+#! /bin/bash
+
+. ${DIR}/tun_aesgcm_defs.sh
+
+CRYPTO_DEV=${CRYPTO_DEV:-'--vdev="crypto_aesni_mb0"'}
+
+SGW_CFG_XPRM='type cpu-crypto'

From patchwork Mon Oct 7 16:28:50 2019
From: Fan Zhang
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, declan.doherty@intel.com, akhil.goyal@nxp.com, Fan Zhang
Date: Mon, 7 Oct 2019 17:28:50 +0100
Message-Id: <20191007162850.60552-11-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 10/10] doc: update security cpu process description

This patch updates the programmer's guide and release notes for the
newly added security cpu process description.

Signed-off-by: Fan Zhang
---
 doc/guides/cryptodevs/aesni_gcm.rst          |   6 ++
 doc/guides/cryptodevs/aesni_mb.rst           |   7 +++
 doc/guides/prog_guide/rte_security.rst       | 112 ++++++++++++++++++++++++++++++++-
 doc/guides/rel_notes/release_19_11.rst       |   7 +++
 4 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/doc/guides/cryptodevs/aesni_gcm.rst b/doc/guides/cryptodevs/aesni_gcm.rst
index 15002aba7..e1c4f9d24 100644
--- a/doc/guides/cryptodevs/aesni_gcm.rst
+++ b/doc/guides/cryptodevs/aesni_gcm.rst
@@ -9,6 +9,12 @@ The AES-NI GCM PMD (**librte_pmd_aesni_gcm**) provides poll mode crypto driver
 support for utilizing Intel multi buffer library (see AES-NI Multi-buffer PMD
 documentation to learn more about it, including installation).
 
+The AES-NI GCM PMD also supports rte_security with security session create
+and the ``rte_security_process_cpu_crypto_bulk`` function call to process
+symmetric crypto synchronously with all algorithms specified below. In this
+way it supports scatter-gather buffers (``num`` in ``rte_security_vec`` can
+be greater than ``1``). Please refer to the ``rte_security`` programmer's
+guide for more detail.
+
 Features
 --------
 
diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 1eff2b073..1a3ddd850 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -12,6 +12,13 @@ support for utilizing Intel multi buffer library, see the white paper
 
 The AES-NI MB PMD has current only been tested on Fedora 21 64-bit with gcc.
 
+The AES-NI MB PMD also supports rte_security with security session create
+and the ``rte_security_process_cpu_crypto_bulk`` function call to process
+symmetric crypto synchronously with all algorithms specified below. However,
+it does not support scatter-gather buffers, so the ``num`` value in
+``rte_security_vec`` can only be ``1``. Please refer to the ``rte_security``
+programmer's guide for more detail.
+
 
 Features
 --------
 
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index 7d0734a37..39bcc2e69 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -296,6 +296,56 @@ Just like IPsec, in case of PDCP also header addition/deletion, cipher/
 de-cipher, integrity protection/verification is done based on the action
 type chosen.
 
+
+Synchronous CPU Crypto
+~~~~~~~~~~~~~~~~~~~~~~
+
+RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
+This action type allows a burst of symmetric crypto workloads using the same
+algorithm, key, and direction to be processed synchronously by CPU cycles.
+
+The packets are sent to the crypto device for symmetric crypto processing.
+The device will encrypt or decrypt the buffers based on the key(s) and
+algorithm(s) specified and preprocessed in the security session. Unlike the
+inline or lookaside modes, when the function exits the user can expect that
+the buffers are either processed successfully, or have the error number
+assigned to the appropriate index of the status array.
+
+E.g. in case of IPsec, the application will use CPU cycles to process both
+stack and crypto workload synchronously.
+
+.. code-block:: console
+
+          Egress Data Path
+                 |
+        +--------|--------+
+        |  egress IPsec   |
+        |        |        |
+        | +------V------+ |
+        | | SADB lookup | |
+        | +------|------+ |
+        | +------V------+ |
+        | |    Desc     | |
+        | +------|------+ |
+        +--------V--------+
+                 |
+        +--------V--------+
+        |    L2 Stack     |
+        +-----------------+
+        |                 |
+        |   Synchronous   |  <------ Using CPU instructions
+        | Crypto Process  |
+        |                 |
+        +--------V--------+
+        |  L2 Stack Post  |  <------ Add tunnel, ESP header etc.
+        +--------|--------+
+                 |
+        +--------|--------+
+        |       NIC       |
+        +--------|--------+
+                 V
+
+
 Device Features and Capabilities
 ---------------------------------
 
@@ -491,6 +541,7 @@ Security Session configuration structure is defined as ``rte_security_session_co
         struct rte_security_ipsec_xform ipsec;
         struct rte_security_macsec_xform macsec;
         struct rte_security_pdcp_xform pdcp;
+        struct rte_security_cpu_crypto_xform cpu_crypto;
     };
     /**< Configuration parameters for security session */
     struct rte_crypto_sym_xform *crypto_xform;
 
@@ -515,9 +566,12 @@ Offload.
     RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
     /**< All security protocol processing is performed inline during
      * transmission */
-    RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+    RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
     /**< All security protocol processing including crypto is performed
      * on a lookaside accelerator */
+    RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO
+    /**< Crypto processing for security protocol is processed by CPU
+     * synchronously */
 };
 
 The ``rte_security_session_protocol`` is defined as
 
@@ -587,6 +641,10 @@ PDCP related configuration parameters are defined in ``rte_security_pdcp_xform``
     uint32_t hfn_threshold;
 };
 
+For the CPU Crypto processing action, the application should attach the
+initialized ``crypto_xform`` to the security session configuration to specify
+the algorithm, key, direction, and other necessary fields required to perform
+the crypto operation.
+
 Security API
 ~~~~~~~~~~~~
 
@@ -650,3 +708,55 @@ it is only valid to have a single flow to map to that security session.
 +-------+          +--------+    +-----+
 | Eth | -> ... -> | ESP | -> | END |
 +-------+          +--------+    +-----+
+
+
+Process bulk crypto workload using CPU instructions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The inline and lookaside modes depend on external HW to complete the
+workload, whereas rte_security also gives the user the option to process
+symmetric crypto synchronously with CPU instructions.
+
+When creating the security session the user needs to fill the
+``rte_security_session_conf`` parameter with the ``action_type`` field set to
+``RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO``, and point ``crypto_xform`` to a
+properly initialized cryptodev xform. The user then passes the
+``rte_security_session_conf`` instance to ``rte_security_session_create()``
+along with the security context pointer belonging to a certain SW crypto
+device. The crypto device may or may not support this action type or the
+algorithm / key sizes specified in the ``crypto_xform``; when they are
+supported, the function will return the created security session.
+
+The user then can use this session to process the crypto workload
+synchronously. Instead of using mbuf ``next`` pointers, synchronous CPU
+crypto processing uses a special structure ``rte_security_vec`` to describe
+scatter-gather buffers.
+
+.. code-block:: c
+
+    struct rte_security_vec {
+        struct iovec *vec;
+        uint32_t num;
+    };
+
+The structure ``rte_security_vec`` is used to store scatter-gather buffer
+pointers, where ``vec`` is the pointer to an array of buffer segments and
+``num`` indicates the number of segments.
+
+Please note that not all crypto devices support scatter-gather buffer
+processing; please check the ``cryptodev`` guide for more details.
+
+The API of the synchronous CPU crypto process is
+
+.. code-block:: c
+
+    int
+    rte_security_process_cpu_crypto_bulk(struct rte_security_ctx *instance,
+            struct rte_security_session *sess,
+            struct rte_security_vec buf[], void *iv[], void *aad[],
+            void *digest[], int status[], uint32_t num);
+
+This function will process ``num`` ``rte_security_vec`` buffers using the
+content stored in the ``iv`` and ``aad`` arrays. The API only supports
+in-place operation, so each ``buf`` will be overwritten with the encrypted or
+decrypted values when processed successfully. Otherwise a negative value will
+be returned and the corresponding index of the status array will be set to
+the error number.
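To make the buffer layout consumed by this call concrete, below is a minimal, self-contained sketch. ``struct rte_security_vec`` is mirrored locally so the snippet builds without DPDK headers (a real application takes it from ``rte_security.h``), and ``sgl_total_len()`` is a hypothetical helper written for this example, not part of the rte_security API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>   /* struct iovec */

/*
 * Local mirror of the structure documented above; a real application
 * gets it from <rte_security.h> instead of redefining it.
 */
struct rte_security_vec {
	struct iovec *vec;
	uint32_t num;
};

/*
 * Hypothetical helper: total number of payload bytes described by one
 * scatter-gather buffer, i.e. what one element of the buf[] argument
 * of rte_security_process_cpu_crypto_bulk() covers.
 */
static size_t
sgl_total_len(const struct rte_security_vec *buf)
{
	size_t total = 0;
	uint32_t i;

	for (i = 0; i < buf->num; i++)
		total += buf->vec[i].iov_len;
	return total;
}
```

A two-segment packet would then be described by an ``iovec[2]`` array with ``num = 2``; a PMD without scatter-gather support (such as AESNI-MB, per the note above) requires ``num == 1`` instead.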
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index f971b3f77..3d89ab643 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -72,6 +72,13 @@ New Features
   Added a symmetric crypto PMD for Marvell NITROX V security processor.
   See the :doc:`../cryptodevs/nitrox` guide for more details on this new
 
+* **Added synchronous Crypto burst API with CPU for RTE_SECURITY.**
+
+  A new API ``rte_security_process_cpu_crypto_bulk`` is introduced in the
+  security library to process crypto workload in bulk using CPU instructions.
+  The AESNI_MB and AESNI_GCM PMDs, as well as the unit tests and the
+  ipsec-secgw sample application, are updated to support this feature.
+
 Removed Items
 -------------
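As a rough illustration of how the ipsec-secgw test definitions added earlier in this series compose: each new ``*_cpu_crypto_defs.sh`` file only sets ``SGW_CFG_XPRM``, and the ``*_common_defs.sh`` files expand it at the end of every ``sa`` rule. The sketch below reproduces that expansion on its own; the ``192.168.31.x`` addresses and the abbreviated ``sa`` line are placeholders for the example, not values from the test suite.

```shell
#! /bin/bash
# Sketch of the SGW_CFG_XPRM layering used by the new test defs:
# a *_cpu_crypto_defs.sh file sets the extra SA parameter...
SGW_CFG_XPRM='type cpu-crypto'

# ...and a *_common_defs.sh file appends ${SGW_CFG_XPRM} to each
# "sa" rule, so one template yields legacy or cpu-crypto SAs.
LOCAL_IPV4=192.168.31.14    # placeholder address for the sketch
REMOTE_IPV4=192.168.31.92   # placeholder address for the sketch

sa_line="sa out 7 mode ipv4-tunnel src ${LOCAL_IPV4} dst ${REMOTE_IPV4} ${SGW_CFG_XPRM}"
echo "${sa_line}"
```

When ``SGW_CFG_XPRM`` is left empty, the same template degrades to a plain SA line, which is why the legacy configurations keep working unchanged.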