From patchwork Wed Jun 14 17:56:23 2023
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 128715
X-Patchwork-Delegate: gakhil@marvell.com
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, kai.ji@intel.com, ciara.power@intel.com, Arek Kusztal
Subject: [PATCH v3] crypto/qat: add SM3 HMAC to gen4 devices
Date: Wed, 14 Jun 2023 17:56:23 +0000
Message-Id: <20230614175623.153833-1-arkadiuszx.kusztal@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

This commit adds SM3 HMAC to the Intel QuickAssist Technology PMD for
generation 3 and 4 devices.
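
For reference, a minimal sketch of how an application could request the new
algorithm through the cryptodev API (the helper name, device id, key buffer
and session mempool below are illustrative placeholders, not part of this
patch):

    #include <string.h>
    #include <rte_cryptodev.h>
    #include <rte_crypto_sym.h>

    /* Hypothetical helper: create an SM3-HMAC auth session on device dev_id.
     * The key length is expected to fall in the 16..64 byte range advertised
     * by the capability added in this patch; the SM3 digest is 32 bytes.
     */
    static void *
    create_sm3_hmac_session(uint8_t dev_id, const uint8_t *key,
            uint16_t key_len, struct rte_mempool *sess_mp)
    {
        struct rte_crypto_sym_xform xform;

        memset(&xform, 0, sizeof(xform));
        xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
        xform.auth.algo = RTE_CRYPTO_AUTH_SM3_HMAC;
        xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
        xform.auth.key.data = key;
        xform.auth.key.length = key_len;
        xform.auth.digest_length = 32;

        return rte_cryptodev_sym_session_create(dev_id, &xform, sess_mp);
    }
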
Signed-off-by: Arek Kusztal
Acked-by: Ciara Power
---
v2:
- Fixed problem with chaining operations
- Added implementation of prefix tables
v3:
- Added support for gen3 devices

 doc/guides/cryptodevs/features/qat.ini       |   1 +
 doc/guides/cryptodevs/qat.rst                |   5 +
 drivers/common/qat/qat_adf/icp_qat_fw_la.h   |  10 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c |   4 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c |   4 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  12 +++
 drivers/crypto/qat/qat_sym_session.c         | 100 +++++++++++++++----
 drivers/crypto/qat/qat_sym_session.h         |   7 ++
 8 files changed, 122 insertions(+), 21 deletions(-)

diff --git a/doc/guides/cryptodevs/features/qat.ini b/doc/guides/cryptodevs/features/qat.ini
index 70511a3076..6358a43357 100644
--- a/doc/guides/cryptodevs/features/qat.ini
+++ b/doc/guides/cryptodevs/features/qat.ini
@@ -70,6 +70,7 @@ AES XCBC MAC = Y
 ZUC EIA3 = Y
 AES CMAC (128) = Y
 SM3 = Y
+SM3 HMAC = Y
 
 ;
 ; Supported AEAD algorithms of the 'qat' crypto driver.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index a4a25711ed..2403430cd6 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -51,6 +51,9 @@ Cipher algorithms:
 * ``RTE_CRYPTO_CIPHER_AES_DOCSISBPI``
 * ``RTE_CRYPTO_CIPHER_DES_DOCSISBPI``
 * ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
+* ``RTE_CRYPTO_CIPHER_SM4_ECB``
+* ``RTE_CRYPTO_CIPHER_SM4_CBC``
+* ``RTE_CRYPTO_CIPHER_SM4_CTR``
 
 Hash algorithms:
 
@@ -76,6 +79,8 @@ Hash algorithms:
 * ``RTE_CRYPTO_AUTH_AES_GMAC``
 * ``RTE_CRYPTO_AUTH_ZUC_EIA3``
 * ``RTE_CRYPTO_AUTH_AES_CMAC``
+* ``RTE_CRYPTO_AUTH_SM3``
+* ``RTE_CRYPTO_AUTH_SM3_HMAC``
 
 Supported AEAD algorithms:
 
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
index 227a6cebc8..70f0effa62 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
@@ -188,6 +188,16 @@ struct icp_qat_fw_la_bulk_req {
     QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
     QAT_LA_PARTIAL_MASK)
 
+#define QAT_FW_LA_MODE2 1
+#define QAT_FW_LA_NO_MODE2 0
+#define QAT_FW_LA_MODE2_MASK 0x1
+#define QAT_FW_LA_MODE2_BITPOS 5
+#define ICP_QAT_FW_HASH_FLAG_MODE2_SET(flags, val) \
+QAT_FIELD_SET(flags, \
+    val, \
+    QAT_FW_LA_MODE2_BITPOS, \
+    QAT_FW_LA_MODE2_MASK)
+
 struct icp_qat_fw_cipher_req_hdr_cd_pars {
     union {
         struct {
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index 6013fed721..733d690339 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -155,6 +155,10 @@ static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
     QAT_SYM_PLAIN_AUTH_CAP(SM3,
         CAP_SET(block_size, 64),
         CAP_RNG(digest_size, 32, 32, 0)),
+    QAT_SYM_AUTH_CAP(SM3_HMAC,
+        CAP_SET(block_size, 64),
+        CAP_RNG(key_size, 16, 64, 4), CAP_RNG(digest_size, 32, 32, 0),
+        CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
     RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index b219a418ba..a7f50c73df 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -103,6 +103,10 @@ static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
     QAT_SYM_PLAIN_AUTH_CAP(SM3,
         CAP_SET(block_size, 64),
         CAP_RNG(digest_size, 32, 32, 0)),
+    QAT_SYM_AUTH_CAP(SM3_HMAC,
+        CAP_SET(block_size, 64),
+        CAP_RNG(key_size, 16, 64, 4), CAP_RNG(digest_size, 32, 32, 0),
+        CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
     RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index e8e92e22d4..6f13a46a78 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -625,6 +625,12 @@ enqueue_one_auth_job_gen1(struct qat_sym_session *ctx,
         rte_memcpy(cipher_param->u.cipher_IV_array, auth_iv->va,
                 ctx->auth_iv.length);
         break;
+    case ICP_QAT_HW_AUTH_ALGO_SM3:
+        if (ctx->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+            auth_param->u1.aad_adr = 0;
+        else
+            auth_param->u1.aad_adr = ctx->prefix_paddr;
+        break;
     default:
         break;
     }
@@ -678,6 +684,12 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
     case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
     case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
         break;
+    case ICP_QAT_HW_AUTH_ALGO_SM3:
+        if (ctx->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+            auth_param->u1.aad_adr = 0;
+        else
+            auth_param->u1.aad_adr = ctx->prefix_paddr;
+        break;
     default:
         break;
     }
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9babf13b66..ba5636fcf4 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -129,11 +129,12 @@ qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
 
 static int
 qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
-            const uint8_t *authkey,
-            uint32_t authkeylen,
-            uint32_t aad_length,
-            uint32_t digestsize,
-            unsigned int operation);
+            const uint8_t *authkey,
+            uint32_t authkeylen,
+            uint32_t aad_length,
+            uint32_t digestsize,
+            unsigned int operation,
+            enum qat_device_gen qat_dev_gen);
 
 static void
 qat_sym_session_init_common_hdr(struct qat_sym_session *session);
@@ -574,6 +575,8 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
     /* Set context descriptor physical address */
     session->cd_paddr = session_paddr +
             offsetof(struct qat_sym_session, cd);
+    session->prefix_paddr = session_paddr +
+            offsetof(struct qat_sym_session, prefix_state);
 
     session->dev_id = internals->dev_id;
     session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE;
@@ -752,6 +755,10 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
         session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SM3;
         session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
         break;
+    case RTE_CRYPTO_AUTH_SM3_HMAC:
+        session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SM3;
+        session->auth_mode = ICP_QAT_HW_AUTH_MODE2;
+        break;
     case RTE_CRYPTO_AUTH_SHA1:
         session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
         session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
@@ -877,7 +884,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
             key_length,
             0,
             auth_xform->digest_length,
-            auth_xform->op))
+            auth_xform->op,
+            qat_dev_gen))
             return -EINVAL;
     } else {
         session->qat_cmd = ICP_QAT_FW_LA_CMD_HASH_CIPHER;
@@ -892,7 +900,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
             key_length,
             0,
             auth_xform->digest_length,
-            auth_xform->op))
+            auth_xform->op,
+            qat_dev_gen))
             return -EINVAL;
 
         if (qat_sym_cd_cipher_set(session,
@@ -906,7 +915,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
             key_length,
             0,
             auth_xform->digest_length,
-            auth_xform->op))
+            auth_xform->op,
+            qat_dev_gen))
             return -EINVAL;
     }
 
@@ -1012,7 +1022,8 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
             aead_xform->key.length,
             aead_xform->aad_length,
             aead_xform->digest_length,
-            crypto_operation))
+            crypto_operation,
+            qat_dev_gen))
             return -EINVAL;
     } else {
         session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
@@ -1029,7 +1040,8 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
             aead_xform->key.length,
             aead_xform->aad_length,
             aead_xform->digest_length,
-            crypto_operation))
+            crypto_operation,
+            qat_dev_gen))
             return -EINVAL;
 
         if (qat_sym_cd_cipher_set(session,
@@ -1198,6 +1210,8 @@ static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
     case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
         /* return maximum block size in this case */
         return SHA512_CBLOCK;
+    case ICP_QAT_HW_AUTH_ALGO_SM3:
+        return QAT_SM3_BLOCK_SIZE;
     default:
         QAT_LOG(ERR, "invalid hash alg %u", qat_hash_alg);
         return -EFAULT;
@@ -2078,13 +2092,14 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
 }
 
 int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
-            const uint8_t *authkey,
-            uint32_t authkeylen,
-            uint32_t aad_length,
-            uint32_t digestsize,
-            unsigned int operation)
+            const uint8_t *authkey,
+            uint32_t authkeylen,
+            uint32_t aad_length,
+            uint32_t digestsize,
+            unsigned int operation,
+            enum qat_device_gen qat_dev_gen)
 {
-    struct icp_qat_hw_auth_setup *hash;
+    struct icp_qat_hw_auth_setup *hash, *hash_2 = NULL;
     struct icp_qat_hw_cipher_algo_blk *cipherconfig;
     struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
     struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
@@ -2100,6 +2115,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
     uint32_t *aad_len = NULL;
     uint32_t wordIndex = 0;
     uint32_t *pTempKey;
+    uint8_t *prefix = NULL;
     int ret = 0;
 
     if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
@@ -2150,6 +2166,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
         || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
         || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
         || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
+        || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SM3
         || cdesc->is_cnt_zero
             )
         hash->auth_counter.counter = 0;
@@ -2161,6 +2178,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
         hash->auth_counter.counter = rte_bswap32(block_size);
     }
 
+    hash_cd_ctrl->hash_cfg_offset = hash_offset >> 3;
     cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_auth_setup);
     switch (cdesc->qat_hash_alg) {
     case ICP_QAT_HW_AUTH_ALGO_SM3:
@@ -2169,6 +2187,48 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
         state1_size = qat_hash_get_state1_size(
                 cdesc->qat_hash_alg);
         state2_size = ICP_QAT_HW_SM3_STATE2_SZ;
+        if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0)
+            break;
+        hash_2 = (struct icp_qat_hw_auth_setup *)(cdesc->cd_cur_ptr +
+            state1_size + state2_size);
+        hash_2->auth_config.config =
+            ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE2,
+                cdesc->qat_hash_alg, digestsize);
+        rte_memcpy(cdesc->cd_cur_ptr + state1_size + state2_size +
+            sizeof(*hash_2), sm3InitialState,
+            sizeof(sm3InitialState));
+        hash_cd_ctrl->inner_state1_sz = state1_size;
+        hash_cd_ctrl->inner_state2_sz = state2_size;
+        hash_cd_ctrl->inner_state2_offset =
+            hash_cd_ctrl->hash_cfg_offset +
+            ((sizeof(struct icp_qat_hw_auth_setup) +
+            RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8)) >> 3);
+        hash_cd_ctrl->outer_config_offset =
+            hash_cd_ctrl->inner_state2_offset +
+            ((hash_cd_ctrl->inner_state2_sz) >> 3);
+        hash_cd_ctrl->outer_state1_sz = state1_size;
+        hash_cd_ctrl->outer_res_sz = state2_size;
+        hash_cd_ctrl->outer_prefix_sz =
+            qat_hash_get_block_size(cdesc->qat_hash_alg);
+        hash_cd_ctrl->outer_prefix_offset =
+            qat_hash_get_block_size(cdesc->qat_hash_alg) >> 3;
+        auth_param->u2.inner_prefix_sz =
+            qat_hash_get_block_size(cdesc->qat_hash_alg);
+        auth_param->hash_state_sz = digestsize;
+        if (qat_dev_gen == QAT_GEN4) {
+            ICP_QAT_FW_HASH_FLAG_MODE2_SET(
+                hash_cd_ctrl->hash_flags,
+                QAT_FW_LA_MODE2);
+        } else {
+            hash_cd_ctrl->hash_flags |=
+                ICP_QAT_FW_AUTH_HDR_FLAG_DO_NESTED;
+        }
+        prefix = cdesc->prefix_state;
+        rte_memcpy(prefix, authkey, authkeylen);
+        rte_memcpy(prefix + QAT_PREFIX_SIZE, authkey,
+            authkeylen);
+        cd_extra_size += sizeof(struct icp_qat_hw_auth_setup) +
+            state1_size + state2_size;
         break;
     case ICP_QAT_HW_AUTH_ALGO_SHA1:
         if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
@@ -2529,8 +2589,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
     }
 
     /* Auth CD config setup */
-    hash_cd_ctrl->hash_cfg_offset = hash_offset >> 3;
-    hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+    hash_cd_ctrl->hash_flags |= ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
     hash_cd_ctrl->inner_state1_sz = state1_size;
     if (cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
         hash_cd_ctrl->inner_res_sz = 4;
@@ -2547,13 +2606,10 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
             ((sizeof(struct icp_qat_hw_auth_setup) +
             RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8)) >> 3);
-
     cdesc->cd_cur_ptr += state1_size + state2_size + cd_extra_size;
     cd_size = cdesc->cd_cur_ptr-(uint8_t *)&cdesc->cd;
-
     cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
     cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3;
-
     return 0;
 }
 
@@ -2860,6 +2916,8 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
     /* Set context descriptor physical address */
     session->cd_paddr = session_paddr +
             offsetof(struct qat_sym_session, cd);
+    session->prefix_paddr = session_paddr +
+            offsetof(struct qat_sym_session, prefix_state);
 
     /* Get requested QAT command id - should be cipher */
     qat_cmd_id = qat_get_cmd_id(xform);
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 9b5d11ac88..e9d03b232d 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -58,9 +58,14 @@
 #define QAT_CRYPTO_SLICE_UCS 2
 #define QAT_CRYPTO_SLICE_WCP 4
 
+#define QAT_PREFIX_SIZE 64
+#define QAT_PREFIX_TBL_SIZE ((QAT_PREFIX_SIZE) * 2)
+
 #define QAT_SESSION_IS_SLICE_SET(flags, flag) \
     (!!((flags) & (flag)))
 
+#define QAT_SM3_BLOCK_SIZE 64
+
 enum qat_sym_proto_flag {
     QAT_CRYPTO_PROTO_FLAG_NONE = 0,
     QAT_CRYPTO_PROTO_FLAG_CCM = 1,
@@ -100,8 +105,10 @@ struct qat_sym_session {
     enum icp_qat_hw_auth_mode auth_mode;
     void *bpi_ctx;
     struct qat_sym_cd cd;
+    uint8_t prefix_state[QAT_PREFIX_TBL_SIZE] __rte_cache_aligned;
     uint8_t *cd_cur_ptr;
     phys_addr_t cd_paddr;
+    phys_addr_t prefix_paddr;
     struct icp_qat_fw_la_bulk_req fw_req;
     uint8_t aad_len;
     struct qat_crypto_instance *inst;