From patchwork Mon Feb 21 10:48:27 2022
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 107892
X-Patchwork-Delegate: gakhil@marvell.com
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal
Subject:
[PATCH v3 1/5] crypto/qat: refactor asymmetric crypto functions
Date: Mon, 21 Feb 2022 10:48:27 +0000
Message-Id: <20220221104831.30149-2-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>
References: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>

This commit refactors the asymmetric crypto functions in the Intel
QuickAssist Technology PMD. The functions are now shorter and much
easier to read, and the new layout makes it easier to add new
algorithms.

Signed-off-by: Arek Kusztal
---
 doc/guides/cryptodevs/qat.rst                      |    1 +
 drivers/common/qat/qat_adf/qat_pke.h               |  215 ++
 .../qat/qat_adf/qat_pke_functionality_arrays.h     |   79 --
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c         |    1 -
 drivers/crypto/qat/qat_asym.c                      | 1130 +++++++++-----------
 drivers/crypto/qat/qat_asym.h                      |   16 +-
 6 files changed, 734 insertions(+), 708 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/qat_pke.h
 delete mode 100644 drivers/common/qat/qat_adf/qat_pke_functionality_arrays.h

diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 88a50b2816..452bc843c2 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -174,6 +174,7 @@ The QAT ASYM PMD has support for:
 
 * ``RTE_CRYPTO_ASYM_XFORM_MODEX``
 * ``RTE_CRYPTO_ASYM_XFORM_MODINV``
+* ``RTE_CRYPTO_ASYM_XFORM_RSA``
 
 Limitations
 ~~~~~~~~~~~

diff --git a/drivers/common/qat/qat_adf/qat_pke.h b/drivers/common/qat/qat_adf/qat_pke.h
new file mode 100644
index 0000000000..82bb1ee55e
--- /dev/null
+++ b/drivers/common/qat/qat_adf/qat_pke.h
@@ -0,0 +1,215 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021-2022 Intel Corporation
+ */
+
+#ifndef _QAT_PKE_FUNCTIONALITY_ARRAYS_H_
+#define _QAT_PKE_FUNCTIONALITY_ARRAYS_H_
+
+#include "icp_qat_fw_mmp_ids.h"
+
+/*
+ * Modular exponentiation functionality IDs
+ */
+
+struct qat_asym_function {
+	uint32_t func_id;
+	uint32_t bytesize;
+};
+
+static struct qat_asym_function
+get_modexp_function(struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function = { };
+
+	if (xform->modex.modulus.length <= 64) {
+		qat_function.func_id = MATHS_MODEXP_L512;
+		qat_function.bytesize = 64;
+	} else if (xform->modex.modulus.length <= 128) {
+		qat_function.func_id = MATHS_MODEXP_L1024;
+		qat_function.bytesize = 128;
+	} else if (xform->modex.modulus.length <= 192) {
+		qat_function.func_id = MATHS_MODEXP_L1536;
+		qat_function.bytesize = 192;
+	} else if (xform->modex.modulus.length <= 256) {
+		qat_function.func_id = MATHS_MODEXP_L2048;
+		qat_function.bytesize = 256;
+	} else if (xform->modex.modulus.length <= 320) {
+		qat_function.func_id = MATHS_MODEXP_L2560;
+		qat_function.bytesize = 320;
+	} else if (xform->modex.modulus.length <= 384) {
+		qat_function.func_id = MATHS_MODEXP_L3072;
+		qat_function.bytesize = 384;
+	} else if (xform->modex.modulus.length <= 448) {
+		qat_function.func_id = MATHS_MODEXP_L3584;
+		qat_function.bytesize = 448;
+	} else if (xform->modex.modulus.length <= 512) {
+		qat_function.func_id = MATHS_MODEXP_L4096;
+		qat_function.bytesize = 512;
+	}
+	return qat_function;
+}
+
+static struct qat_asym_function
+get_modinv_function(struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function = { };
+
+	if (xform->modinv.modulus.data[
+			xform->modinv.modulus.length - 1] & 0x01) {
+		if (xform->modex.modulus.length <= 16) {
+			qat_function.func_id = MATHS_MODINV_ODD_L128;
+			qat_function.bytesize = 16;
+		} else if (xform->modex.modulus.length <= 24) {
+			qat_function.func_id = MATHS_MODINV_ODD_L192;
+			qat_function.bytesize = 24;
+		} else if (xform->modex.modulus.length <= 32) {
+			qat_function.func_id = MATHS_MODINV_ODD_L256;
+			qat_function.bytesize = 32;
+		} else if (xform->modex.modulus.length <= 48) {
+			qat_function.func_id = MATHS_MODINV_ODD_L384;
+			qat_function.bytesize = 48;
+		} else if (xform->modex.modulus.length <= 64) {
+			qat_function.func_id = MATHS_MODINV_ODD_L512;
+			qat_function.bytesize = 64;
+		} else if (xform->modex.modulus.length <= 96) {
+			qat_function.func_id = MATHS_MODINV_ODD_L768;
+			qat_function.bytesize = 96;
+		} else if (xform->modex.modulus.length <= 128) {
+			qat_function.func_id = MATHS_MODINV_ODD_L1024;
+			qat_function.bytesize = 128;
+		} else if (xform->modex.modulus.length <= 192) {
+			qat_function.func_id = MATHS_MODINV_ODD_L1536;
+			qat_function.bytesize = 192;
+		} else if (xform->modex.modulus.length <= 256) {
+			qat_function.func_id = MATHS_MODINV_ODD_L2048;
+			qat_function.bytesize = 256;
+		} else if (xform->modex.modulus.length <= 384) {
+			qat_function.func_id = MATHS_MODINV_ODD_L3072;
+			qat_function.bytesize = 384;
+		} else if (xform->modex.modulus.length <= 512) {
+			qat_function.func_id = MATHS_MODINV_ODD_L4096;
+			qat_function.bytesize = 512;
+		}
+	} else {
+		if (xform->modex.modulus.length <= 16) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L128;
+			qat_function.bytesize = 16;
+		} else if (xform->modex.modulus.length <= 24) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L192;
+			qat_function.bytesize = 24;
+		} else if (xform->modex.modulus.length <= 32) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L256;
+			qat_function.bytesize = 32;
+		} else if (xform->modex.modulus.length <= 48) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L384;
+			qat_function.bytesize = 48;
+		} else if (xform->modex.modulus.length <= 64) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L512;
+			qat_function.bytesize = 64;
+		} else if (xform->modex.modulus.length <= 96) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L768;
+			qat_function.bytesize = 96;
+		} else if (xform->modex.modulus.length <= 128) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L1024;
+			qat_function.bytesize = 128;
+		} else if (xform->modex.modulus.length <= 192) {
+			qat_function.func_id =
				MATHS_MODINV_EVEN_L1536;
+			qat_function.bytesize = 192;
+		} else if (xform->modex.modulus.length <= 256) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L2048;
+			qat_function.bytesize = 256;
+		} else if (xform->modex.modulus.length <= 384) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L3072;
+			qat_function.bytesize = 384;
+		} else if (xform->modex.modulus.length <= 512) {
+			qat_function.func_id = MATHS_MODINV_EVEN_L4096;
+			qat_function.bytesize = 512;
+		}
+	}
+
+	return qat_function;
+}
+
+static struct qat_asym_function
+get_rsa_enc_function(struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function = { };
+
+	if (xform->rsa.n.length <= 64) {
+		qat_function.func_id = PKE_RSA_EP_512;
+		qat_function.bytesize = 64;
+	} else if (xform->rsa.n.length <= 128) {
+		qat_function.func_id = PKE_RSA_EP_1024;
+		qat_function.bytesize = 128;
+	} else if (xform->rsa.n.length <= 192) {
+		qat_function.func_id = PKE_RSA_EP_1536;
+		qat_function.bytesize = 192;
+	} else if (xform->rsa.n.length <= 256) {
+		qat_function.func_id = PKE_RSA_EP_2048;
+		qat_function.bytesize = 256;
+	} else if (xform->rsa.n.length <= 384) {
+		qat_function.func_id = PKE_RSA_EP_3072;
+		qat_function.bytesize = 384;
+	} else if (xform->rsa.n.length <= 512) {
+		qat_function.func_id = PKE_RSA_EP_4096;
+		qat_function.bytesize = 512;
+	}
+	return qat_function;
+}
+
+static struct qat_asym_function
+get_rsa_dec_function(struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function = { };
+
+	if (xform->rsa.n.length <= 64) {
+		qat_function.func_id = PKE_RSA_DP1_512;
+		qat_function.bytesize = 64;
+	} else if (xform->rsa.n.length <= 128) {
+		qat_function.func_id = PKE_RSA_DP1_1024;
+		qat_function.bytesize = 128;
+	} else if (xform->rsa.n.length <= 192) {
+		qat_function.func_id = PKE_RSA_DP1_1536;
+		qat_function.bytesize = 192;
+	} else if (xform->rsa.n.length <= 256) {
+		qat_function.func_id = PKE_RSA_DP1_2048;
+		qat_function.bytesize = 256;
+	} else if (xform->rsa.n.length <= 384) {
+		qat_function.func_id = PKE_RSA_DP1_3072;
+		qat_function.bytesize = 384;
+	} else if (xform->rsa.n.length <= 512) {
+		qat_function.func_id = PKE_RSA_DP1_4096;
+		qat_function.bytesize = 512;
+	}
+	return qat_function;
+}
+
+static struct qat_asym_function
+get_rsa_crt_function(struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function = { };
+	int nlen = xform->rsa.qt.p.length * 2;
+
+	if (nlen <= 64) {
+		qat_function.func_id = PKE_RSA_DP2_512;
+		qat_function.bytesize = 64;
+	} else if (nlen <= 128) {
+		qat_function.func_id = PKE_RSA_DP2_1024;
+		qat_function.bytesize = 128;
+	} else if (nlen <= 192) {
+		qat_function.func_id = PKE_RSA_DP2_1536;
+		qat_function.bytesize = 192;
+	} else if (nlen <= 256) {
+		qat_function.func_id = PKE_RSA_DP2_2048;
+		qat_function.bytesize = 256;
+	} else if (nlen <= 384) {
+		qat_function.func_id = PKE_RSA_DP2_3072;
+		qat_function.bytesize = 384;
+	} else if (nlen <= 512) {
+		qat_function.func_id = PKE_RSA_DP2_4096;
+		qat_function.bytesize = 512;
+	}
+	return qat_function;
+}
+
+#endif

diff --git a/drivers/common/qat/qat_adf/qat_pke_functionality_arrays.h b/drivers/common/qat/qat_adf/qat_pke_functionality_arrays.h
deleted file mode 100644
index 42ffbbadd0..0000000000
--- a/drivers/common/qat/qat_adf/qat_pke_functionality_arrays.h
+++ /dev/null
@@ -1,79 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_PKE_FUNCTIONALITY_ARRAYS_H_
-#define _QAT_PKE_FUNCTIONALITY_ARRAYS_H_
-
-#include "icp_qat_fw_mmp_ids.h"
-
-/*
- * Modular exponentiation functionality IDs
- */
-static const uint32_t MOD_EXP_SIZE[][2] = {
-	{ 512, MATHS_MODEXP_L512 },
-	{ 1024, MATHS_MODEXP_L1024 },
-	{ 1536, MATHS_MODEXP_L1536 },
-	{ 2048, MATHS_MODEXP_L2048 },
-	{ 2560, MATHS_MODEXP_L2560 },
-	{ 3072, MATHS_MODEXP_L3072 },
-	{ 3584, MATHS_MODEXP_L3584 },
-	{ 4096, MATHS_MODEXP_L4096 }
-};
-
-static const uint32_t MOD_INV_IDS_ODD[][2] = {
-	{ 128, MATHS_MODINV_ODD_L128 },
-	{ 192,
MATHS_MODINV_ODD_L192 },
-	{ 256, MATHS_MODINV_ODD_L256 },
-	{ 384, MATHS_MODINV_ODD_L384 },
-	{ 512, MATHS_MODINV_ODD_L512 },
-	{ 768, MATHS_MODINV_ODD_L768 },
-	{ 1024, MATHS_MODINV_ODD_L1024 },
-	{ 1536, MATHS_MODINV_ODD_L1536 },
-	{ 2048, MATHS_MODINV_ODD_L2048 },
-	{ 3072, MATHS_MODINV_ODD_L3072 },
-	{ 4096, MATHS_MODINV_ODD_L4096 },
-};
-
-static const uint32_t MOD_INV_IDS_EVEN[][2] = {
-	{ 128, MATHS_MODINV_EVEN_L128 },
-	{ 192, MATHS_MODINV_EVEN_L192 },
-	{ 256, MATHS_MODINV_EVEN_L256 },
-	{ 384, MATHS_MODINV_EVEN_L384 },
-	{ 512, MATHS_MODINV_EVEN_L512 },
-	{ 768, MATHS_MODINV_EVEN_L768 },
-	{ 1024, MATHS_MODINV_EVEN_L1024 },
-	{ 1536, MATHS_MODINV_EVEN_L1536 },
-	{ 2048, MATHS_MODINV_EVEN_L2048 },
-	{ 3072, MATHS_MODINV_EVEN_L3072 },
-	{ 4096, MATHS_MODINV_EVEN_L4096 },
-};
-
-static const uint32_t RSA_ENC_IDS[][2] = {
-	{ 512, PKE_RSA_EP_512 },
-	{ 1024, PKE_RSA_EP_1024 },
-	{ 1536, PKE_RSA_EP_1536 },
-	{ 2048, PKE_RSA_EP_2048 },
-	{ 3072, PKE_RSA_EP_3072 },
-	{ 4096, PKE_RSA_EP_4096 },
-};
-
-static const uint32_t RSA_DEC_IDS[][2] = {
-	{ 512, PKE_RSA_DP1_512 },
-	{ 1024, PKE_RSA_DP1_1024 },
-	{ 1536, PKE_RSA_DP1_1536 },
-	{ 2048, PKE_RSA_DP1_2048 },
-	{ 3072, PKE_RSA_DP1_3072 },
-	{ 4096, PKE_RSA_DP1_4096 },
-};
-
-static const uint32_t RSA_DEC_CRT_IDS[][2] = {
-	{ 512, PKE_RSA_DP2_512 },
-	{ 1024, PKE_RSA_DP2_1024 },
-	{ 1536, PKE_RSA_DP2_1536 },
-	{ 2048, PKE_RSA_DP2_2048 },
-	{ 3072, PKE_RSA_DP2_3072 },
-	{ 4096, PKE_RSA_DP2_4096 },
-};
-
-#endif

diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
index 01a897a21f..4499fdaf2d 100644
--- a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -7,7 +7,6 @@
 #include "qat_asym.h"
 #include "qat_crypto.h"
 #include "qat_crypto_pmd_gens.h"
-#include "qat_pke_functionality_arrays.h"
 
 struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
 	/* Device related operations */

diff --git a/drivers/crypto/qat/qat_asym.c
b/drivers/crypto/qat/qat_asym.c
index 6e65950baf..56dc0019dc 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -6,45 +6,18 @@
 #include 
-#include "icp_qat_fw_pke.h"
-#include "icp_qat_fw.h"
-#include "qat_pke_functionality_arrays.h"
-
 #include "qat_device.h"
-
 #include "qat_logs.h"
+
 #include "qat_asym.h"
+#include "icp_qat_fw_pke.h"
+#include "icp_qat_fw.h"
+#include "qat_pke.h"
 
 uint8_t qat_asym_driver_id;
 
 struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];
 
-void
-qat_asym_init_op_cookie(void *op_cookie)
-{
-	int j;
-	struct qat_asym_op_cookie *cookie = op_cookie;
-
-	cookie->input_addr = rte_mempool_virt2iova(cookie) +
-			offsetof(struct qat_asym_op_cookie,
-					input_params_ptrs);
-
-	cookie->output_addr = rte_mempool_virt2iova(cookie) +
-			offsetof(struct qat_asym_op_cookie,
-					output_params_ptrs);
-
-	for (j = 0; j < 8; j++) {
-		cookie->input_params_ptrs[j] =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_asym_op_cookie,
-						input_array[j]);
-		cookie->output_params_ptrs[j] =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_asym_op_cookie,
-						output_array[j]);
-	}
-}
-
 /* An rte_driver is needed in the registration of both the device and the driver
  * with cryptodev.
 * The actual qat pci's rte_driver can't be used as its name represents
@@ -57,8 +30,52 @@ static const struct rte_driver cryptodev_qat_asym_driver = {
 	.alias = qat_asym_drv_name
 };
 
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+#define HEXDUMP(name, where, size) QAT_DP_HEXDUMP_LOG(DEBUG, name, \
+		where, size)
+#define HEXDUMP_OFF(name, where, size, idx) QAT_DP_HEXDUMP_LOG(DEBUG, name, \
+		&where[idx * size], size)
+#else
+#define HEXDUMP(name, where, size)
+#define HEXDUMP_OFF(name, where, size, idx)
+#endif
 
-static void qat_clear_arrays(struct qat_asym_op_cookie *cookie,
+#define CHECK_IF_NOT_EMPTY(param, name, pname, status) \
+	do { \
+		if (param.length == 0) { \
+			QAT_LOG(ERR, \
+				"Invalid " name \
+				" input parameter, zero length " pname \
+			); \
+			status = -EINVAL; \
+		} else if (check_zero(param)) { \
+			QAT_LOG(ERR, \
+				"Invalid " name " input parameter, empty " \
+				pname ", length = %d", \
+				(int)param.length \
+			); \
+			status = -EINVAL; \
+		} \
+	} while (0)
+
+#define SET_PKE_LN(where, what, how, idx) \
+	rte_memcpy(where[idx] + how - \
+		what.length, \
+		what.data, \
+		what.length)
+
+static void
+request_init(struct icp_qat_fw_pke_request *qat_req)
+{
+	memset(qat_req, 0, sizeof(*qat_req));
+	qat_req->pke_hdr.service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_PKE;
+	qat_req->pke_hdr.hdr_flags =
+		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD
+		(ICP_QAT_FW_COMN_REQ_FLAG_SET);
+}
+
+static void
+cleanup_arrays(struct qat_asym_op_cookie *cookie,
 		int in_count, int out_count, int alg_size)
 {
 	int i;
@@ -69,7 +86,8 @@ static void qat_clear_arrays(struct qat_asym_op_cookie *cookie,
 		memset(cookie->output_array[i], 0x0, alg_size);
 }
 
-static void qat_clear_arrays_crt(struct qat_asym_op_cookie *cookie,
+static void
+cleanup_crt(struct qat_asym_op_cookie *cookie,
 		int alg_size)
 {
 	int i;
@@ -81,469 +99,490 @@ static void qat_clear_arrays_crt(struct qat_asym_op_cookie *cookie,
 		memset(cookie->output_array[i], 0x0, alg_size);
 }
 
-static void qat_clear_arrays_by_alg(struct qat_asym_op_cookie *cookie,
+static void
+cleanup(struct qat_asym_op_cookie *cookie,
 		struct rte_crypto_asym_xform *xform, int alg_size)
 {
 	if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX)
-		qat_clear_arrays(cookie, QAT_ASYM_MODEXP_NUM_IN_PARAMS,
+		cleanup_arrays(cookie, QAT_ASYM_MODEXP_NUM_IN_PARAMS,
 			QAT_ASYM_MODEXP_NUM_OUT_PARAMS, alg_size);
 	else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV)
-		qat_clear_arrays(cookie, QAT_ASYM_MODINV_NUM_IN_PARAMS,
+		cleanup_arrays(cookie, QAT_ASYM_MODINV_NUM_IN_PARAMS,
 			QAT_ASYM_MODINV_NUM_OUT_PARAMS, alg_size);
 	else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
 		if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_QT)
-			qat_clear_arrays_crt(cookie, alg_size);
+			cleanup_crt(cookie, alg_size);
 		else {
-			qat_clear_arrays(cookie, QAT_ASYM_RSA_NUM_IN_PARAMS,
+			cleanup_arrays(cookie, QAT_ASYM_RSA_NUM_IN_PARAMS,
 				QAT_ASYM_RSA_NUM_OUT_PARAMS, alg_size);
 		}
 	}
 }
 
-#define qat_asym_sz_2param(arg) (arg, sizeof(arg)/sizeof(*arg))
-
 static int
-qat_asym_get_sz_and_func_id(const uint32_t arr[][2],
-		size_t arr_sz, size_t *size, uint32_t *func_id)
+check_zero(rte_crypto_param n)
 {
-	size_t i;
+	int i, len = n.length;
 
-	for (i = 0; i < arr_sz; i++) {
-		if (*size <= arr[i][0]) {
-			*size = arr[i][0];
-			*func_id = arr[i][1];
-			return 0;
+	if (len < 8) {
+		for (i = len - 1; i >= 0; i--) {
+			if (n.data[i] != 0x0)
+				return 0;
 		}
-	}
-	return -1;
+	} else if (len == 8 && *(uint64_t *)&n.data[len - 8] == 0) {
+		return 1;
+	} else if (*(uint64_t *)&n.data[len - 8] == 0) {
+		for (i = len - 9; i >= 0; i--) {
+			if (n.data[i] != 0x0)
+				return 0;
+		}
+	} else
+		return 0;
+
+	return 1;
 }
 
-static size_t
-max_of(int n, ...)
+static struct qat_asym_function
+get_asym_function(struct rte_crypto_asym_xform *xform)
 {
-	va_list args;
-	size_t len = 0, num;
-	int i;
+	struct qat_asym_function qat_function;
+
+	switch (xform->xform_type) {
+	case RTE_CRYPTO_ASYM_XFORM_MODEX:
+		qat_function = get_modexp_function(xform);
+		break;
+	case RTE_CRYPTO_ASYM_XFORM_MODINV:
+		qat_function = get_modinv_function(xform);
+		break;
+	default:
+		qat_function.func_id = 0;
+		break;
+	}
 
-	va_start(args, n);
-	len = va_arg(args, size_t);
+	return qat_function;
+}
 
-	for (i = 0; i < n - 1; i++) {
-		num = va_arg(args, size_t);
-		if (num > len)
-			len = num;
+static int
+modexp_set_input(struct rte_crypto_asym_op *asym_op,
+		struct icp_qat_fw_pke_request *qat_req,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function;
+	uint32_t alg_bytesize, func_id;
+	int status = 0;
+
+	CHECK_IF_NOT_EMPTY(xform->modex.modulus, "mod exp",
+			"modulus", status);
+	CHECK_IF_NOT_EMPTY(xform->modex.exponent, "mod exp",
+			"exponent", status);
+	if (status)
+		return status;
+
+	qat_function = get_asym_function(xform);
+	func_id = qat_function.func_id;
+	if (qat_function.func_id == 0) {
+		QAT_LOG(ERR, "Cannot obtain functionality id");
+		return -EINVAL;
 	}
-	va_end(args);
+	alg_bytesize = qat_function.bytesize;
+
+	SET_PKE_LN(cookie->input_array, asym_op->modex.base,
+			alg_bytesize, 0);
+	SET_PKE_LN(cookie->input_array, xform->modex.exponent,
+			alg_bytesize, 1);
+	SET_PKE_LN(cookie->input_array, xform->modex.modulus,
+			alg_bytesize, 2);
+
+	cookie->alg_bytesize = alg_bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = func_id;
+	qat_req->input_param_count = QAT_ASYM_MODEXP_NUM_IN_PARAMS;
+	qat_req->output_param_count = QAT_ASYM_MODEXP_NUM_OUT_PARAMS;
 
-	return len;
+	HEXDUMP("ModExp base", cookie->input_array[0], alg_bytesize);
+	HEXDUMP("ModExp exponent", cookie->input_array[1], alg_bytesize);
+	HEXDUMP("ModExp modulus", cookie->input_array[2], alg_bytesize);
+
+	return status;
 }
-static int
-qat_asym_check_nonzero(rte_crypto_param n)
+static uint8_t
+modexp_collect(struct rte_crypto_asym_op *asym_op,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
 {
-	if (n.length < 8) {
-		/* Not a case for any cryptographic function except for DH
-		 * generator which very often can be of one byte length
-		 */
-		size_t i;
-
-		if (n.data[n.length - 1] == 0x0) {
-			for (i = 0; i < n.length - 1; i++)
-				if (n.data[i] != 0x0)
-					break;
-			if (i == n.length - 1)
-				return -(EINVAL);
-		}
-	} else if (*(uint64_t *)&n.data[
-				n.length - 8] == 0) {
-		/* Very likely it is zeroed modulus */
-		size_t i;
+	rte_crypto_param n = xform->modex.modulus;
+	uint32_t alg_bytesize = cookie->alg_bytesize;
+	uint8_t *modexp_result = asym_op->modex.result.data;
+
+	rte_memcpy(modexp_result,
+		cookie->output_array[0] + alg_bytesize
+		- n.length, n.length);
+	HEXDUMP("ModExp result", cookie->output_array[0],
+		alg_bytesize);
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
 
-		for (i = 0; i < n.length - 8; i++)
-			if (n.data[i] != 0x0)
-				break;
-		if (i == n.length - 8)
-			return -(EINVAL);
+static int
+modinv_set_input(struct rte_crypto_asym_op *asym_op,
+		struct icp_qat_fw_pke_request *qat_req,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function;
+	uint32_t alg_bytesize, func_id;
+	int status = 0;
+
+	CHECK_IF_NOT_EMPTY(xform->modex.modulus, "mod inv",
+			"modulus", status);
+	if (status)
+		return status;
+
+	qat_function = get_asym_function(xform);
+	func_id = qat_function.func_id;
+	if (func_id == 0) {
+		QAT_LOG(ERR, "Cannot obtain functionality id");
+		return -EINVAL;
 	}
+	alg_bytesize = qat_function.bytesize;
+
+	SET_PKE_LN(cookie->input_array, asym_op->modinv.base,
+			alg_bytesize, 0);
+	SET_PKE_LN(cookie->input_array, xform->modinv.modulus,
+			alg_bytesize, 1);
+
+	cookie->alg_bytesize = alg_bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = func_id;
+	qat_req->input_param_count =
			QAT_ASYM_MODINV_NUM_IN_PARAMS;
+	qat_req->output_param_count =
+			QAT_ASYM_MODINV_NUM_OUT_PARAMS;
+
+	HEXDUMP("ModInv base", cookie->input_array[0], alg_bytesize);
+	HEXDUMP("ModInv modulus", cookie->input_array[1], alg_bytesize);
 
 	return 0;
 }
 
+static uint8_t
+modinv_collect(struct rte_crypto_asym_op *asym_op,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	rte_crypto_param n = xform->modinv.modulus;
+	uint8_t *modinv_result = asym_op->modinv.result.data;
+	uint32_t alg_bytesize = cookie->alg_bytesize;
+
+	rte_memcpy(modinv_result + (asym_op->modinv.result.length
+		- n.length),
+		cookie->output_array[0] + alg_bytesize
+		- n.length, n.length);
+	HEXDUMP("ModInv result", cookie->output_array[0],
+		alg_bytesize);
+	return RTE_CRYPTO_OP_STATUS_SUCCESS;
+}
+
 static int
-qat_asym_fill_arrays(struct rte_crypto_asym_op *asym_op,
+rsa_set_pub_input(struct rte_crypto_asym_op *asym_op,
 		struct icp_qat_fw_pke_request *qat_req,
 		struct qat_asym_op_cookie *cookie,
 		struct rte_crypto_asym_xform *xform)
 {
-	int err = 0;
-	size_t alg_size;
-	size_t alg_size_in_bytes;
-	uint32_t func_id = 0;
+	struct qat_asym_function qat_function;
+	uint32_t alg_bytesize, func_id;
+	int status = 0;
+
+	qat_function = get_rsa_enc_function(xform);
+	func_id = qat_function.func_id;
+	if (func_id == 0) {
+		QAT_LOG(ERR, "Cannot obtain functionality id");
+		return -EINVAL;
+	}
+	alg_bytesize = qat_function.bytesize;
 
-	if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) {
-		err = qat_asym_check_nonzero(xform->modex.modulus);
-		if (err) {
-			QAT_LOG(ERR, "Empty modulus in modular exponentiation,"
-					" aborting this operation");
-			return err;
+	if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+		switch (asym_op->rsa.pad) {
+		case RTE_CRYPTO_RSA_PADDING_NONE:
+			SET_PKE_LN(cookie->input_array, asym_op->rsa.message,
+					alg_bytesize, 0);
+			break;
+		default:
+			QAT_LOG(ERR,
+				"Invalid RSA padding (Encryption)"
+				);
+			return -EINVAL;
+		}
+		HEXDUMP("RSA Message",
cookie->input_array[0], alg_bytesize);
+	} else {
+		switch (asym_op->rsa.pad) {
+		case RTE_CRYPTO_RSA_PADDING_NONE:
+			SET_PKE_LN(cookie->input_array, asym_op->rsa.sign,
+					alg_bytesize, 0);
+			break;
+		default:
+			QAT_LOG(ERR,
+				"Invalid RSA padding (Verify)");
+			return -EINVAL;
 		}
+		HEXDUMP("RSA Signature", cookie->input_array[0],
+			alg_bytesize);
+	}
 
-		alg_size_in_bytes = max_of(3, asym_op->modex.base.length,
-				xform->modex.exponent.length,
-				xform->modex.modulus.length);
-		alg_size = alg_size_in_bytes << 3;
+	SET_PKE_LN(cookie->input_array, xform->rsa.e,
+			alg_bytesize, 1);
+	SET_PKE_LN(cookie->input_array, xform->rsa.n,
+			alg_bytesize, 2);
 
-		if (qat_asym_get_sz_and_func_id(MOD_EXP_SIZE,
-				sizeof(MOD_EXP_SIZE)/sizeof(*MOD_EXP_SIZE),
-				&alg_size, &func_id)) {
-			return -(EINVAL);
-		}
+	cookie->alg_bytesize = alg_bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = func_id;
 
-		alg_size_in_bytes = alg_size >> 3;
-		rte_memcpy(cookie->input_array[0] + alg_size_in_bytes -
-			asym_op->modex.base.length
-			, asym_op->modex.base.data,
-			asym_op->modex.base.length);
-		rte_memcpy(cookie->input_array[1] + alg_size_in_bytes -
-			xform->modex.exponent.length
-			, xform->modex.exponent.data,
-			xform->modex.exponent.length);
-		rte_memcpy(cookie->input_array[2] + alg_size_in_bytes -
-			xform->modex.modulus.length,
-			xform->modex.modulus.data,
-			xform->modex.modulus.length);
-		cookie->alg_size = alg_size;
-		qat_req->pke_hdr.cd_pars.func_id = func_id;
-		qat_req->input_param_count = QAT_ASYM_MODEXP_NUM_IN_PARAMS;
-		qat_req->output_param_count = QAT_ASYM_MODEXP_NUM_OUT_PARAMS;
-#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
-		QAT_DP_HEXDUMP_LOG(DEBUG, "ModExp base",
-				cookie->input_array[0],
-				alg_size_in_bytes);
-		QAT_DP_HEXDUMP_LOG(DEBUG, "ModExp exponent",
-				cookie->input_array[1],
-				alg_size_in_bytes);
-		QAT_DP_HEXDUMP_LOG(DEBUG, " ModExpmodulus",
-				cookie->input_array[2],
-				alg_size_in_bytes);
-#endif
-	} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) {
-		err =
qat_asym_check_nonzero(xform->modinv.modulus);
-		if (err) {
-			QAT_LOG(ERR, "Empty modulus in modular multiplicative"
-					" inverse, aborting this operation");
-			return err;
+	HEXDUMP("RSA Public Key", cookie->input_array[1], alg_bytesize);
+	HEXDUMP("RSA Modulus", cookie->input_array[2], alg_bytesize);
+
+	return status;
+}
+
+static int
+rsa_set_priv_input(struct rte_crypto_asym_op *asym_op,
+		struct icp_qat_fw_pke_request *qat_req,
+		struct qat_asym_op_cookie *cookie,
+		struct rte_crypto_asym_xform *xform)
+{
+	struct qat_asym_function qat_function;
+	uint32_t alg_bytesize, func_id;
+	int status = 0;
+
+	if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_QT) {
+		qat_function = get_rsa_crt_function(xform);
+		func_id = qat_function.func_id;
+		if (func_id == 0) {
+			QAT_LOG(ERR, "Cannot obtain functionality id");
+			return -EINVAL;
 		}
+		alg_bytesize = qat_function.bytesize;
+		qat_req->input_param_count =
+				QAT_ASYM_RSA_QT_NUM_IN_PARAMS;
+
+		SET_PKE_LN(cookie->input_array, xform->rsa.qt.p,
+				(alg_bytesize >> 1), 1);
+		SET_PKE_LN(cookie->input_array, xform->rsa.qt.q,
+				(alg_bytesize >> 1), 2);
+		SET_PKE_LN(cookie->input_array, xform->rsa.qt.dP,
+				(alg_bytesize >> 1), 3);
+		SET_PKE_LN(cookie->input_array, xform->rsa.qt.dQ,
+				(alg_bytesize >> 1), 4);
+		SET_PKE_LN(cookie->input_array, xform->rsa.qt.qInv,
+				(alg_bytesize >> 1), 5);
+
+		HEXDUMP("RSA p", cookie->input_array[1],
+				alg_bytesize);
+		HEXDUMP("RSA q", cookie->input_array[2],
+				alg_bytesize);
+		HEXDUMP("RSA dP", cookie->input_array[3],
+				alg_bytesize);
+		HEXDUMP("RSA dQ", cookie->input_array[4],
+				alg_bytesize);
+		HEXDUMP("RSA qInv", cookie->input_array[5],
+				alg_bytesize);
+	} else if (xform->rsa.key_type ==
+			RTE_RSA_KEY_TYPE_EXP) {
+		qat_function = get_rsa_dec_function(xform);
+		func_id = qat_function.func_id;
+		if (func_id == 0) {
+			QAT_LOG(ERR, "Cannot obtain functionality id");
+			return -EINVAL;
+		}
+		alg_bytesize = qat_function.bytesize;
 
-		alg_size_in_bytes = max_of(2, asym_op->modinv.base.length,
-
				xform->modinv.modulus.length);
-		alg_size = alg_size_in_bytes << 3;
-
-		if (xform->modinv.modulus.data[
-				xform->modinv.modulus.length - 1] & 0x01) {
-			if (qat_asym_get_sz_and_func_id(MOD_INV_IDS_ODD,
-					sizeof(MOD_INV_IDS_ODD)/
-					sizeof(*MOD_INV_IDS_ODD),
-					&alg_size, &func_id)) {
-				return -(EINVAL);
-			}
-		} else {
-			if (qat_asym_get_sz_and_func_id(MOD_INV_IDS_EVEN,
-					sizeof(MOD_INV_IDS_EVEN)/
-					sizeof(*MOD_INV_IDS_EVEN),
-					&alg_size, &func_id)) {
-				return -(EINVAL);
-			}
+		SET_PKE_LN(cookie->input_array, xform->rsa.d,
+				alg_bytesize, 1);
+		SET_PKE_LN(cookie->input_array, xform->rsa.n,
+				alg_bytesize, 2);
+
+		HEXDUMP("RSA d", cookie->input_array[1],
+				alg_bytesize);
+		HEXDUMP("RSA n", cookie->input_array[2],
+				alg_bytesize);
+	} else {
+		QAT_LOG(ERR, "Invalid RSA key type");
+		return -EINVAL;
+	}
+
+	if (asym_op->rsa.op_type ==
+			RTE_CRYPTO_ASYM_OP_DECRYPT) {
+		switch (asym_op->rsa.pad) {
+		case RTE_CRYPTO_RSA_PADDING_NONE:
+			SET_PKE_LN(cookie->input_array, asym_op->rsa.cipher,
+					alg_bytesize, 0);
+			HEXDUMP("RSA ciphertext", cookie->input_array[0],
+					alg_bytesize);
+			break;
+		default:
+			QAT_LOG(ERR,
+				"Invalid padding of RSA (Decrypt)");
+			return -(EINVAL);
 		}
-		alg_size_in_bytes = alg_size >> 3;
-		rte_memcpy(cookie->input_array[0] + alg_size_in_bytes -
-			asym_op->modinv.base.length
-			, asym_op->modinv.base.data,
-			asym_op->modinv.base.length);
-		rte_memcpy(cookie->input_array[1] + alg_size_in_bytes -
-			xform->modinv.modulus.length
-			, xform->modinv.modulus.data,
-			xform->modinv.modulus.length);
-		cookie->alg_size = alg_size;
-		qat_req->pke_hdr.cd_pars.func_id = func_id;
-		qat_req->input_param_count =
-				QAT_ASYM_MODINV_NUM_IN_PARAMS;
-		qat_req->output_param_count =
-				QAT_ASYM_MODINV_NUM_OUT_PARAMS;
-#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
-		QAT_DP_HEXDUMP_LOG(DEBUG, "ModInv base",
-				cookie->input_array[0],
-				alg_size_in_bytes);
-		QAT_DP_HEXDUMP_LOG(DEBUG, "ModInv modulus",
-				cookie->input_array[1],
-				alg_size_in_bytes);
-#endif
-	} else if (xform->xform_type ==
RTE_CRYPTO_ASYM_XFORM_RSA) {
-		err = qat_asym_check_nonzero(xform->rsa.n);
-		if (err) {
-			QAT_LOG(ERR, "Empty modulus in RSA"
-					" inverse, aborting this operation");
-			return err;
+	} else if (asym_op->rsa.op_type ==
+			RTE_CRYPTO_ASYM_OP_SIGN) {
+		switch (asym_op->rsa.pad) {
+		case RTE_CRYPTO_RSA_PADDING_NONE:
+			SET_PKE_LN(cookie->input_array, asym_op->rsa.message,
+					alg_bytesize, 0);
+			HEXDUMP("RSA text to be signed", cookie->input_array[0],
+					alg_bytesize);
+			break;
+		default:
+			QAT_LOG(ERR,
+				"Invalid padding of RSA (Signature)");
+			return -(EINVAL);
 		}
+	}
 
-		alg_size_in_bytes = xform->rsa.n.length;
-		alg_size = alg_size_in_bytes << 3;
+	cookie->alg_bytesize = alg_bytesize;
+	qat_req->pke_hdr.cd_pars.func_id = func_id;
+	return status;
+}
 
-		qat_req->input_param_count =
-				QAT_ASYM_RSA_NUM_IN_PARAMS;
-		qat_req->output_param_count =
-				QAT_ASYM_RSA_NUM_OUT_PARAMS;
-
-		if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT ||
-				asym_op->rsa.op_type ==
-						RTE_CRYPTO_ASYM_OP_VERIFY) {
-
-			if (qat_asym_get_sz_and_func_id(RSA_ENC_IDS,
-					sizeof(RSA_ENC_IDS)/
-					sizeof(*RSA_ENC_IDS),
-					&alg_size, &func_id)) {
-				err = -(EINVAL);
-				QAT_LOG(ERR,
-					"Not supported RSA parameter size (key)");
-				return err;
-			}
-			alg_size_in_bytes = alg_size >> 3;
-			if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
-				switch (asym_op->rsa.pad) {
-				case RTE_CRYPTO_RSA_PADDING_NONE:
-					rte_memcpy(cookie->input_array[0] +
-						alg_size_in_bytes -
-						asym_op->rsa.message.length
-						, asym_op->rsa.message.data,
-						asym_op->rsa.message.length);
-					break;
-				default:
-					err = -(EINVAL);
-					QAT_LOG(ERR,
-						"Invalid RSA padding (Encryption)");
-					return err;
-				}
-#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
-				QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Message",
-						cookie->input_array[0],
-						alg_size_in_bytes);
-#endif
-			} else {
-				switch (asym_op->rsa.pad) {
-				case RTE_CRYPTO_RSA_PADDING_NONE:
-					rte_memcpy(cookie->input_array[0],
-						asym_op->rsa.sign.data,
-						alg_size_in_bytes);
-					break;
-				default:
-					err = -(EINVAL);
-					QAT_LOG(ERR,
-
"Invalid RSA padding (Verify)"); - return err; - } +static int +rsa_set_input(struct rte_crypto_asym_op *asym_op, + struct icp_qat_fw_pke_request *qat_req, + struct qat_asym_op_cookie *cookie, + struct rte_crypto_asym_xform *xform) +{ + qat_req->input_param_count = + QAT_ASYM_RSA_NUM_IN_PARAMS; + qat_req->output_param_count = + QAT_ASYM_RSA_NUM_OUT_PARAMS; + + if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT || + asym_op->rsa.op_type == + RTE_CRYPTO_ASYM_OP_VERIFY) { + return rsa_set_pub_input(asym_op, qat_req, cookie, xform); + } else { + return rsa_set_priv_input(asym_op, qat_req, cookie, xform); + } +} -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, " RSA Signature", - cookie->input_array[0], - alg_size_in_bytes); -#endif +static uint8_t +rsa_collect(struct rte_crypto_asym_op *asym_op, + struct qat_asym_op_cookie *cookie) +{ + uint32_t alg_bytesize = cookie->alg_bytesize; - } - rte_memcpy(cookie->input_array[1] + - alg_size_in_bytes - - xform->rsa.e.length - , xform->rsa.e.data, - xform->rsa.e.length); - rte_memcpy(cookie->input_array[2] + - alg_size_in_bytes - - xform->rsa.n.length, - xform->rsa.n.data, - xform->rsa.n.length); - - cookie->alg_size = alg_size; - qat_req->pke_hdr.cd_pars.func_id = func_id; + if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT || + asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) { -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Public Key", - cookie->input_array[1], alg_size_in_bytes); - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Modulus", - cookie->input_array[2], alg_size_in_bytes); -#endif - } else { - if (asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_DECRYPT) { - switch (asym_op->rsa.pad) { - case RTE_CRYPTO_RSA_PADDING_NONE: - rte_memcpy(cookie->input_array[0] - + alg_size_in_bytes - - asym_op->rsa.cipher.length, - asym_op->rsa.cipher.data, - asym_op->rsa.cipher.length); - break; - default: - QAT_LOG(ERR, - "Invalid padding of RSA (Decrypt)"); - return -(EINVAL); - } - - } else if 
(asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_SIGN) { - switch (asym_op->rsa.pad) { - case RTE_CRYPTO_RSA_PADDING_NONE: - rte_memcpy(cookie->input_array[0] - + alg_size_in_bytes - - asym_op->rsa.message.length, - asym_op->rsa.message.data, - asym_op->rsa.message.length); - break; - default: - QAT_LOG(ERR, - "Invalid padding of RSA (Signature)"); - return -(EINVAL); - } - } - if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_QT) { - - qat_req->input_param_count = - QAT_ASYM_RSA_QT_NUM_IN_PARAMS; - if (qat_asym_get_sz_and_func_id(RSA_DEC_CRT_IDS, - sizeof(RSA_DEC_CRT_IDS)/ - sizeof(*RSA_DEC_CRT_IDS), - &alg_size, &func_id)) { - return -(EINVAL); - } - alg_size_in_bytes = alg_size >> 3; - - rte_memcpy(cookie->input_array[1] + - (alg_size_in_bytes >> 1) - - xform->rsa.qt.p.length - , xform->rsa.qt.p.data, - xform->rsa.qt.p.length); - rte_memcpy(cookie->input_array[2] + - (alg_size_in_bytes >> 1) - - xform->rsa.qt.q.length - , xform->rsa.qt.q.data, - xform->rsa.qt.q.length); - rte_memcpy(cookie->input_array[3] + - (alg_size_in_bytes >> 1) - - xform->rsa.qt.dP.length - , xform->rsa.qt.dP.data, - xform->rsa.qt.dP.length); - rte_memcpy(cookie->input_array[4] + - (alg_size_in_bytes >> 1) - - xform->rsa.qt.dQ.length - , xform->rsa.qt.dQ.data, - xform->rsa.qt.dQ.length); - rte_memcpy(cookie->input_array[5] + - (alg_size_in_bytes >> 1) - - xform->rsa.qt.qInv.length - , xform->rsa.qt.qInv.data, - xform->rsa.qt.qInv.length); - cookie->alg_size = alg_size; - qat_req->pke_hdr.cd_pars.func_id = func_id; + if (asym_op->rsa.op_type == + RTE_CRYPTO_ASYM_OP_ENCRYPT) { + uint8_t *rsa_result = asym_op->rsa.cipher.data; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "C", - cookie->input_array[0], - alg_size_in_bytes); - QAT_DP_HEXDUMP_LOG(DEBUG, "p", - cookie->input_array[1], - alg_size_in_bytes); - QAT_DP_HEXDUMP_LOG(DEBUG, "q", - cookie->input_array[2], - alg_size_in_bytes); - QAT_DP_HEXDUMP_LOG(DEBUG, - "dP", cookie->input_array[3], - alg_size_in_bytes); - 
QAT_DP_HEXDUMP_LOG(DEBUG, - "dQ", cookie->input_array[4], - alg_size_in_bytes); - QAT_DP_HEXDUMP_LOG(DEBUG, - "qInv", cookie->input_array[5], - alg_size_in_bytes); -#endif - } else if (xform->rsa.key_type == - RTE_RSA_KEY_TYPE_EXP) { - if (qat_asym_get_sz_and_func_id( - RSA_DEC_IDS, - sizeof(RSA_DEC_IDS)/ - sizeof(*RSA_DEC_IDS), - &alg_size, &func_id)) { - return -(EINVAL); - } - alg_size_in_bytes = alg_size >> 3; - rte_memcpy(cookie->input_array[1] + - alg_size_in_bytes - - xform->rsa.d.length, - xform->rsa.d.data, - xform->rsa.d.length); - rte_memcpy(cookie->input_array[2] + - alg_size_in_bytes - - xform->rsa.n.length, - xform->rsa.n.data, - xform->rsa.n.length); -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA ciphertext", - cookie->input_array[0], - alg_size_in_bytes); - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA d", cookie->input_array[1], - alg_size_in_bytes); - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA n", cookie->input_array[2], - alg_size_in_bytes); -#endif + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_bytesize); + HEXDUMP("RSA Encrypted data", cookie->output_array[0], + alg_bytesize); + } else { + uint8_t *rsa_result = asym_op->rsa.cipher.data; - cookie->alg_size = alg_size; - qat_req->pke_hdr.cd_pars.func_id = func_id; - } else { - QAT_LOG(ERR, "Invalid RSA key type"); - return -(EINVAL); + switch (asym_op->rsa.pad) { + case RTE_CRYPTO_RSA_PADDING_NONE: + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_bytesize); + HEXDUMP("RSA signature", + cookie->output_array[0], + alg_bytesize); + break; + default: + QAT_LOG(ERR, "Padding not supported"); + return RTE_CRYPTO_OP_STATUS_ERROR; } } } else { - QAT_LOG(ERR, "Invalid asymmetric crypto xform"); - return -(EINVAL); + if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) { + uint8_t *rsa_result = asym_op->rsa.message.data; + + switch (asym_op->rsa.pad) { + case RTE_CRYPTO_RSA_PADDING_NONE: + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_bytesize); + HEXDUMP("RSA Decrypted 
Message", + cookie->output_array[0], + alg_bytesize); + break; + default: + QAT_LOG(ERR, "Padding not supported"); + return RTE_CRYPTO_OP_STATUS_ERROR; + } + } else { + uint8_t *rsa_result = asym_op->rsa.sign.data; + + rte_memcpy(rsa_result, + cookie->output_array[0], + alg_bytesize); + HEXDUMP("RSA Signature", cookie->output_array[0], + alg_bytesize); + } } - return 0; + return RTE_CRYPTO_OP_STATUS_SUCCESS; } -static __rte_always_inline int + +static int +asym_set_input(struct rte_crypto_asym_op *asym_op, + struct icp_qat_fw_pke_request *qat_req, + struct qat_asym_op_cookie *cookie, + struct rte_crypto_asym_xform *xform) +{ + switch (xform->xform_type) { + case RTE_CRYPTO_ASYM_XFORM_MODEX: + return modexp_set_input(asym_op, qat_req, + cookie, xform); + case RTE_CRYPTO_ASYM_XFORM_MODINV: + return modinv_set_input(asym_op, qat_req, + cookie, xform); + case RTE_CRYPTO_ASYM_XFORM_RSA: + return rsa_set_input(asym_op, qat_req, + cookie, xform); + default: + QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform"); + return -EINVAL; + } + return 1; +} + +static int qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, - __rte_unused uint64_t *opaque, - __rte_unused enum qat_device_gen dev_gen) + __rte_unused uint64_t *opaque, + __rte_unused enum qat_device_gen qat_dev_gen) { - struct qat_asym_session *ctx; struct rte_crypto_op *op = (struct rte_crypto_op *)in_op; struct rte_crypto_asym_op *asym_op = op->asym; struct icp_qat_fw_pke_request *qat_req = (struct icp_qat_fw_pke_request *)out_msg; struct qat_asym_op_cookie *cookie = - (struct qat_asym_op_cookie *)op_cookie; + (struct qat_asym_op_cookie *)op_cookie; int err = 0; op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; - if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - ctx = (struct qat_asym_session *) - op->asym->session->sess_private_data; - if (unlikely(ctx == NULL)) { - QAT_LOG(ERR, "Session has not been created for this device"); - goto error; - } - rte_mov64((uint8_t *)qat_req, (const 
uint8_t *)&(ctx->req_tmpl)); - err = qat_asym_fill_arrays(asym_op, qat_req, cookie, ctx->xform); - if (err) { - op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - goto error; - } - } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - qat_fill_req_tmpl(qat_req); - err = qat_asym_fill_arrays(asym_op, qat_req, cookie, + switch (op->sess_type) { + case RTE_CRYPTO_OP_WITH_SESSION: + QAT_LOG(ERR, + "QAT asymmetric crypto PMD does not support session" + ); + goto error; + case RTE_CRYPTO_OP_SESSIONLESS: + request_init(qat_req); + err = asym_set_input(asym_op, qat_req, cookie, op->asym->xform); if (err) { op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; goto error; } - } else { + break; + default: QAT_DP_LOG(ERR, "Invalid session/xform settings"); op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION; goto error; @@ -553,21 +592,12 @@ qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, qat_req->pke_mid.src_data_addr = cookie->input_addr; qat_req->pke_mid.dest_data_addr = cookie->output_addr; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "qat_req:", qat_req, - sizeof(struct icp_qat_fw_pke_request)); -#endif + HEXDUMP("qat_req:", qat_req, sizeof(struct icp_qat_fw_pke_request)); return 0; error: - qat_req->pke_mid.opaque = (uint64_t)(uintptr_t)op; - -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "qat_req:", qat_req, - sizeof(struct icp_qat_fw_pke_request)); -#endif - + HEXDUMP("qat_req:", qat_req, sizeof(struct icp_qat_fw_pke_request)); qat_req->output_param_count = 0; qat_req->input_param_count = 0; qat_req->pke_hdr.service_type = ICP_QAT_FW_COMN_REQ_NULL; @@ -576,144 +606,30 @@ qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, return 0; } -static void qat_asym_collect_response(struct rte_crypto_op *rx_op, +static uint8_t +qat_asym_collect_response(struct rte_crypto_op *rx_op, struct qat_asym_op_cookie *cookie, struct rte_crypto_asym_xform *xform) { - size_t alg_size, alg_size_in_bytes = 0; 
struct rte_crypto_asym_op *asym_op = rx_op->asym; - if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) { - rte_crypto_param n = xform->modex.modulus; - - alg_size = cookie->alg_size; - alg_size_in_bytes = alg_size >> 3; - uint8_t *modexp_result = asym_op->modex.result.data; - - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { - rte_memcpy(modexp_result + - (asym_op->modex.result.length - - n.length), - cookie->output_array[0] + alg_size_in_bytes - - n.length, n.length - ); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "ModExp result", - cookie->output_array[0], - alg_size_in_bytes); - -#endif - } - } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) { - rte_crypto_param n = xform->modinv.modulus; - - alg_size = cookie->alg_size; - alg_size_in_bytes = alg_size >> 3; - uint8_t *modinv_result = asym_op->modinv.result.data; - - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { - rte_memcpy(modinv_result + (asym_op->modinv.result.length - - n.length), - cookie->output_array[0] + alg_size_in_bytes - - n.length, n.length); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "ModInv result", - cookie->output_array[0], - alg_size_in_bytes); -#endif - } - } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) { - - alg_size = cookie->alg_size; - alg_size_in_bytes = alg_size >> 3; - if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT || - asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_VERIFY) { - if (asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_ENCRYPT) { - uint8_t *rsa_result = asym_op->rsa.cipher.data; - - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Encrypted data", - cookie->output_array[0], - alg_size_in_bytes); -#endif - } else if (asym_op->rsa.op_type == - 
RTE_CRYPTO_ASYM_OP_VERIFY) { - uint8_t *rsa_result = asym_op->rsa.cipher.data; - - switch (asym_op->rsa.pad) { - case RTE_CRYPTO_RSA_PADDING_NONE: - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = - RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Signature", - cookie->output_array[0], - alg_size_in_bytes); -#endif - break; - default: - QAT_LOG(ERR, "Padding not supported"); - rx_op->status = - RTE_CRYPTO_OP_STATUS_ERROR; - break; - } - } - } else { - if (asym_op->rsa.op_type == - RTE_CRYPTO_ASYM_OP_DECRYPT) { - uint8_t *rsa_result = asym_op->rsa.message.data; - - switch (asym_op->rsa.pad) { - case RTE_CRYPTO_RSA_PADDING_NONE: - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = - RTE_CRYPTO_OP_STATUS_SUCCESS; - break; - default: - QAT_LOG(ERR, "Padding not supported"); - rx_op->status = - RTE_CRYPTO_OP_STATUS_ERROR; - break; - } -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Decrypted Message", - rsa_result, alg_size_in_bytes); -#endif - } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { - uint8_t *rsa_result = asym_op->rsa.sign.data; - - rte_memcpy(rsa_result, - cookie->output_array[0], - alg_size_in_bytes); - rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "RSA Signature", - cookie->output_array[0], - alg_size_in_bytes); -#endif - } - } + switch (xform->xform_type) { + case RTE_CRYPTO_ASYM_XFORM_MODEX: + return modexp_collect(asym_op, cookie, xform); + case RTE_CRYPTO_ASYM_XFORM_MODINV: + return modinv_collect(asym_op, cookie, xform); + case RTE_CRYPTO_ASYM_XFORM_RSA: + return rsa_collect(asym_op, cookie); + default: + QAT_LOG(ERR, "Not supported xform type"); + return RTE_CRYPTO_OP_STATUS_ERROR; } - qat_clear_arrays_by_alg(cookie, xform, alg_size_in_bytes); } -int +static int qat_asym_process_response(void **op, uint8_t *resp, 
void *op_cookie, __rte_unused uint64_t *dequeue_err_count) { - struct qat_asym_session *ctx; struct icp_qat_fw_pke_resp *resp_msg = (struct icp_qat_fw_pke_resp *)resp; struct rte_crypto_op *rx_op = (struct rte_crypto_op *)(uintptr_t) @@ -740,78 +656,40 @@ qat_asym_process_response(void **op, uint8_t *resp, " returned error"); } } - - if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { - ctx = (struct qat_asym_session *) - rx_op->asym->session->sess_private_data; - qat_asym_collect_response(rx_op, cookie, ctx->xform); - } else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) { - qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform); + if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { + rx_op->status = qat_asym_collect_response(rx_op, + cookie, rx_op->asym->xform); + cleanup(cookie, rx_op->asym->xform, + cookie->alg_bytesize); } - *op = rx_op; -#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - QAT_DP_HEXDUMP_LOG(DEBUG, "resp_msg:", resp_msg, - sizeof(struct icp_qat_fw_pke_resp)); -#endif + *op = rx_op; + HEXDUMP("resp_msg:", resp_msg, sizeof(struct icp_qat_fw_pke_resp)); return 1; } int qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused, - struct rte_crypto_asym_xform *xform, - struct rte_cryptodev_asym_session *sess) + struct rte_crypto_asym_xform *xform __rte_unused, + struct rte_cryptodev_asym_session *sess __rte_unused) { - struct qat_asym_session *session; - - session = (struct qat_asym_session *) sess->sess_private_data; - if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) { - if (xform->modex.exponent.length == 0 || - xform->modex.modulus.length == 0) { - QAT_LOG(ERR, "Invalid mod exp input parameter"); - return -EINVAL; - } - } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) { - if (xform->modinv.modulus.length == 0) { - QAT_LOG(ERR, "Invalid mod inv input parameter"); - return -EINVAL; - } - } else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) { - if (xform->rsa.n.length == 0) { - QAT_LOG(ERR, "Invalid rsa input 
parameter"); - return -EINVAL; - } - } else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END - || xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) { - QAT_LOG(ERR, "Invalid asymmetric crypto xform"); - return -EINVAL; - } else { - QAT_LOG(ERR, "Asymmetric crypto xform not implemented"); - return -EINVAL; - } - - session->xform = xform; - qat_asym_build_req_tmpl(session); - - return 0; + QAT_LOG(ERR, "QAT asymmetric PMD currently does not support session"); + return -ENOTSUP; } -unsigned int qat_asym_session_get_private_size( - struct rte_cryptodev *dev __rte_unused) +unsigned int +qat_asym_session_get_private_size(struct rte_cryptodev *dev __rte_unused) { - return RTE_ALIGN_CEIL(sizeof(struct qat_asym_session), 8); + QAT_LOG(ERR, "QAT asymmetric PMD currently does not support session"); + return 0; } void -qat_asym_session_clear(struct rte_cryptodev *dev, - struct rte_cryptodev_asym_session *sess) +qat_asym_session_clear(struct rte_cryptodev *dev __rte_unused, + struct rte_cryptodev_asym_session *sess __rte_unused) { - void *sess_priv = sess->sess_private_data; - struct qat_asym_session *s = (struct qat_asym_session *)sess_priv; - - if (sess_priv) - memset(s, 0, qat_asym_session_get_private_size(dev)); + QAT_LOG(ERR, "QAT asymmetric PMD currently does not support session"); } static uint16_t @@ -830,6 +708,32 @@ qat_asym_crypto_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, nb_ops); } +void +qat_asym_init_op_cookie(void *op_cookie) +{ + int j; + struct qat_asym_op_cookie *cookie = op_cookie; + + cookie->input_addr = rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + input_params_ptrs); + + cookie->output_addr = rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + output_params_ptrs); + + for (j = 0; j < 8; j++) { + cookie->input_params_ptrs[j] = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + input_array[j]); + cookie->output_params_ptrs[j] = + rte_mempool_virt2iova(cookie) + + 
offsetof(struct qat_asym_op_cookie, + output_array[j]); + } +} + int qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, struct qat_dev_cmd_param *qat_dev_cmd_param) diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h index 78caa5649c..cb7102aa3b 100644 --- a/drivers/crypto/qat/qat_asym.h +++ b/drivers/crypto/qat/qat_asym.h @@ -52,7 +52,7 @@ typedef uint64_t large_int_ptr; } struct qat_asym_op_cookie { - size_t alg_size; + size_t alg_bytesize; uint64_t error; rte_iova_t input_addr; rte_iova_t output_addr; @@ -103,20 +103,6 @@ void qat_asym_session_clear(struct rte_cryptodev *dev, struct rte_cryptodev_asym_session *sess); -/* - * Process PKE response received from outgoing queue of QAT - * - * @param op a ptr to the rte_crypto_op referred to by - * the response message is returned in this param - * @param resp icp_qat_fw_pke_resp message received from - * outgoing fw message queue - * @param op_cookie Cookie pointer that holds private metadata - * - */ -int -qat_asym_process_response(void **op, uint8_t *resp, - void *op_cookie, __rte_unused uint64_t *dequeue_err_count); - void qat_asym_init_op_cookie(void *cookie);

From patchwork Mon Feb 21 10:48:28 2022
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 107893
X-Patchwork-Delegate: gakhil@marvell.com
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal
Subject: [PATCH v3 2/5] crypto/qat: add named elliptic curves
Date: Mon, 21 Feb 2022 10:48:28 +0000
Message-Id: <20220221104831.30149-3-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>
References: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>

This patch adds secp256r1 and secp521r1 elliptic curves to Intel QuickAssist Technology PMD.
Signed-off-by: Arek Kusztal --- drivers/crypto/qat/qat_asym.c | 15 +++ drivers/crypto/qat/qat_ec.h | 206 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 221 insertions(+) create mode 100644 drivers/crypto/qat/qat_ec.h diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index 56dc0019dc..0a5831f531 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -13,6 +13,7 @@ #include "icp_qat_fw_pke.h" #include "icp_qat_fw.h" #include "qat_pke.h" +#include "qat_ec.h" uint8_t qat_asym_driver_id; @@ -64,6 +65,20 @@ static const struct rte_driver cryptodev_qat_asym_driver = { what.data, \ what.length) +#define SET_PKE_LN_9A(where, what, how, idx) \ + rte_memcpy(&where[idx * RTE_ALIGN_CEIL(how, 8)] + \ + RTE_ALIGN_CEIL(how, 8) - \ + what.length, \ + what.data, \ + what.length) + +#define SET_PKE_LN_EC(where, what, how, idx) \ + rte_memcpy(&where[idx * RTE_ALIGN_CEIL(how, 8)] + \ + RTE_ALIGN_CEIL(how, 8) - \ + how, \ + what.data, \ + how) + static void request_init(struct icp_qat_fw_pke_request *qat_req) { diff --git a/drivers/crypto/qat/qat_ec.h b/drivers/crypto/qat/qat_ec.h new file mode 100644 index 0000000000..c2f3ce93d2 --- /dev/null +++ b/drivers/crypto/qat/qat_ec.h @@ -0,0 +1,206 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021-2022 Intel Corporation + */ + +#ifndef _QAT_EC_H_ +#define _QAT_EC_H_ + +#define EC_MAX_SIZE 571 + +#include + +typedef struct { + uint8_t data[(EC_MAX_SIZE >> 3) + 1]; +} buffer; + +enum EC_NAME { + SECP256R1 = 1, + SECP384R1, + SECP521R1, +}; + +struct elliptic_curve { + const char *name; + uint32_t bytesize; + buffer x; + buffer y; + buffer n; + buffer p; + buffer a; + buffer b; + buffer h; +}; + +static struct elliptic_curve __rte_unused curve[] = { + [SECP256R1] = { + .name = "secp256r1", + .bytesize = 32, + .x = { + .data = { + 0x6B, 0x17, 0xD1, 0xF2, 0xE1, 0x2C, 0x42, 0x47, + 0xF8, 0xBC, 0xE6, 0xE5, 0x63, 0xA4, 0x40, 0xF2, + 0x77, 0x03, 0x7D, 0x81, 0x2D, 0xEB, 
0x33, 0xA0, + 0xF4, 0xA1, 0x39, 0x45, 0xD8, 0x98, 0xC2, 0x96, + }, + }, + .y = { + .data = { + 0x4F, 0xE3, 0x42, 0xE2, 0xFE, 0x1A, 0x7F, 0x9B, + 0x8E, 0xE7, 0xEB, 0x4A, 0x7C, 0x0F, 0x9E, 0x16, + 0x2B, 0xCE, 0x33, 0x57, 0x6B, 0x31, 0x5E, 0xCE, + 0xCB, 0xB6, 0x40, 0x68, 0x37, 0xBF, 0x51, 0xF5, + }, + }, + .n = { + .data = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xBC, 0xE6, 0xFA, 0xAD, 0xA7, 0x17, 0x9E, 0x84, + 0xF3, 0xB9, 0xCA, 0xC2, 0xFC, 0x63, 0x25, 0x51, + }, + }, + .p = { + .data = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + }, + }, + .a = { + .data = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x01, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFC, + }, + }, + .b = { + .data = { + 0x5A, 0xC6, 0x35, 0xD8, 0xAA, 0x3A, 0x93, 0xE7, + 0xB3, 0xEB, 0xBD, 0x55, 0x76, 0x98, 0x86, 0xBC, + 0x65, 0x1D, 0x06, 0xB0, 0xCC, 0x53, 0xB0, 0xF6, + 0x3B, 0xCE, 0x3C, 0x3E, 0x27, 0xD2, 0x60, 0x4B, + }, + }, + .h = { + .data = { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, + }, + }, + }, + + [SECP521R1] = { + .name = "secp521r1", + .bytesize = 66, + .x = { + .data = { + 0x00, 0xC6, 0x85, 0x8E, 0x06, 0xB7, 0x04, 0x04, + 0xE9, 0xCD, 0x9E, 0x3E, 0xCB, 0x66, 0x23, 0x95, + 0xB4, 0x42, 0x9C, 0x64, 0x81, 0x39, 0x05, 0x3F, + 0xB5, 0x21, 0xF8, 0x28, 0xAF, 0x60, 0x6B, 0x4D, + 0x3D, 0xBA, 0xA1, 0x4B, 0x5E, 0x77, 0xEF, 0xE7, + 0x59, 0x28, 0xFE, 0x1D, 0xC1, 0x27, 0xA2, 0xFF, + 0xA8, 0xDE, 0x33, 0x48, 0xB3, 0xC1, 0x85, 0x6A, + 0x42, 0x9B, 0xF9, 0x7E, 0x7E, 0x31, 0xC2, 0xE5, + 0xBD, 0x66, + }, + }, + .y = { + .data = { + 0x01, 
0x18, 0x39, 0x29, 0x6A, 0x78, 0x9A, 0x3B, + 0xC0, 0x04, 0x5C, 0x8A, 0x5F, 0xB4, 0x2C, 0x7D, + 0x1B, 0xD9, 0x98, 0xF5, 0x44, 0x49, 0x57, 0x9B, + 0x44, 0x68, 0x17, 0xAF, 0xBD, 0x17, 0x27, 0x3E, + 0x66, 0x2C, 0x97, 0xEE, 0x72, 0x99, 0x5E, 0xF4, + 0x26, 0x40, 0xC5, 0x50, 0xB9, 0x01, 0x3F, 0xAD, + 0x07, 0x61, 0x35, 0x3C, 0x70, 0x86, 0xA2, 0x72, + 0xC2, 0x40, 0x88, 0xBE, 0x94, 0x76, 0x9F, 0xD1, + 0x66, 0x50, + }, + }, + .n = { + .data = { + 0x01, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFA, 0x51, 0x86, 0x87, 0x83, 0xBF, 0x2F, + 0x96, 0x6B, 0x7F, 0xCC, 0x01, 0x48, 0xF7, 0x09, + 0xA5, 0xD0, 0x3B, 0xB5, 0xC9, 0xB8, 0x89, 0x9C, + 0x47, 0xAE, 0xBB, 0x6F, 0xB7, 0x1E, 0x91, 0x38, + 0x64, 0x09, + }, + }, + .p = { + .data = { + 0x01, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, + }, + }, + .a = { + .data = { + 0x01, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFC, + }, + }, + .b = { + .data = { + 0x00, 0x51, 0x95, 0x3E, 0xB9, 0x61, 0x8E, 0x1C, + 0x9A, 0x1F, 0x92, 0x9A, 0x21, 0xA0, 0xB6, 0x85, + 0x40, 0xEE, 0xA2, 0xDA, 0x72, 0x5B, 0x99, 0xB3, + 0x15, 0xF3, 0xB8, 0xB4, 0x89, 0x91, 0x8E, 0xF1, + 0x09, 0xE1, 0x56, 0x19, 
0x39, 0x51, 0xEC, 0x7E, + 0x93, 0x7B, 0x16, 0x52, 0xC0, 0xBD, 0x3B, 0xB1, + 0xBF, 0x07, 0x35, 0x73, 0xDF, 0x88, 0x3D, 0x2C, + 0x34, 0xF1, 0xEF, 0x45, 0x1F, 0xD4, 0x6B, 0x50, + 0x3F, 0x00, + }, + }, + .h = { + .data = { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x01, + }, + }, + } +}; + +static int __rte_unused +pick_curve(struct rte_crypto_asym_xform *xform) +{ + switch (xform->ec.curve_id) { + case RTE_CRYPTO_EC_GROUP_SECP256R1: + return SECP256R1; + case RTE_CRYPTO_EC_GROUP_SECP521R1: + return SECP521R1; + default: + return -1; + } +} + +#endif From patchwork Mon Feb 21 10:48:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 107894 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C640BA034E; Mon, 21 Feb 2022 11:48:57 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B113541150; Mon, 21 Feb 2022 11:48:47 +0100 (CET) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by mails.dpdk.org (Postfix) with ESMTP id AC1D741150 for ; Mon, 21 Feb 2022 11:48:45 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1645440525; x=1676976525; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=Y+tTYHhgs7YJ332AZEIgcNm/nXHxfNowf4Lced+kGPo=; 
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal
Subject: [PATCH v3 3/5] crypto/qat: add ecdsa algorithm
Date: Mon, 21 Feb 2022 10:48:29 +0000
Message-Id: <20220221104831.30149-4-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>
References: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>

This patch adds ECDSA algorithm to Intel QuickAssist Technology PMD.
Signed-off-by: Arek Kusztal --- doc/guides/cryptodevs/qat.rst | 1 + doc/guides/rel_notes/release_22_03.rst | 5 ++ drivers/common/qat/qat_adf/qat_pke.h | 40 +++++++++ drivers/crypto/qat/qat_asym.c | 148 +++++++++++++++++++++++++++++++++ drivers/crypto/qat/qat_asym.h | 4 + 5 files changed, 198 insertions(+) diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst index 452bc843c2..593c2471ed 100644 --- a/doc/guides/cryptodevs/qat.rst +++ b/doc/guides/cryptodevs/qat.rst @@ -175,6 +175,7 @@ The QAT ASYM PMD has support for: * ``RTE_CRYPTO_ASYM_XFORM_MODEX`` * ``RTE_CRYPTO_ASYM_XFORM_MODINV`` * ``RTE_CRYPTO_ASYM_XFORM_RSA`` +* ``RTE_CRYPTO_ASYM_XFORM_ECDSA`` Limitations ~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index ff3095d742..c060cb562a 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -149,6 +149,11 @@ New Features * Called ``rte_ipv4/6_udptcp_cksum_mbuf()`` functions in testpmd csum mode to support software UDP/TCP checksum over multiple segments. +* **Updated Intel QuickAssist Technology asymmetric crypto PMD.** + + * ECDSA algorithm is now supported by Intel QuickAssist + Technology asymmetric crypto PMD. 
+ Removed Items ------------- diff --git a/drivers/common/qat/qat_adf/qat_pke.h b/drivers/common/qat/qat_adf/qat_pke.h index 82bb1ee55e..1fe5f6bd8e 100644 --- a/drivers/common/qat/qat_adf/qat_pke.h +++ b/drivers/common/qat/qat_adf/qat_pke.h @@ -212,4 +212,44 @@ get_rsa_crt_function(struct rte_crypto_asym_xform *xform) return qat_function; } +static struct qat_asym_function +get_ecdsa_verify_function(struct rte_crypto_asym_xform *xform) +{ + struct qat_asym_function qat_function; + + switch (xform->ec.curve_id) { + case RTE_CRYPTO_EC_GROUP_SECP256R1: + qat_function.func_id = PKE_ECDSA_VERIFY_GFP_L256; + qat_function.bytesize = 32; + break; + case RTE_CRYPTO_EC_GROUP_SECP521R1: + qat_function.func_id = PKE_ECDSA_VERIFY_GFP_521; + qat_function.bytesize = 66; + break; + default: + qat_function.func_id = 0; + } + return qat_function; +} + +static struct qat_asym_function +get_ecdsa_function(struct rte_crypto_asym_xform *xform) +{ + struct qat_asym_function qat_function; + + switch (xform->ec.curve_id) { + case RTE_CRYPTO_EC_GROUP_SECP256R1: + qat_function.func_id = PKE_ECDSA_SIGN_RS_GFP_L256; + qat_function.bytesize = 32; + break; + case RTE_CRYPTO_EC_GROUP_SECP521R1: + qat_function.func_id = PKE_ECDSA_SIGN_RS_GFP_521; + qat_function.bytesize = 66; + break; + default: + qat_function.func_id = 0; + } + return qat_function; +} + #endif diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index 0a5831f531..24dd3ee57f 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -31,14 +31,24 @@ static const struct rte_driver cryptodev_qat_asym_driver = { .alias = qat_asym_drv_name }; +/* + * Macros with suffix _F are used with some of predefinded identifiers: + * - cookie->input_buffer + * - qat_alg_bytesize + */ #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define HEXDUMP(name, where, size) QAT_DP_HEXDUMP_LOG(DEBUG, name, \ where, size) #define HEXDUMP_OFF(name, where, size, idx) QAT_DP_HEXDUMP_LOG(DEBUG, name, \ &where[idx * size], 
size) + +#define HEXDUMP_OFF_F(name, idx) QAT_DP_HEXDUMP_LOG(DEBUG, name, \ + &cookie->input_buffer[idx * qat_alg_bytesize], \ + qat_alg_bytesize) #else #define HEXDUMP(name, where, size) #define HEXDUMP_OFF(name, where, size, idx) +#define HEXDUMP_OFF_F(name, idx) #endif #define CHECK_IF_NOT_EMPTY(param, name, pname, status) \ @@ -79,6 +89,17 @@ static const struct rte_driver cryptodev_qat_asym_driver = { what.data, \ how) +#define SET_PKE_LN_9A_F(what, idx) \ + rte_memcpy(&cookie->input_buffer[idx * qat_alg_bytesize] + \ + qat_alg_bytesize - what.length, \ + what.data, what.length) + +#define SET_PKE_LN_EC_F(what, how, idx) \ + rte_memcpy(&cookie->input_buffer[idx * \ + RTE_ALIGN_CEIL(how, 8)] + \ + RTE_ALIGN_CEIL(how, 8) - how, \ + what.data, how) + static void request_init(struct icp_qat_fw_pke_request *qat_req) { @@ -544,6 +565,128 @@ rsa_collect(struct rte_crypto_asym_op *asym_op, return RTE_CRYPTO_OP_STATUS_SUCCESS; } +static int +ecdsa_set_input(struct rte_crypto_asym_op *asym_op, + struct icp_qat_fw_pke_request *qat_req, + struct qat_asym_op_cookie *cookie, + struct rte_crypto_asym_xform *xform) +{ + struct qat_asym_function qat_function; + uint32_t alg_bytesize, qat_alg_bytesize, func_id; + int curve_id; + + curve_id = pick_curve(xform); + if (curve_id < 0) { + QAT_LOG(ERR, "Incorrect elliptic curve"); + return -EINVAL; + } + + switch (asym_op->ecdsa.op_type) { + case RTE_CRYPTO_ASYM_OP_SIGN: + qat_function = get_ecdsa_function(xform); + func_id = qat_function.func_id; + if (func_id == 0) { + QAT_LOG(ERR, "Cannot obtain functionality id"); + return -EINVAL; + } + alg_bytesize = qat_function.bytesize; + qat_alg_bytesize = RTE_ALIGN_CEIL(alg_bytesize, 8); + + SET_PKE_LN_9A_F(asym_op->ecdsa.pkey, 0); + SET_PKE_LN_9A_F(asym_op->ecdsa.message, 1); + SET_PKE_LN_9A_F(asym_op->ecdsa.k, 2); + SET_PKE_LN_EC_F(curve[curve_id].b, alg_bytesize, 3); + SET_PKE_LN_EC_F(curve[curve_id].a, alg_bytesize, 4); + SET_PKE_LN_EC_F(curve[curve_id].p, alg_bytesize, 5); + 
SET_PKE_LN_EC_F(curve[curve_id].n, alg_bytesize, 6); + SET_PKE_LN_EC_F(curve[curve_id].y, alg_bytesize, 7); + SET_PKE_LN_EC_F(curve[curve_id].x, alg_bytesize, 8); + + cookie->alg_bytesize = alg_bytesize; + qat_req->pke_hdr.cd_pars.func_id = func_id; + qat_req->input_param_count = + QAT_ASYM_ECDSA_RS_SIGN_IN_PARAMS; + qat_req->output_param_count = + QAT_ASYM_ECDSA_RS_SIGN_OUT_PARAMS; + + HEXDUMP_OFF_F("ECDSA d", 0); + HEXDUMP_OFF_F("ECDSA e", 1); + HEXDUMP_OFF_F("ECDSA k", 2); + HEXDUMP_OFF_F("ECDSA b", 3); + HEXDUMP_OFF_F("ECDSA a", 4); + HEXDUMP_OFF_F("ECDSA n", 5); + HEXDUMP_OFF_F("ECDSA y", 6); + HEXDUMP_OFF_F("ECDSA x", 7); + break; + case RTE_CRYPTO_ASYM_OP_VERIFY: + qat_function = get_ecdsa_verify_function(xform); + func_id = qat_function.func_id; + if (func_id == 0) { + QAT_LOG(ERR, "Cannot obtain functionality id"); + return -EINVAL; + } + alg_bytesize = qat_function.bytesize; + qat_alg_bytesize = RTE_ALIGN_CEIL(alg_bytesize, 8); + + SET_PKE_LN_9A_F(asym_op->ecdsa.message, 10); + SET_PKE_LN_9A_F(asym_op->ecdsa.s, 9); + SET_PKE_LN_9A_F(asym_op->ecdsa.r, 8); + SET_PKE_LN_EC_F(curve[curve_id].n, alg_bytesize, 7); + SET_PKE_LN_EC_F(curve[curve_id].x, alg_bytesize, 6); + SET_PKE_LN_EC_F(curve[curve_id].y, alg_bytesize, 5); + SET_PKE_LN_9A_F(asym_op->ecdsa.q.x, 4); + SET_PKE_LN_9A_F(asym_op->ecdsa.q.y, 3); + SET_PKE_LN_EC_F(curve[curve_id].a, alg_bytesize, 2); + SET_PKE_LN_EC_F(curve[curve_id].b, alg_bytesize, 1); + SET_PKE_LN_EC_F(curve[curve_id].p, alg_bytesize, 0); + + cookie->alg_bytesize = alg_bytesize; + qat_req->pke_hdr.cd_pars.func_id = func_id; + qat_req->input_param_count = + QAT_ASYM_ECDSA_RS_VERIFY_IN_PARAMS; + qat_req->output_param_count = + QAT_ASYM_ECDSA_RS_VERIFY_OUT_PARAMS; + + HEXDUMP_OFF_F("e", 0); + HEXDUMP_OFF_F("s", 1); + HEXDUMP_OFF_F("r", 2); + HEXDUMP_OFF_F("n", 3); + HEXDUMP_OFF_F("xG", 4); + HEXDUMP_OFF_F("yG", 5); + HEXDUMP_OFF_F("xQ", 6); + HEXDUMP_OFF_F("yQ", 7); + HEXDUMP_OFF_F("a", 8); + HEXDUMP_OFF_F("b", 9); + HEXDUMP_OFF_F("q", 
10); + break; + default: + return -1; + } + + return 0; +} + +static uint8_t +ecdsa_collect(struct rte_crypto_asym_op *asym_op, + struct qat_asym_op_cookie *cookie) +{ + uint32_t alg_bytesize = RTE_ALIGN_CEIL(cookie->alg_bytesize, 8); + + if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { + uint8_t *r = asym_op->ecdsa.r.data; + uint8_t *s = asym_op->ecdsa.s.data; + + asym_op->ecdsa.r.length = alg_bytesize; + asym_op->ecdsa.s.length = alg_bytesize; + rte_memcpy(r, cookie->output_array[0], alg_bytesize); + rte_memcpy(s, cookie->output_array[1], alg_bytesize); + HEXDUMP("R", cookie->output_array[0], + alg_bytesize); + HEXDUMP("S", cookie->output_array[1], + alg_bytesize); + } + return RTE_CRYPTO_OP_STATUS_SUCCESS; +} static int asym_set_input(struct rte_crypto_asym_op *asym_op, @@ -561,6 +704,9 @@ asym_set_input(struct rte_crypto_asym_op *asym_op, case RTE_CRYPTO_ASYM_XFORM_RSA: return rsa_set_input(asym_op, qat_req, cookie, xform); + case RTE_CRYPTO_ASYM_XFORM_ECDSA: + return ecdsa_set_input(asym_op, qat_req, + cookie, xform); default: QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform"); return -EINVAL; @@ -635,6 +781,8 @@ qat_asym_collect_response(struct rte_crypto_op *rx_op, return modinv_collect(asym_op, cookie, xform); case RTE_CRYPTO_ASYM_XFORM_RSA: return rsa_collect(asym_op, cookie); + case RTE_CRYPTO_ASYM_XFORM_ECDSA: + return ecdsa_collect(asym_op, cookie); default: QAT_LOG(ERR, "Not supported xform type"); return RTE_CRYPTO_OP_STATUS_ERROR; diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h index cb7102aa3b..5e926125f2 100644 --- a/drivers/crypto/qat/qat_asym.h +++ b/drivers/crypto/qat/qat_asym.h @@ -28,6 +28,10 @@ typedef uint64_t large_int_ptr; #define QAT_ASYM_RSA_NUM_IN_PARAMS 3 #define QAT_ASYM_RSA_NUM_OUT_PARAMS 1 #define QAT_ASYM_RSA_QT_NUM_IN_PARAMS 6 +#define QAT_ASYM_ECDSA_RS_SIGN_IN_PARAMS 1 +#define QAT_ASYM_ECDSA_RS_SIGN_OUT_PARAMS 2 +#define QAT_ASYM_ECDSA_RS_VERIFY_IN_PARAMS 1 +#define 
QAT_ASYM_ECDSA_RS_VERIFY_OUT_PARAMS 0 /** * helper function to add an asym capability
From patchwork Mon Feb 21 10:48:30 2022
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 107895
From: Arek Kusztal To:
dev@dpdk.org Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal Subject: [PATCH v3 4/5] crypto/qat: add ecpm algorithm Date: Mon, 21 Feb 2022 10:48:30 +0000 Message-Id: <20220221104831.30149-5-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com> References: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com> X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch adds Elliptic Curve Multiplication algorithm to Intel QuickAssist Technology PMD. Signed-off-by: Arek Kusztal --- doc/guides/cryptodevs/qat.rst | 1 + doc/guides/rel_notes/release_22_03.rst | 5 ++ drivers/common/qat/qat_adf/qat_pke.h | 20 ++++++++ drivers/crypto/qat/qat_asym.c | 85 +++++++++++++++++++++++++++++++++- drivers/crypto/qat/qat_asym.h | 2 + 5 files changed, 112 insertions(+), 1 deletion(-) diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst index 593c2471ed..785e041324 100644 --- a/doc/guides/cryptodevs/qat.rst +++ b/doc/guides/cryptodevs/qat.rst @@ -176,6 +176,7 @@ The QAT ASYM PMD has support for: * ``RTE_CRYPTO_ASYM_XFORM_MODINV`` * ``RTE_CRYPTO_ASYM_XFORM_RSA`` * ``RTE_CRYPTO_ASYM_XFORM_ECDSA`` +* ``RTE_CRYPTO_ASYM_XFORM_ECPM`` Limitations ~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index c060cb562a..dc26157819 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -154,6 +154,11 @@ New Features * ECDSA algorithm is now supported by Intel QuickAssist Technology asymmetric crypto PMD. +* **Updated Intel QuickAssist Technology asymmetric crypto PMD.** + + * ECPM algorithm is now supported by Intel QuickAssist + Technology asymmetric crypto PMD. 
+ Removed Items ------------- diff --git a/drivers/common/qat/qat_adf/qat_pke.h b/drivers/common/qat/qat_adf/qat_pke.h index 1fe5f6bd8e..092fc373de 100644 --- a/drivers/common/qat/qat_adf/qat_pke.h +++ b/drivers/common/qat/qat_adf/qat_pke.h @@ -252,4 +252,24 @@ get_ecdsa_function(struct rte_crypto_asym_xform *xform) return qat_function; } +static struct qat_asym_function +get_ecpm_function(struct rte_crypto_asym_xform *xform) +{ + struct qat_asym_function qat_function; + + switch (xform->ec.curve_id) { + case RTE_CRYPTO_EC_GROUP_SECP256R1: + qat_function.func_id = MATHS_POINT_MULTIPLICATION_GFP_L256; + qat_function.bytesize = 32; + break; + case RTE_CRYPTO_EC_GROUP_SECP521R1: + qat_function.func_id = MATHS_POINT_MULTIPLICATION_GFP_521; + qat_function.bytesize = 66; + break; + default: + qat_function.func_id = 0; + } + return qat_function; +} + #endif diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index 24dd3ee57f..af96cc6bd2 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -83,7 +83,7 @@ static const struct rte_driver cryptodev_qat_asym_driver = { what.length) #define SET_PKE_LN_EC(where, what, how, idx) \ - rte_memcpy(&where[idx * RTE_ALIGN_CEIL(how, 8)] + \ + rte_memcpy(where[idx] + \ RTE_ALIGN_CEIL(how, 8) - \ how, \ what.data, \ @@ -689,6 +689,84 @@ ecdsa_collect(struct rte_crypto_asym_op *asym_op, } static int +ecpm_set_input(struct rte_crypto_asym_op *asym_op, + struct icp_qat_fw_pke_request *qat_req, + struct qat_asym_op_cookie *cookie, + struct rte_crypto_asym_xform *xform) +{ + struct qat_asym_function qat_function; + uint32_t alg_bytesize, __rte_unused qat_alg_bytesize, func_id; + int curve_id; + + curve_id = pick_curve(xform); + if (curve_id < 0) { + QAT_LOG(ERR, "Incorrect elliptic curve"); + return -EINVAL; + } + + qat_function = get_ecpm_function(xform); + func_id = qat_function.func_id; + if (func_id == 0) { + QAT_LOG(ERR, "Cannot obtain functionality id"); + return -EINVAL; + } + 
alg_bytesize = qat_function.bytesize; + qat_alg_bytesize = RTE_ALIGN_CEIL(alg_bytesize, 8); + + SET_PKE_LN(cookie->input_array, asym_op->ecpm.scalar, + alg_bytesize, 0); + SET_PKE_LN(cookie->input_array, asym_op->ecpm.p.x, + alg_bytesize, 1); + SET_PKE_LN(cookie->input_array, asym_op->ecpm.p.y, + alg_bytesize, 2); + SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].a, + alg_bytesize, 3); + SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].b, + alg_bytesize, 4); + SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].p, + alg_bytesize, 5); + SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].h, + alg_bytesize, 6); + + cookie->alg_bytesize = alg_bytesize; + qat_req->pke_hdr.cd_pars.func_id = func_id; + qat_req->input_param_count = + QAT_ASYM_ECPM_IN_PARAMS; + qat_req->output_param_count = + QAT_ASYM_ECPM_OUT_PARAMS; + + HEXDUMP("k", cookie->input_array[0], qat_alg_bytesize); + HEXDUMP("xG", cookie->input_array[1], qat_alg_bytesize); + HEXDUMP("yG", cookie->input_array[2], qat_alg_bytesize); + HEXDUMP("a", cookie->input_array[3], qat_alg_bytesize); + HEXDUMP("b", cookie->input_array[4], qat_alg_bytesize); + HEXDUMP("q", cookie->input_array[5], qat_alg_bytesize); + HEXDUMP("h", cookie->input_array[6], qat_alg_bytesize); + + return 0; +} + +static uint8_t +ecpm_collect(struct rte_crypto_asym_op *asym_op, + struct qat_asym_op_cookie *cookie) +{ + uint8_t *r = asym_op->ecpm.r.x.data; + uint8_t *s = asym_op->ecpm.r.y.data; + uint32_t alg_bytesize = cookie->alg_bytesize; + + asym_op->ecpm.r.x.length = alg_bytesize; + asym_op->ecpm.r.y.length = alg_bytesize; + rte_memcpy(r, cookie->output_array[0], alg_bytesize); + rte_memcpy(s, cookie->output_array[1], alg_bytesize); + + HEXDUMP("rX", cookie->output_array[0], + alg_bytesize); + HEXDUMP("rY", cookie->output_array[1], + alg_bytesize); + return RTE_CRYPTO_OP_STATUS_SUCCESS; +} + +static int asym_set_input(struct rte_crypto_asym_op *asym_op, struct icp_qat_fw_pke_request *qat_req, struct qat_asym_op_cookie *cookie, @@ -707,6 
+785,9 @@ asym_set_input(struct rte_crypto_asym_op *asym_op, case RTE_CRYPTO_ASYM_XFORM_ECDSA: return ecdsa_set_input(asym_op, qat_req, cookie, xform); + case RTE_CRYPTO_ASYM_XFORM_ECPM: + return ecpm_set_input(asym_op, qat_req, + cookie, xform); default: QAT_LOG(ERR, "Invalid/unsupported asymmetric crypto xform"); return -EINVAL; @@ -783,6 +864,8 @@ qat_asym_collect_response(struct rte_crypto_op *rx_op, return rsa_collect(asym_op, cookie); case RTE_CRYPTO_ASYM_XFORM_ECDSA: return ecdsa_collect(asym_op, cookie); + case RTE_CRYPTO_ASYM_XFORM_ECPM: + return ecpm_collect(asym_op, cookie); default: QAT_LOG(ERR, "Not supported xform type"); return RTE_CRYPTO_OP_STATUS_ERROR; diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h index 5e926125f2..6267c53bea 100644 --- a/drivers/crypto/qat/qat_asym.h +++ b/drivers/crypto/qat/qat_asym.h @@ -32,6 +32,8 @@ typedef uint64_t large_int_ptr; #define QAT_ASYM_ECDSA_RS_SIGN_OUT_PARAMS 2 #define QAT_ASYM_ECDSA_RS_VERIFY_IN_PARAMS 1 #define QAT_ASYM_ECDSA_RS_VERIFY_OUT_PARAMS 0 +#define QAT_ASYM_ECPM_IN_PARAMS 7 +#define QAT_ASYM_ECPM_OUT_PARAMS 2 /** * helper function to add an asym capability
From patchwork Mon Feb 21 10:48:31 2022
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 107896
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, roy.fan.zhang@intel.com, Arek Kusztal
Subject: [PATCH v3 5/5] crypto/qat: refactor asymmetric session
Date: Mon, 21 Feb 2022 10:48:31 +0000
Message-Id: <20220221104831.30149-6-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>
References: <20220221104831.30149-1-arkadiuszx.kusztal@intel.com>

This patch refactors the asymmetric crypto session in the Intel QuickAssist Technology PMD and fixes some issues with the xform handling. The code is now a bit more scalable and easier to read.
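The session_set_* helpers introduced by this patch all follow one pattern: deep-copy each xform parameter (data pointer plus length) into session-owned memory, so the application's buffers need not outlive session creation. A standalone sketch of that per-field pattern, with `malloc`/`memcpy` standing in for `rte_malloc`/`rte_memcpy` and a simplified parameter struct (names are illustrative, not the PMD's):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for rte_crypto_param: a pointer plus length. */
struct big_param {
	uint8_t *data;
	size_t length;
};

/* Deep-copy one parameter into session-owned memory -- the step
 * session_set_modexp()/session_set_rsa() repeat for every field. */
static int param_copy(struct big_param *dst, const struct big_param *src)
{
	dst->data = malloc(src->length);
	if (dst->data == NULL)
		return -1;	/* caller unwinds previously copied fields */
	dst->length = src->length;
	memcpy(dst->data, src->data, src->length);
	return 0;
}
```

On a mid-sequence allocation failure the caller must free whatever was already copied, which is what the err: unwind label in session_set_rsa() below is for.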
Signed-off-by: Arek Kusztal --- drivers/crypto/qat/qat_asym.c | 423 +++++++++++++++++++++++++++++++++++------- drivers/crypto/qat/qat_asym.h | 2 +- drivers/crypto/qat/qat_ec.h | 4 +- 3 files changed, 364 insertions(+), 65 deletions(-) diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index af96cc6bd2..badf018f13 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -647,17 +647,17 @@ ecdsa_set_input(struct rte_crypto_asym_op *asym_op, qat_req->output_param_count = QAT_ASYM_ECDSA_RS_VERIFY_OUT_PARAMS; - HEXDUMP_OFF_F("e", 0); - HEXDUMP_OFF_F("s", 1); - HEXDUMP_OFF_F("r", 2); - HEXDUMP_OFF_F("n", 3); - HEXDUMP_OFF_F("xG", 4); + HEXDUMP_OFF_F("p", 0); + HEXDUMP_OFF_F("b", 1); + HEXDUMP_OFF_F("a", 2); + HEXDUMP_OFF_F("y", 3); + HEXDUMP_OFF_F("x", 4); HEXDUMP_OFF_F("yG", 5); - HEXDUMP_OFF_F("xQ", 6); - HEXDUMP_OFF_F("yQ", 7); - HEXDUMP_OFF_F("a", 8); - HEXDUMP_OFF_F("b", 9); - HEXDUMP_OFF_F("q", 10); + HEXDUMP_OFF_F("xG", 6); + HEXDUMP_OFF_F("n", 7); + HEXDUMP_OFF_F("r", 8); + HEXDUMP_OFF_F("s", 9); + HEXDUMP_OFF_F("e", 10); break; default: return -1; @@ -670,7 +670,9 @@ static uint8_t ecdsa_collect(struct rte_crypto_asym_op *asym_op, struct qat_asym_op_cookie *cookie) { - uint32_t alg_bytesize = RTE_ALIGN_CEIL(cookie->alg_bytesize, 8); + uint32_t alg_bytesize = cookie->alg_bytesize; + uint32_t qat_alg_bytesize = RTE_ALIGN_CEIL(cookie->alg_bytesize, 8); + uint32_t ltrim = qat_alg_bytesize - alg_bytesize; if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) { uint8_t *r = asym_op->ecdsa.r.data; @@ -678,8 +680,9 @@ ecdsa_collect(struct rte_crypto_asym_op *asym_op, asym_op->ecdsa.r.length = alg_bytesize; asym_op->ecdsa.s.length = alg_bytesize; - rte_memcpy(r, cookie->output_array[0], alg_bytesize); - rte_memcpy(s, cookie->output_array[1], alg_bytesize); + rte_memcpy(r, &cookie->output_array[0][ltrim], alg_bytesize); + rte_memcpy(s, &cookie->output_array[1][ltrim], alg_bytesize); + HEXDUMP("R", cookie->output_array[0], 
alg_bytesize); HEXDUMP("S", cookie->output_array[1], @@ -713,19 +716,19 @@ ecpm_set_input(struct rte_crypto_asym_op *asym_op, alg_bytesize = qat_function.bytesize; qat_alg_bytesize = RTE_ALIGN_CEIL(alg_bytesize, 8); - SET_PKE_LN(cookie->input_array, asym_op->ecpm.scalar, - alg_bytesize, 0); - SET_PKE_LN(cookie->input_array, asym_op->ecpm.p.x, - alg_bytesize, 1); - SET_PKE_LN(cookie->input_array, asym_op->ecpm.p.y, - alg_bytesize, 2); - SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].a, + SET_PKE_LN_EC(cookie->input_array, asym_op->ecpm.scalar, + asym_op->ecpm.scalar.length, 0); + SET_PKE_LN_EC(cookie->input_array, asym_op->ecpm.p.x, + asym_op->ecpm.p.x.length, 1); + SET_PKE_LN_EC(cookie->input_array, asym_op->ecpm.p.y, + asym_op->ecpm.p.y.length, 2); + SET_PKE_LN_EC(cookie->input_array, curve[curve_id].a, alg_bytesize, 3); - SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].b, + SET_PKE_LN_EC(cookie->input_array, curve[curve_id].b, alg_bytesize, 4); - SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].p, + SET_PKE_LN_EC(cookie->input_array, curve[curve_id].p, alg_bytesize, 5); - SET_PKE_LN_EC(cookie->input_array, curve[SECP256R1].h, + SET_PKE_LN_EC(cookie->input_array, curve[curve_id].h, alg_bytesize, 6); cookie->alg_bytesize = alg_bytesize; @@ -750,14 +753,16 @@ static uint8_t ecpm_collect(struct rte_crypto_asym_op *asym_op, struct qat_asym_op_cookie *cookie) { - uint8_t *r = asym_op->ecpm.r.x.data; - uint8_t *s = asym_op->ecpm.r.y.data; + uint8_t *x = asym_op->ecpm.r.x.data; + uint8_t *y = asym_op->ecpm.r.y.data; uint32_t alg_bytesize = cookie->alg_bytesize; + uint32_t qat_alg_bytesize = RTE_ALIGN_CEIL(cookie->alg_bytesize, 8); + uint32_t ltrim = qat_alg_bytesize - alg_bytesize; asym_op->ecpm.r.x.length = alg_bytesize; asym_op->ecpm.r.y.length = alg_bytesize; - rte_memcpy(r, cookie->output_array[0], alg_bytesize); - rte_memcpy(s, cookie->output_array[1], alg_bytesize); + rte_memcpy(x, &cookie->output_array[0][ltrim], alg_bytesize); + rte_memcpy(y, 
&cookie->output_array[1][ltrim], alg_bytesize); HEXDUMP("rX", cookie->output_array[0], alg_bytesize); @@ -806,29 +811,37 @@ qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, (struct icp_qat_fw_pke_request *)out_msg; struct qat_asym_op_cookie *cookie = (struct qat_asym_op_cookie *)op_cookie; + struct rte_crypto_asym_xform *xform; + struct qat_asym_session *qat_session = (struct qat_asym_session *) + op->asym->session->sess_private_data; int err = 0; + if (unlikely(qat_session == NULL)) { + QAT_DP_LOG(ERR, "Session was not created for this device"); + goto error; + } + op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; switch (op->sess_type) { case RTE_CRYPTO_OP_WITH_SESSION: - QAT_LOG(ERR, - "QAT asymmetric crypto PMD does not support session" - ); - goto error; + request_init(qat_req); + xform = &qat_session->xform; + break; case RTE_CRYPTO_OP_SESSIONLESS: request_init(qat_req); - err = asym_set_input(asym_op, qat_req, cookie, - op->asym->xform); - if (err) { - op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; - goto error; - } + xform = op->asym->xform; break; default: QAT_DP_LOG(ERR, "Invalid session/xform settings"); op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION; goto error; } + err = asym_set_input(asym_op, qat_req, cookie, + xform); + if (err) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + goto error; + } qat_req->pke_mid.opaque = (uint64_t)(uintptr_t)op; qat_req->pke_mid.src_data_addr = cookie->input_addr; @@ -849,11 +862,11 @@ qat_asym_build_request(void *in_op, uint8_t *out_msg, void *op_cookie, } static uint8_t -qat_asym_collect_response(struct rte_crypto_op *rx_op, +qat_asym_collect_response(struct rte_crypto_op *op, struct qat_asym_op_cookie *cookie, struct rte_crypto_asym_xform *xform) { - struct rte_crypto_asym_op *asym_op = rx_op->asym; + struct rte_crypto_asym_op *asym_op = op->asym; switch (xform->xform_type) { case RTE_CRYPTO_ASYM_XFORM_MODEX: @@ -873,69 +886,355 @@ qat_asym_collect_response(struct rte_crypto_op *rx_op, 
} static int -qat_asym_process_response(void **op, uint8_t *resp, +qat_asym_process_response(void **out_op, uint8_t *resp, void *op_cookie, __rte_unused uint64_t *dequeue_err_count) { struct icp_qat_fw_pke_resp *resp_msg = (struct icp_qat_fw_pke_resp *)resp; - struct rte_crypto_op *rx_op = (struct rte_crypto_op *)(uintptr_t) + struct rte_crypto_op *op = (struct rte_crypto_op *)(uintptr_t) (resp_msg->opaque); struct qat_asym_op_cookie *cookie = op_cookie; + struct rte_crypto_asym_xform *xform; + struct qat_asym_session *qat_session = (struct qat_asym_session *) + op->asym->session->sess_private_data; if (cookie->error) { cookie->error = 0; - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) - rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; + if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; QAT_DP_LOG(ERR, "Cookie status returned error"); } else { if (ICP_QAT_FW_PKE_RESP_PKE_STAT_GET( resp_msg->pke_resp_hdr.resp_status.pke_resp_flags)) { - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) - rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; + if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; QAT_DP_LOG(ERR, "Asymmetric response status" " returned error"); } if (resp_msg->pke_resp_hdr.resp_status.comn_err_code) { - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) - rx_op->status = RTE_CRYPTO_OP_STATUS_ERROR; + if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; QAT_DP_LOG(ERR, "Asymmetric common status" " returned error"); } } - if (rx_op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { - rx_op->status = qat_asym_collect_response(rx_op, - cookie, rx_op->asym->xform); - cleanup(cookie, rx_op->asym->xform, - cookie->alg_bytesize); + + switch (op->sess_type) { + case RTE_CRYPTO_OP_WITH_SESSION: + xform = &qat_session->xform; + break; + case RTE_CRYPTO_OP_SESSIONLESS: + xform = op->asym->xform; + break; + default: + 
QAT_DP_LOG(ERR, + "Invalid session/xform settings in response ring!"); + op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION; + } + + if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) { + op->status = qat_asym_collect_response(op, + cookie, xform); + cleanup(cookie, xform, cookie->alg_bytesize); } - *op = rx_op; + *out_op = op; HEXDUMP("resp_msg:", resp_msg, sizeof(struct icp_qat_fw_pke_resp)); return 1; } +static int +session_set_modexp(struct qat_asym_session *qat_session, + struct rte_crypto_asym_xform *xform) +{ + uint8_t *modulus = xform->modex.modulus.data; + uint8_t *exponent = xform->modex.exponent.data; + + qat_session->xform.modex.modulus.data = + rte_malloc(NULL, xform->modex.modulus.length, 0); + if (qat_session->xform.modex.modulus.data == NULL) + return -ENOMEM; + qat_session->xform.modex.modulus.length = xform->modex.modulus.length; + qat_session->xform.modex.exponent.data = rte_malloc(NULL, + xform->modex.exponent.length, 0); + if (qat_session->xform.modex.exponent.data == NULL) { + rte_free(qat_session->xform.modex.modulus.data); + return -ENOMEM; + } + qat_session->xform.modex.exponent.length = xform->modex.exponent.length; + + rte_memcpy(qat_session->xform.modex.modulus.data, modulus, + xform->modex.modulus.length); + rte_memcpy(qat_session->xform.modex.exponent.data, exponent, + xform->modex.exponent.length); + + return 0; +} + +static int +session_set_modinv(struct qat_asym_session *qat_session, + struct rte_crypto_asym_xform *xform) +{ + uint8_t *modulus = xform->modinv.modulus.data; + + qat_session->xform.modinv.modulus.data = + rte_malloc(NULL, xform->modinv.modulus.length, 0); + if (qat_session->xform.modinv.modulus.data == NULL) + return -ENOMEM; + qat_session->xform.modinv.modulus.length = xform->modinv.modulus.length; + + rte_memcpy(qat_session->xform.modinv.modulus.data, modulus, + xform->modinv.modulus.length); + + return 0; +} + +static int +session_set_rsa(struct qat_asym_session *qat_session, + struct rte_crypto_asym_xform 
*xform) +{ + uint8_t *n = xform->rsa.n.data; + uint8_t *e = xform->rsa.e.data; + int ret = 0; + + qat_session->xform.rsa.key_type = xform->rsa.key_type; + + qat_session->xform.rsa.n.data = + rte_malloc(NULL, xform->rsa.n.length, 0); + if (qat_session->xform.rsa.n.data == NULL) + return -ENOMEM; + qat_session->xform.rsa.n.length = + xform->rsa.n.length; + + qat_session->xform.rsa.e.data = + rte_malloc(NULL, xform->rsa.e.length, 0); + if (qat_session->xform.rsa.e.data == NULL) { + ret = -ENOMEM; + goto err; + } + qat_session->xform.rsa.e.length = + xform->rsa.e.length; + + if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_QT) { + uint8_t *p = xform->rsa.qt.p.data; + uint8_t *q = xform->rsa.qt.q.data; + uint8_t *dP = xform->rsa.qt.dP.data; + uint8_t *dQ = xform->rsa.qt.dQ.data; + uint8_t *qInv = xform->rsa.qt.qInv.data; + + qat_session->xform.rsa.qt.p.data = + rte_malloc(NULL, xform->rsa.qt.p.length, 0); + if (qat_session->xform.rsa.qt.p.data == NULL) { + ret = -ENOMEM; + goto err; + } + qat_session->xform.rsa.qt.p.length = + xform->rsa.qt.p.length; + + qat_session->xform.rsa.qt.q.data = + rte_malloc(NULL, xform->rsa.qt.q.length, 0); + if (qat_session->xform.rsa.qt.q.data == NULL) { + ret = -ENOMEM; + goto err; + } + qat_session->xform.rsa.qt.q.length = + xform->rsa.qt.q.length; + + qat_session->xform.rsa.qt.dP.data = + rte_malloc(NULL, xform->rsa.qt.dP.length, 0); + if (qat_session->xform.rsa.qt.dP.data == NULL) { + ret = -ENOMEM; + goto err; + } + qat_session->xform.rsa.qt.dP.length = + xform->rsa.qt.dP.length; + + qat_session->xform.rsa.qt.dQ.data = + rte_malloc(NULL, xform->rsa.qt.dQ.length, 0); + if (qat_session->xform.rsa.qt.dQ.data == NULL) { + ret = -ENOMEM; + goto err; + } + qat_session->xform.rsa.qt.dQ.length = + xform->rsa.qt.dQ.length; + + qat_session->xform.rsa.qt.qInv.data = + rte_malloc(NULL, xform->rsa.qt.qInv.length, 0); + if (qat_session->xform.rsa.qt.qInv.data == NULL) { + ret = -ENOMEM; + goto err; + } + qat_session->xform.rsa.qt.qInv.length = + 
				xform->rsa.qt.qInv.length;
+
+		rte_memcpy(qat_session->xform.rsa.qt.p.data, p,
+				xform->rsa.qt.p.length);
+		rte_memcpy(qat_session->xform.rsa.qt.q.data, q,
+				xform->rsa.qt.q.length);
+		rte_memcpy(qat_session->xform.rsa.qt.dP.data, dP,
+				xform->rsa.qt.dP.length);
+		rte_memcpy(qat_session->xform.rsa.qt.dQ.data, dQ,
+				xform->rsa.qt.dQ.length);
+		rte_memcpy(qat_session->xform.rsa.qt.qInv.data, qInv,
+				xform->rsa.qt.qInv.length);
+
+	} else {
+		uint8_t *d = xform->rsa.d.data;
+
+		qat_session->xform.rsa.d.data =
+			rte_malloc(NULL, xform->rsa.d.length, 0);
+		if (qat_session->xform.rsa.d.data == NULL) {
+			ret = -ENOMEM;
+			goto err;
+		}
+		qat_session->xform.rsa.d.length =
+			xform->rsa.d.length;
+		rte_memcpy(qat_session->xform.rsa.d.data, d,
+			xform->rsa.d.length);
+	}
+
+	rte_memcpy(qat_session->xform.rsa.n.data, n,
+			xform->rsa.n.length);
+	rte_memcpy(qat_session->xform.rsa.e.data, e,
+			xform->rsa.e.length);
+
+	return 0;
+
+err:
+	rte_free(qat_session->xform.rsa.n.data);
+	rte_free(qat_session->xform.rsa.e.data);
+	rte_free(qat_session->xform.rsa.d.data);
+	rte_free(qat_session->xform.rsa.qt.p.data);
+	rte_free(qat_session->xform.rsa.qt.q.data);
+	rte_free(qat_session->xform.rsa.qt.dP.data);
+	rte_free(qat_session->xform.rsa.qt.dQ.data);
+	rte_free(qat_session->xform.rsa.qt.qInv.data);
+	return ret;
+}
+
+static void
+session_set_ecdsa(struct qat_asym_session *qat_session,
+		struct rte_crypto_asym_xform *xform)
+{
+	qat_session->xform.ec.curve_id = xform->ec.curve_id;
+}
+
 int
 qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
-		struct rte_crypto_asym_xform *xform __rte_unused,
-		struct rte_cryptodev_asym_session *sess __rte_unused)
+		struct rte_crypto_asym_xform *xform,
+		struct rte_cryptodev_asym_session *session)
 {
-	QAT_LOG(ERR, "QAT asymmetric PMD currently does not support session");
-	return -ENOTSUP;
+	struct qat_asym_session *qat_session;
+	int ret = 0;
+
+	qat_session = (struct qat_asym_session *) session->sess_private_data;
+	memset(qat_session, 0, sizeof(*qat_session));
+
+	qat_session->xform.xform_type = xform->xform_type;
+	switch (xform->xform_type) {
+	case RTE_CRYPTO_ASYM_XFORM_MODEX:
+		ret = session_set_modexp(qat_session, xform);
+		break;
+	case RTE_CRYPTO_ASYM_XFORM_MODINV:
+		ret = session_set_modinv(qat_session, xform);
+		break;
+	case RTE_CRYPTO_ASYM_XFORM_RSA:
+		ret = session_set_rsa(qat_session, xform);
+		break;
+	case RTE_CRYPTO_ASYM_XFORM_ECDSA:
+	case RTE_CRYPTO_ASYM_XFORM_ECPM:
+		session_set_ecdsa(qat_session, xform);
+		break;
+	default:
+		ret = -ENOTSUP;
+	}
+
+	if (ret) {
+		QAT_LOG(ERR, "Unsupported xform type");
+		return ret;
+	}
+
+	return 0;
 }
 
 unsigned int
 qat_asym_session_get_private_size(struct rte_cryptodev *dev __rte_unused)
 {
-	QAT_LOG(ERR, "QAT asymmetric PMD currently does not support session");
-	return 0;
+	return RTE_ALIGN_CEIL(sizeof(struct qat_asym_session), 8);
+}
+
+static void
+session_clear_modexp(struct rte_crypto_modex_xform *modex)
+{
+	memset(modex->modulus.data, 0, modex->modulus.length);
+	rte_free(modex->modulus.data);
+	memset(modex->exponent.data, 0, modex->exponent.length);
+	rte_free(modex->exponent.data);
+}
+
+static void
+session_clear_modinv(struct rte_crypto_modinv_xform *modinv)
+{
+	memset(modinv->modulus.data, 0, modinv->modulus.length);
+	rte_free(modinv->modulus.data);
+}
+
+static void
+session_clear_rsa(struct rte_crypto_rsa_xform *rsa)
+{
+	memset(rsa->n.data, 0, rsa->n.length);
+	rte_free(rsa->n.data);
+	memset(rsa->e.data, 0, rsa->e.length);
+	rte_free(rsa->e.data);
+	if (rsa->key_type == RTE_RSA_KEY_TYPE_EXP) {
+		memset(rsa->d.data, 0, rsa->d.length);
+		rte_free(rsa->d.data);
+	} else {
+		memset(rsa->qt.p.data, 0, rsa->qt.p.length);
+		rte_free(rsa->qt.p.data);
+		memset(rsa->qt.q.data, 0, rsa->qt.q.length);
+		rte_free(rsa->qt.q.data);
+		memset(rsa->qt.dP.data, 0, rsa->qt.dP.length);
+		rte_free(rsa->qt.dP.data);
+		memset(rsa->qt.dQ.data, 0, rsa->qt.dQ.length);
+		rte_free(rsa->qt.dQ.data);
+		memset(rsa->qt.qInv.data, 0, rsa->qt.qInv.length);
+		rte_free(rsa->qt.qInv.data);
+	}
+}
+
+static void
+session_clear_xform(struct qat_asym_session *qat_session)
+{
+	switch (qat_session->xform.xform_type) {
+	case RTE_CRYPTO_ASYM_XFORM_MODEX:
+		session_clear_modexp(&qat_session->xform.modex);
+		break;
+	case RTE_CRYPTO_ASYM_XFORM_MODINV:
+		session_clear_modinv(&qat_session->xform.modinv);
+		break;
+	case RTE_CRYPTO_ASYM_XFORM_RSA:
+		session_clear_rsa(&qat_session->xform.rsa);
+		break;
+	default:
+		break;
+	}
 }
 
 void
-qat_asym_session_clear(struct rte_cryptodev *dev __rte_unused,
-		struct rte_cryptodev_asym_session *sess __rte_unused)
+qat_asym_session_clear(struct rte_cryptodev *dev,
+		struct rte_cryptodev_asym_session *session)
 {
-	QAT_LOG(ERR, "QAT asymmetric PMD currently does not support session");
+	void *sess_priv = session->sess_private_data;
+	struct qat_asym_session *qat_session =
+		(struct qat_asym_session *)sess_priv;
+
+	if (sess_priv) {
+		session_clear_xform(qat_session);
+		memset(qat_session, 0, qat_asym_session_get_private_size(dev));
+	}
 }
 
 static uint16_t
diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h
index 6267c53bea..b1d403486f 100644
--- a/drivers/crypto/qat/qat_asym.h
+++ b/drivers/crypto/qat/qat_asym.h
@@ -73,7 +73,7 @@ struct qat_asym_op_cookie {
 
 struct qat_asym_session {
 	struct icp_qat_fw_pke_request req_tmpl;
-	struct rte_crypto_asym_xform *xform;
+	struct rte_crypto_asym_xform xform;
 };
 
 static inline void
diff --git a/drivers/crypto/qat/qat_ec.h b/drivers/crypto/qat/qat_ec.h
index c2f3ce93d2..a310e3f4d3 100644
--- a/drivers/crypto/qat/qat_ec.h
+++ b/drivers/crypto/qat/qat_ec.h
@@ -31,7 +31,7 @@ struct elliptic_curve {
 	buffer h;
 };
 
-static struct elliptic_curve __rte_unused curve[] = {
+static struct elliptic_curve curve[] = {
 	[SECP256R1] = {
 		.name = "secp256r1",
 		.bytesize = 32,
@@ -190,7 +190,7 @@ static struct elliptic_curve __rte_unused curve[] = {
 	}
 };
 
-static int __rte_unused
+static int
 pick_curve(struct rte_crypto_asym_xform *xform)
 {
 	switch (xform->ec.curve_id) {