From patchwork Fri Oct 1 16:59:53 2021
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 100336
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: gakhil@marvell.com, Fan Zhang, Arek Kusztal, Kai Ji
Date: Fri, 1 Oct 2021 17:59:53 +0100
Message-Id: <20211001165954.717846-10-roy.fan.zhang@intel.com>
In-Reply-To: <20211001165954.717846-1-roy.fan.zhang@intel.com>
References: <20210901144729.26784-1-arkadiuszx.kusztal@intel.com> <20211001165954.717846-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 09/10] crypto/qat: add gen specific implementation

This patch replaces the mixed QAT symmetric and asymmetric support
implementation with separate files that provide shared or
generation-specific implementations for each QAT generation.
Signed-off-by: Arek Kusztal
Signed-off-by: Fan Zhang
Signed-off-by: Kai Ji
---
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 125 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_asym_pmd.h            |   1 +
 drivers/crypto/qat/qat_crypto.h              |   3 -
 9 files changed, 915 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..61250fe433
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include
+#include
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure = qat_cryptodev_config,
+	.dev_start = qat_cryptodev_start,
+	.dev_stop = qat_cryptodev_stop,
+	.dev_close = qat_cryptodev_close,
+	.dev_infos_get = qat_cryptodev_info_get,
+
+	.stats_get = qat_cryptodev_stats_get,
+	.stats_reset = qat_cryptodev_stats_reset,
+	.queue_pair_setup = qat_cryptodev_qp_setup,
+	.queue_pair_release = qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size = qat_asym_session_get_private_size,
+	.asym_session_configure = qat_asym_session_configure,
+	.asym_session_clear = qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX, \
+		0, 1, 512, 1), \
+	QAT_ASYM_CAP(MODINV, \
+		0, 1, 512, 1), \
+	QAT_ASYM_CAP(RSA, \
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), \
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+		struct qat_pci_device
*qat_dev __rte_unused) +{ + uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_HW_ACCELERATED | + RTE_CRYPTODEV_FF_ASYM_SESSIONLESS | + RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP | + RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT; + + return feature_flags; +} + +RTE_INIT(qat_asym_crypto_gen1_init) +{ + qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops = + &qat_asym_crypto_ops_gen1; + qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities = + qat_asym_crypto_cap_get_gen1; + qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags = + qat_asym_crypto_feature_flags_get_gen1; +} diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c new file mode 100644 index 0000000000..8611ef6864 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -0,0 +1,224 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2017-2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_session.h" +#include "qat_sym.h" +#include "qat_asym.h" +#include "qat_crypto.h" +#include "qat_crypto_pmd_gens.h" + +#define MIXED_CRYPTO_MIN_FW_VER 0x04090000 + +static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = { + QAT_SYM_PLAIN_AUTH_CAP(SHA1, \ + CAP_SET(block_size, 64), \ + CAP_RNG(digest_size, 1, 20, 1)), \ + QAT_SYM_AEAD_CAP(AES_GCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AEAD_CAP(AES_CCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \ + CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \ + QAT_SYM_AUTH_CAP(AES_GMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AUTH_CAP(AES_CMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA224, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA1_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA224_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384_HMAC, \ + CAP_SET(block_size, 128), \ + CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512_HMAC, \ + CAP_SET(block_size, 128), \ + CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \ + 
CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(MD5_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AUTH_CAP(KASUMI_F9, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(AES_CBC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_CTR, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_XTS, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_DOCSISBPI, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(SNOW3G_UEA2, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(KASUMI_F8, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(3DES_CBC, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(3DES_CTR, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(DES_CBC, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(DES_DOCSISBPI, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(ZUC_EEA3, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AUTH_CAP(ZUC_EIA3, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +static int +qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, int socket_id) +{ + struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private; + struct qat_qp *qp; + int ret; + + if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) { + /* Some error there */ + return -1; + } + + qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]; + ret = qat_cq_get_fw_version(qp); + if (ret < 0) { + qat_cryptodev_qp_release(dev, qp_id); + return ret; + } + + if (ret != 0) + QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d", + (ret >> 24) & 0xff, + (ret >> 16) & 0xff, + (ret >> 8) & 0xff); + else + QAT_LOG(DEBUG, "unknown QAT firmware version"); + + /* set capabilities based on the fw version */ + qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID | + ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? 
+ QAT_SYM_CAP_MIXED_CRYPTO : 0); + return 0; +} + +struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = { + + /* Device related operations */ + .dev_configure = qat_cryptodev_config, + .dev_start = qat_cryptodev_start, + .dev_stop = qat_cryptodev_stop, + .dev_close = qat_cryptodev_close, + .dev_infos_get = qat_cryptodev_info_get, + + .stats_get = qat_cryptodev_stats_get, + .stats_reset = qat_cryptodev_stats_reset, + .queue_pair_setup = qat_sym_crypto_qp_setup_gen2, + .queue_pair_release = qat_cryptodev_qp_release, + + /* Crypto related operations */ + .sym_session_get_size = qat_sym_session_get_private_size, + .sym_session_configure = qat_sym_session_configure, + .sym_session_clear = qat_sym_session_clear, + + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, + .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, +}; + +static struct qat_capabilities_info +qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + capa_info.data = qat_sym_crypto_caps_gen2; + capa_info.size = sizeof(qat_sym_crypto_caps_gen2); + return capa_info; +} + +RTE_INIT(qat_sym_crypto_gen2_init) +{ + qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2; + qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities = + qat_sym_crypto_cap_get_gen2; + qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags = + qat_sym_crypto_feature_flags_get_gen1; + +#ifdef RTE_LIB_SECURITY + qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx = + qat_sym_create_security_gen1; +#endif +} + +RTE_INIT(qat_asym_crypto_gen2_init) +{ + qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops = + &qat_asym_crypto_ops_gen1; + qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities = + qat_asym_crypto_cap_get_gen1; + qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags = + qat_asym_crypto_feature_flags_get_gen1; +} diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c new file mode 100644 index 0000000000..1af58b90ed --- /dev/null +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c @@ -0,0 +1,164 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2017-2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_session.h" +#include "qat_sym.h" +#include "qat_asym.h" +#include "qat_crypto.h" +#include "qat_crypto_pmd_gens.h" + +static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = { + QAT_SYM_PLAIN_AUTH_CAP(SHA1, \ + CAP_SET(block_size, 64), \ + CAP_RNG(digest_size, 1, 20, 1)), \ + QAT_SYM_AEAD_CAP(AES_GCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AEAD_CAP(AES_CCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \ + CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \ + QAT_SYM_AUTH_CAP(AES_GMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AUTH_CAP(AES_CMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA224, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), 
\ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA1_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA224_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384_HMAC, \ + CAP_SET(block_size, 128), \ + CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512_HMAC, \ + CAP_SET(block_size, 128), \ + CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(MD5_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AUTH_CAP(KASUMI_F9, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(AES_CBC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_CTR, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_XTS, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_DOCSISBPI, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(SNOW3G_UEA2, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(KASUMI_F8, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(3DES_CBC, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(3DES_CTR, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(DES_CBC, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(DES_DOCSISBPI, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)), \ + 
QAT_SYM_CIPHER_CAP(ZUC_EEA3, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AUTH_CAP(ZUC_EIA3, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AEAD_CAP(CHACHA20_POLY1305, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 32, 32, 0), \ + CAP_RNG(digest_size, 16, 16, 0), \ + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)), + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +static struct qat_capabilities_info +qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + capa_info.data = qat_sym_crypto_caps_gen3; + capa_info.size = sizeof(qat_sym_crypto_caps_gen3); + return capa_info; +} + +RTE_INIT(qat_sym_crypto_gen3_init) +{ + qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1; + qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities = + qat_sym_crypto_cap_get_gen3; + qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags = + qat_sym_crypto_feature_flags_get_gen1; +#ifdef RTE_LIB_SECURITY + qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx = + qat_sym_create_security_gen1; +#endif +} + +RTE_INIT(qat_asym_crypto_gen3_init) +{ + qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL; + qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL; + qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL; +} diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c new file mode 100644 index 0000000000..e44f91e90a --- /dev/null +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c @@ -0,0 +1,125 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2017-2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_session.h" +#include "qat_sym.h" +#include "qat_asym.h" +#include "qat_crypto.h" +#include "qat_crypto_pmd_gens.h" + +/* AR: add GEN4 caps here */ +static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = { + QAT_SYM_CIPHER_CAP(AES_CBC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AUTH_CAP(SHA1_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA224_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384_HMAC, \ + CAP_SET(block_size, 128), \ + CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512_HMAC, \ + CAP_SET(block_size, 128), \ + CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(AES_CMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(AES_DOCSISBPI, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 16), 
CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AUTH_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_PLAIN_AUTH_CAP(SHA1, \ + CAP_SET(block_size, 64), \ + CAP_RNG(digest_size, 1, 20, 1)), \ + QAT_SYM_AUTH_CAP(SHA224, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(AES_CTR, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AEAD_CAP(AES_GCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AEAD_CAP(AES_CCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \ + CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \ + QAT_SYM_AUTH_CAP(AES_GMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AEAD_CAP(CHACHA20_POLY1305, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 32, 32, 0), \ + CAP_RNG(digest_size, 16, 16, 0), \ + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)), + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +static struct qat_capabilities_info +qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + capa_info.data = qat_sym_crypto_caps_gen4; + capa_info.size = sizeof(qat_sym_crypto_caps_gen4); + return capa_info; +} + +RTE_INIT(qat_sym_crypto_gen4_init) +{ + qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1; + qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities = + qat_sym_crypto_cap_get_gen4; + qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags = + qat_sym_crypto_feature_flags_get_gen1; +#ifdef RTE_LIB_SECURITY + qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx = + qat_sym_create_security_gen1; +#endif +} + +RTE_INIT(qat_asym_crypto_gen4_init) +{ + qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL; + qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL; + qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL; +} diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h new file mode 100644 index 0000000000..67a4d2cb2c --- /dev/null +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2017-2021 Intel Corporation + */ + +#ifndef _QAT_CRYPTO_PMD_GENS_H_ +#define _QAT_CRYPTO_PMD_GENS_H_ + +#include +#include "qat_crypto.h" +#include "qat_sym_session.h" + +extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1; +extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1; + +/* -----------------GENx control path 
APIs ---------------- */ +uint64_t +qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev); + +void +qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session, + uint8_t hash_flag); + +struct qat_capabilities_info +qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev); + +uint64_t +qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev); + +#ifdef RTE_LIB_SECURITY +extern struct rte_security_ops security_qat_ops_gen1; + +void * +qat_sym_create_security_gen1(void *cryptodev); +#endif + +#endif diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c new file mode 100644 index 0000000000..c6aa305845 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -0,0 +1,283 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2017-2021 Intel Corporation + */ + +#include +#ifdef RTE_LIB_SECURITY +#include +#endif + +#include "adf_transport_access_macros.h" +#include "icp_qat_fw.h" +#include "icp_qat_fw_la.h" + +#include "qat_sym_session.h" +#include "qat_sym.h" +#include "qat_sym_session.h" +#include "qat_crypto.h" +#include "qat_crypto_pmd_gens.h" + +static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = { + QAT_SYM_PLAIN_AUTH_CAP(SHA1, \ + CAP_SET(block_size, 64), \ + CAP_RNG(digest_size, 1, 20, 1)), \ + QAT_SYM_AEAD_CAP(AES_GCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AEAD_CAP(AES_CCM, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \ + CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \ + QAT_SYM_AUTH_CAP(AES_GMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \ + QAT_SYM_AUTH_CAP(AES_CMAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA224, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256, \ + CAP_SET(block_size, 64), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512, \ + CAP_SET(block_size, 128), \ + CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA1_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA224_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA256_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA384_HMAC, \ + CAP_SET(block_size, 128), \ + CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SHA512_HMAC, \ + CAP_SET(block_size, 128), \ + 
CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(MD5_HMAC, \ + CAP_SET(block_size, 64), \ + CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_AUTH_CAP(KASUMI_F9, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_AUTH_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \ + CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(AES_CBC, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_CTR, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_XTS, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(AES_DOCSISBPI, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(SNOW3G_UEA2, \ + CAP_SET(block_size, 16), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \ + QAT_SYM_CIPHER_CAP(KASUMI_F8, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(NULL, \ + CAP_SET(block_size, 1), \ + CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \ + QAT_SYM_CIPHER_CAP(3DES_CBC, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(3DES_CTR, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(DES_CBC, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \ + QAT_SYM_CIPHER_CAP(DES_DOCSISBPI, \ + CAP_SET(block_size, 8), \ + CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)), + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = { + + /* Device related operations */ + .dev_configure = qat_cryptodev_config, + .dev_start = qat_cryptodev_start, + .dev_stop = qat_cryptodev_stop, + .dev_close = qat_cryptodev_close, + .dev_infos_get = qat_cryptodev_info_get, + + .stats_get = qat_cryptodev_stats_get, + .stats_reset = qat_cryptodev_stats_reset, + .queue_pair_setup = qat_cryptodev_qp_setup, + .queue_pair_release = qat_cryptodev_qp_release, + + /* Crypto related operations */ + .sym_session_get_size = qat_sym_session_get_private_size, + .sym_session_configure = qat_sym_session_configure, + .sym_session_clear = qat_sym_session_clear, + + /* Raw data-path API related operations */ + .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, + .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, +}; + +static struct qat_capabilities_info +qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + capa_info.data = qat_sym_crypto_caps_gen1; + capa_info.size = sizeof(qat_sym_crypto_caps_gen1); + return capa_info; +} + +uint64_t 
+qat_sym_crypto_feature_flags_get_gen1( + struct qat_pci_device *qat_dev __rte_unused) +{ + uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_HW_ACCELERATED | + RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | + RTE_CRYPTODEV_FF_IN_PLACE_SGL | + RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | + RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | + RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | + RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | + RTE_CRYPTODEV_FF_SYM_RAW_DP; + + return feature_flags; +} + +#ifdef RTE_LIB_SECURITY + +#define QAT_SECURITY_SYM_CAPABILITIES \ + { /* AES DOCSIS BPI */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 32, \ + .increment = 16 \ + }, \ + .iv_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + } \ + }, } \ + }, } \ + } + +#define QAT_SECURITY_CAPABILITIES(sym) \ + [0] = { /* DOCSIS Uplink */ \ + .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ + .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ + .docsis = { \ + .direction = RTE_SECURITY_DOCSIS_UPLINK \ + }, \ + .crypto_capabilities = (sym) \ + }, \ + [1] = { /* DOCSIS Downlink */ \ + .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ + .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ + .docsis = { \ + .direction = RTE_SECURITY_DOCSIS_DOWNLINK \ + }, \ + .crypto_capabilities = (sym) \ + } + +static const struct rte_cryptodev_capabilities + qat_security_sym_capabilities[] = { + QAT_SECURITY_SYM_CAPABILITIES, + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +static const struct rte_security_capability qat_security_capabilities_gen1[] = { + QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities), + { + .action = RTE_SECURITY_ACTION_TYPE_NONE + } +}; + +static const struct rte_security_capability * +qat_security_cap_get_gen1(void *dev __rte_unused) +{ + return qat_security_capabilities_gen1; +} + +struct rte_security_ops security_qat_ops_gen1 = { + .session_create = qat_security_session_create, + .session_update = NULL, + .session_stats_get = NULL, + .session_destroy = qat_security_session_destroy, + .set_pkt_metadata = NULL, + .capabilities_get = qat_security_cap_get_gen1 +}; + +void * +qat_sym_create_security_gen1(void *cryptodev) +{ + struct rte_security_ctx *security_instance; + + security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx), + RTE_CACHE_LINE_SIZE); + if (security_instance == NULL) + return NULL; + + security_instance->device = cryptodev; + security_instance->ops = &security_qat_ops_gen1; + security_instance->sess_cnt = 0; + + return (void *)security_instance; +} + +#endif + +RTE_INIT(qat_sym_crypto_gen1_init) +{ + qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1; + qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities = + qat_sym_crypto_cap_get_gen1; + qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags = + qat_sym_crypto_feature_flags_get_gen1; +#ifdef RTE_LIB_SECURITY + qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx = + qat_sym_create_security_gen1; +#endif +} diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h index fd6b406248..74c12b4bc8 100644 --- a/drivers/crypto/qat/qat_asym_pmd.h +++ b/drivers/crypto/qat/qat_asym_pmd.h @@ -18,6 +18,7 @@ * Helper function to add an asym capability * **/ + #define QAT_ASYM_CAP(n, o, l, r, i) \ { \ .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ diff --git a/drivers/crypto/qat/qat_crypto.h 
b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
 #define _QAT_CRYPTO_H_
 
 #include
-#ifdef RTE_LIB_SECURITY
-#include
-#endif
 
 #include "qat_device.h"
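
Note (not part of the patch to apply): the RTE_INIT constructors in the new
files only populate the qat_sym_gen_dev_ops/qat_asym_gen_dev_ops tables; the
existing PMD device-create path, which this patch does not touch, selects an
entry by the device's QAT generation. The sketch below is illustrative only:
the function name and error handling are invented here, and it assumes the
gen_dev_ops table type and the qat_dev_gen field introduced earlier in this
series.

static int
qat_sym_dev_create_sketch(struct qat_pci_device *qat_pci_dev,
		struct rte_cryptodev *cryptodev)
{
	struct qat_crypto_gen_dev_ops *gen_ops =
			&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];

	/* A generation that never registered its ops cannot be created;
	 * the asym tables above leave GEN3/GEN4 NULL for exactly this case.
	 */
	if (gen_ops->cryptodev_ops == NULL || gen_ops->get_capabilities == NULL)
		return -ENOTSUP;

	cryptodev->dev_ops = gen_ops->cryptodev_ops;
	cryptodev->feature_flags = gen_ops->get_feature_flags(qat_pci_dev);

#ifdef RTE_LIB_SECURITY
	/* Only generations that registered a security hook get a context. */
	if (gen_ops->create_security_ctx != NULL)
		cryptodev->security_ctx =
				gen_ops->create_security_ctx(cryptodev);
#endif
	return 0;
}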