From patchwork Mon Jun 28 16:34:19 2021
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 94908
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal
Date: Mon, 28 Jun 2021 17:34:19 +0100
Message-Id: <20210628163434.77741-2-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
Subject: [dpdk-dev] [PATCH v2 01/16] common/qat: rework qp per service function

Different generations of Intel QuickAssist Technology devices may differ
in their approach to allocating queues. The function that counts queue
pairs per service therefore needs to be more generic.
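For context, here is a minimal standalone sketch of the reworked lookup.
The types below (pci_device, qp_hw_data, qps_per_service) are simplified
stand-ins invented for illustration, not the driver's real structs; the
point is that keying the count on the device itself, rather than on a
caller-supplied qp_hw_data table, lets each generation plug in its own
queue layout:

#include <stddef.h>
#include <stdio.h>

enum service_type { SVC_SYM, SVC_ASYM, SVC_COMP };

struct qp_hw_data { enum service_type service_type; };

struct pci_device {
	int dev_gen;                       /* device generation; real code
					    * selects qp_table based on this */
	const struct qp_hw_data *qp_table; /* per-generation qp table */
	size_t max_qps;
};

/* The device, not the caller, now decides which table to scan. */
static int qps_per_service(const struct pci_device *dev,
		enum service_type service)
{
	size_t i;
	int count = 0;

	for (i = 0; i < dev->max_qps; i++)
		if (dev->qp_table[i].service_type == service)
			count++;
	return count;
}

int main(void)
{
	static const struct qp_hw_data gen2_qps[] = {
		{ SVC_SYM }, { SVC_SYM }, { SVC_ASYM }, { SVC_COMP },
	};
	struct pci_device dev = { 2, gen2_qps, 4 };

	printf("sym qps: %d\n", qps_per_service(&dev, SVC_SYM)); /* 2 */
	return 0;
}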
Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/common/qat/qat_qp.c | 15 ++++++++++----- drivers/common/qat/qat_qp.h | 2 +- drivers/compress/qat/qat_comp_pmd.c | 9 ++++----- drivers/crypto/qat/qat_asym_pmd.c | 9 ++++----- drivers/crypto/qat/qat_sym_pmd.c | 9 ++++----- 5 files changed, 23 insertions(+), 21 deletions(-) diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 4a8078541c..aa64d2e168 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -145,14 +145,19 @@ static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock); -int qat_qps_per_service(const struct qat_qp_hw_data *qp_hw_data, +int qat_qps_per_service(struct qat_pci_device *qat_dev, enum qat_service_type service) { - int i, count; - - for (i = 0, count = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) - if (qp_hw_data[i].service_type == service) + int i = 0, count = 0, max_ops_per_srv = 0; + const struct qat_qp_hw_data* + sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen] + .qp_hw_data[service]; + + max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE; + for (; i < max_ops_per_srv; i++) + if (sym_hw_qps[i].service_type == service) count++; + return count; } diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index 74f7e7daee..d353e8552b 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -98,7 +98,7 @@ qat_qp_setup(struct qat_pci_device *qat_dev, struct qat_qp_config *qat_qp_conf); int -qat_qps_per_service(const struct qat_qp_hw_data *qp_hw_data, +qat_qps_per_service(struct qat_pci_device *qat_dev, enum qat_service_type service); int diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index 8de41f6b6e..6eb1ae3a21 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -106,6 +106,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); struct qat_comp_dev_private *qat_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_private->qat_dev; const struct qat_qp_hw_data *comp_hw_qps = qat_gen_config[qat_private->qat_dev->qat_dev_gen] .qp_hw_data[QAT_SERVICE_COMPRESSION]; @@ -117,7 +118,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, if (ret < 0) return ret; } - if (qp_id >= qat_qps_per_service(comp_hw_qps, + if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_COMPRESSION)) { QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); return -EINVAL; @@ -592,13 +593,11 @@ qat_comp_dev_info_get(struct rte_compressdev *dev, struct rte_compressdev_info *info) { struct qat_comp_dev_private *comp_dev = dev->data->dev_private; - const struct qat_qp_hw_data *comp_hw_qps = - qat_gen_config[comp_dev->qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_COMPRESSION]; + struct qat_pci_device *qat_dev = comp_dev->qat_dev; if (info != NULL) { info->max_nb_queue_pairs = - qat_qps_per_service(comp_hw_qps, + qat_qps_per_service(qat_dev, QAT_SERVICE_COMPRESSION); info->feature_flags = dev->feature_flags; info->capabilities = comp_dev->qat_dev_capabilities; diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c index a2c8aca2c1..f0c8ed1bcf 100644 --- a/drivers/crypto/qat/qat_asym_pmd.c +++ b/drivers/crypto/qat/qat_asym_pmd.c @@ -54,12 +54,10 @@ static void qat_asym_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *info) { struct qat_asym_dev_private *internals = dev->data->dev_private; - const 
struct qat_qp_hw_data *asym_hw_qps = - qat_gen_config[internals->qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_ASYMMETRIC]; + struct qat_pci_device *qat_dev = internals->qat_dev; if (info != NULL) { - info->max_nb_queue_pairs = qat_qps_per_service(asym_hw_qps, + info->max_nb_queue_pairs = qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC); info->feature_flags = dev->feature_flags; info->capabilities = internals->qat_dev_capabilities; @@ -128,6 +126,7 @@ static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); struct qat_asym_dev_private *qat_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_private->qat_dev; const struct qat_qp_hw_data *asym_hw_qps = qat_gen_config[qat_private->qat_dev->qat_dev_gen] .qp_hw_data[QAT_SERVICE_ASYMMETRIC]; @@ -139,7 +138,7 @@ static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, if (ret < 0) return ret; } - if (qp_id >= qat_qps_per_service(asym_hw_qps, QAT_SERVICE_ASYMMETRIC)) { + if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) { QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); return -EINVAL; }
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index b9601c6c3a..549345b6fa 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -90,13 +90,11 @@ static void qat_sym_dev_info_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *info) { struct qat_sym_dev_private *internals = dev->data->dev_private; - const struct qat_qp_hw_data *sym_hw_qps = - qat_gen_config[internals->qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_SYMMETRIC]; + struct qat_pci_device *qat_dev = internals->qat_dev; if (info != NULL) { info->max_nb_queue_pairs = - qat_qps_per_service(sym_hw_qps, QAT_SERVICE_SYMMETRIC); + qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC); info->feature_flags = dev->feature_flags; info->capabilities = internals->qat_dev_capabilities; info->driver_id = qat_sym_driver_id; @@ -164,6 +162,7 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); struct qat_sym_dev_private *qat_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_private->qat_dev; const struct qat_qp_hw_data *sym_hw_qps = qat_gen_config[qat_private->qat_dev->qat_dev_gen] .qp_hw_data[QAT_SERVICE_SYMMETRIC]; @@ -175,7 +174,7 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, if (ret < 0) return ret; } - if (qp_id >= qat_qps_per_service(sym_hw_qps, QAT_SERVICE_SYMMETRIC)) { + if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) { QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); return -EINVAL; }

From patchwork Mon Jun 28 16:34:20 2021
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 94909
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal
Date: Mon, 28 Jun 2021 17:34:20 +0100
Message-Id: <20210628163434.77741-3-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
Subject: [dpdk-dev] [PATCH v2 02/16] crypto/qat: add support for generation 4 devices

This commit adds support for the fourth generation (GEN4) of Intel
QuickAssist Technology (QAT) devices.

Signed-off-by: Arek Kusztal
Acked-by: Fan Zhang
---
 doc/guides/cryptodevs/qat.rst | 10 +-
 doc/guides/rel_notes/release_21_08.rst | 6 +
 .../adf_transport_access_macros_gen4.h | 52 ++++
 .../adf_transport_access_macros_gen4vf.h | 48 ++++
 drivers/common/qat/qat_common.h | 3 +-
 drivers/common/qat/qat_device.c | 22 ++
 drivers/common/qat/qat_device.h | 3 +
 drivers/common/qat/qat_qp.c | 243 +++++++++++++-----
 drivers/common/qat/qat_qp.h | 29 ++-
 drivers/compress/qat/qat_comp_pmd.c | 7 +-
 drivers/crypto/qat/qat_asym_pmd.c | 7 +-
 drivers/crypto/qat/qat_sym_pmd.c | 33 ++-
 drivers/crypto/qat/qat_sym_session.c | 1 +
 13 files changed, 386 insertions(+), 78 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/adf_transport_access_macros_gen4.h
 create mode 100644 drivers/common/qat/qat_adf/adf_transport_access_macros_gen4vf.h

diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst index 96f5ab6afe..666a01df33 100644 --- a/doc/guides/cryptodevs/qat.rst +++ b/doc/guides/cryptodevs/qat.rst @@ -25,6 +25,7 @@ poll mode crypto driver support for the following hardware accelerator devices: * ``Intel QuickAssist Technology 200xx`` * ``Intel QuickAssist Technology D15xx`` * ``Intel QuickAssist Technology C4xxx`` +* ``Intel QuickAssist Technology 4xxx`` Features @@ -94,15 +95,16 @@ All the usual chains are supported and also some mixed chains: +==================+===========+=============+==========+==========+ | NULL CIPHER | Y | 2&3 | 2&3 | Y | +------------------+-----------+-------------+----------+----------+ - | SNOW3G UEA2 | 2&3 | Y | 2&3 | 2&3 | + | SNOW3G UEA2 | 2&3 | 1&2&3 | 2&3 | 2&3 | +------------------+-----------+-------------+----------+----------+ | ZUC EEA3 | 2&3 | 2&3 | 2&3 | 2&3 | +------------------+-----------+-------------+----------+----------+ - | AES CTR | Y | 2&3 | 2&3 | Y | + | AES CTR | 1&2&3 | 2&3 | 2&3 | Y | +------------------+-----------+-------------+----------+----------+ * The combinations marked as "Y" are supported on all QAT hardware versions. -* The combinations marked as "2&3" are supported on GEN2/GEN3 QAT hardware only. +* The combinations marked as "2&3" are supported on GEN2 and GEN3 QAT hardware only.
+* The combinations marked as "1&2&3" are supported on GEN1, GEN2 and GEN3 QAT hardware only. Limitations @@ -373,6 +375,8 @@ to see the full table) +-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+ | Yes | No | No | 3 | C4xxx | p | qat_c4xxx | c4xxx | 18a0 | 1 | 18a1 | 128 | +-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+ + | Yes | No | No | 4 | 4xxx | N/A | qat_4xxx | 4xxx | 4940 | 4 | 4941 | 16 | + +-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+ * Note: Symmetric mixed crypto algorithms feature on Gen 2 works only with 01.org driver version 4.9.0+ diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index a6ecfdf3ce..69ef43acf6 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -55,6 +55,12 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Updated Intel QuickAssist PMD.** + + Added fourth generation of QuickAssist Technology devices support. + Only symmetric crypto has been currently enabled, compression and asymmetric + crypto PMD will fail to create. + Removed Items ------------- diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros_gen4.h b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen4.h new file mode 100644 index 0000000000..3ab873db5e --- /dev/null +++ b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen4.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef ADF_TRANSPORT_ACCESS_MACROS_GEN4_H +#define ADF_TRANSPORT_ACCESS_MACROS_GEN4_H + +#include "adf_transport_access_macros.h" + +#define ADF_RINGS_PER_INT_SRCSEL_GEN4 2 +#define ADF_BANK_INT_SRC_SEL_MASK_GEN4 0x44UL +#define ADF_BANK_INT_FLAG_CLEAR_MASK_GEN4 0x3 +#define ADF_RING_BUNDLE_SIZE_GEN4 0x2000 +#define ADF_RING_CSR_ADDR_OFFSET_GEN4 0x100000 +#define ADF_RING_CSR_RING_CONFIG_GEN4 0x1000 +#define ADF_RING_CSR_RING_LBASE_GEN4 0x1040 +#define ADF_RING_CSR_RING_UBASE_GEN4 0x1080 + +#define BUILD_RING_BASE_ADDR_GEN4(addr, size) \ + ((((addr) >> 6) & (0xFFFFFFFFFFFFFFFFULL << (size))) << 6) + +#define WRITE_CSR_RING_BASE_GEN4(csr_base_addr, bank, ring, value) \ +do { \ + uint32_t l_base = 0, u_base = 0; \ + l_base = (uint32_t)(value & 0xFFFFFFFF); \ + u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * bank) + \ + ADF_RING_CSR_RING_LBASE_GEN4 + (ring << 2), \ + l_base); \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * bank) + \ + ADF_RING_CSR_RING_UBASE_GEN4 + (ring << 2), \ + u_base); \ +} while (0) + +#define WRITE_CSR_RING_CONFIG_GEN4(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * bank) + \ + ADF_RING_CSR_RING_CONFIG_GEN4 + (ring << 2), value) + +#define WRITE_CSR_RING_TAIL_GEN4(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((u8 *)(csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN4, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * (bank)) + \ + ADF_RING_CSR_RING_TAIL + ((ring) << 2), value) + +#define WRITE_CSR_RING_HEAD_GEN4(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((u8 *)(csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN4, \ + 
(ADF_RING_BUNDLE_SIZE_GEN4 * (bank)) + \ + ADF_RING_CSR_RING_HEAD + ((ring) << 2), value) + +#endif diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros_gen4vf.h b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen4vf.h new file mode 100644 index 0000000000..37e113c443 --- /dev/null +++ b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen4vf.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef ADF_TRANSPORT_ACCESS_MACROS_GEN4VF_H +#define ADF_TRANSPORT_ACCESS_MACROS_GEN4VF_H + +#include "adf_transport_access_macros.h" +#include "adf_transport_access_macros_gen4.h" + +#define ADF_RING_CSR_ADDR_OFFSET_GEN4VF 0x0 + +#define WRITE_CSR_RING_BASE_GEN4VF(csr_base_addr, bank, ring, value) \ +do { \ + uint32_t l_base = 0, u_base = 0; \ + l_base = (uint32_t)(value & 0xFFFFFFFF); \ + u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * bank) + \ + ADF_RING_CSR_RING_LBASE_GEN4 + (ring << 2), \ + l_base); \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * bank) + \ + ADF_RING_CSR_RING_UBASE_GEN4 + (ring << 2), \ + u_base); \ +} while (0) + +#define WRITE_CSR_RING_CONFIG_GEN4VF(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * bank) + \ + ADF_RING_CSR_RING_CONFIG_GEN4 + (ring << 2), value) + +#define WRITE_CSR_RING_TAIL_GEN4VF(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * (bank)) + \ + ADF_RING_CSR_RING_TAIL + ((ring) << 2), (value)) + +#define WRITE_CSR_RING_HEAD_GEN4VF(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * (bank)) + \ + ADF_RING_CSR_RING_HEAD + ((ring) << 2), (value)) + +#define WRITE_CSR_RING_SRV_ARB_EN_GEN4VF(csr_base_addr, bank, value) \ + ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, \ + (ADF_RING_BUNDLE_SIZE_GEN4 * (bank)) + \ + ADF_RING_CSR_RING_SRV_ARB_EN, (value)) + +#endif diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h index cf840fea9b..845c8d99ab 100644 --- a/drivers/common/qat/qat_common.h +++ b/drivers/common/qat/qat_common.h @@ -18,7 +18,8 @@ enum qat_device_gen { QAT_GEN1 = 1, QAT_GEN2, - QAT_GEN3 + QAT_GEN3, + QAT_GEN4 }; enum qat_service_type { diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 9fa142b5e5..932d7110f7 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -30,6 +30,11 @@ struct qat_gen_hw_data qat_gen_config[] = { .qp_hw_data = qat_gen3_qps, .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3 }, + [QAT_GEN4] = { + .dev_gen = QAT_GEN4, + .qp_hw_data = NULL, + .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3 + }, }; /* per-process array of device data */ @@ -59,6 +64,9 @@ static const struct rte_pci_id pci_id_qat_map[] = { { RTE_PCI_DEVICE(0x8086, 0x18a1), }, + { + RTE_PCI_DEVICE(0x8086, 0x4941), + }, {.device_id = 0}, }; @@ -232,6 +240,9 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, case 0x18a1: qat_dev->qat_dev_gen = QAT_GEN3; break; + case 0x4941: + qat_dev->qat_dev_gen = QAT_GEN4; + break; default: QAT_LOG(ERR, "Invalid dev_id, can't determine generation"); rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz); @@ -241,6 
+252,17 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, if (devargs && devargs->drv_str) qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param); + if (qat_dev->qat_dev_gen >= QAT_GEN4) { + int ret = qat_read_qp_config(qat_dev, qat_dev->qat_dev_gen); + + if (ret) { + QAT_LOG(ERR, + "Cannot acquire ring configuration for QAT_%d", + qat_dev_id); + return NULL; + } + } + rte_spinlock_init(&qat_dev->arb_csr_lock); qat_nb_pci_devices++; diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index 9c6a3ca4e6..f4fecc9517 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -105,6 +105,9 @@ struct qat_pci_device { /* Data relating to compression service */ struct qat_comp_dev_private *comp_dev; /**< link back to compressdev private data */ + struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM] + [QAT_GEN4_QPS_PER_BUNDLE_NUM]; + /**< Data of ring configuration on gen4 */ }; struct qat_gen_hw_data { diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index aa64d2e168..8be59779f9 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -19,6 +19,7 @@ #include "qat_asym.h" #include "qat_comp.h" #include "adf_transport_access_macros.h" +#include "adf_transport_access_macros_gen4vf.h" #define QAT_CQ_MAX_DEQ_RETRIES 10 @@ -138,25 +139,33 @@ static int qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, struct qat_qp_config *, uint8_t dir); static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, uint32_t *queue_size_for_csr); -static void adf_configure_queues(struct qat_qp *queue); -static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr, - rte_spinlock_t *lock); -static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr, - rte_spinlock_t *lock); - +static void adf_configure_queues(struct qat_qp *queue, + enum qat_device_gen qat_dev_gen); +static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, + struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock); +static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, + struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock); int qat_qps_per_service(struct qat_pci_device *qat_dev, enum qat_service_type service) { int i = 0, count = 0, max_ops_per_srv = 0; - const struct qat_qp_hw_data* - sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen] - .qp_hw_data[service]; - max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE; - for (; i < max_ops_per_srv; i++) - if (sym_hw_qps[i].service_type == service) - count++; + if (qat_dev->qat_dev_gen == QAT_GEN4) { + max_ops_per_srv = QAT_GEN4_BUNDLE_NUM; + for (i = 0, count = 0; i < max_ops_per_srv; i++) + if (qat_dev->qp_gen4_data[i][0].service_type == service) + count++; + } else { + const struct qat_qp_hw_data *sym_hw_qps = + qat_gen_config[qat_dev->qat_dev_gen] + .qp_hw_data[service]; + + max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE; + for (i = 0, count = 0; i < max_ops_per_srv; i++) + if (sym_hw_qps[i].service_type == service) + count++; + } return count; } @@ -195,12 +204,12 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, struct qat_qp **qp_addr, uint16_t queue_pair_id, struct qat_qp_config *qat_qp_conf) - { struct qat_qp *qp; struct rte_pci_device *pci_dev = qat_pci_devs[qat_dev->qat_dev_id].pci_dev; char op_cookie_pool_name[RTE_RING_NAMESIZE]; + enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; uint32_t i; QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d", @@ -264,8 +273,8 @@ int qat_qp_setup(struct 
qat_pci_device *qat_dev, goto create_err; } - adf_configure_queues(qp); - adf_queue_arb_enable(&qp->tx_q, qp->mmap_bar_addr, + adf_configure_queues(qp, qat_dev_gen); + adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr, &qat_dev->arb_csr_lock); snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE, @@ -314,7 +323,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, return -EFAULT; } -int qat_qp_release(struct qat_qp **qp_addr) + +int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr) { struct qat_qp *qp = *qp_addr; uint32_t i; @@ -335,8 +345,8 @@ int qat_qp_release(struct qat_qp **qp_addr) return -EAGAIN; } - adf_queue_arb_disable(&(qp->tx_q), qp->mmap_bar_addr, - &qp->qat_dev->arb_csr_lock); + adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr, + &qp->qat_dev->arb_csr_lock); for (i = 0; i < qp->nb_descriptors; i++) rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]); @@ -386,6 +396,7 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, const struct rte_memzone *qp_mz; struct rte_pci_device *pci_dev = qat_pci_devs[qat_dev->qat_dev_id].pci_dev; + enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; int ret = 0; uint16_t desc_size = (dir == ADF_RING_DIR_TX ? qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size); @@ -445,14 +456,19 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, * Write an unused pattern to the queue memory. */ memset(queue->base_addr, 0x7F, queue_size_bytes); - - queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr, - queue->queue_size); - io_addr = pci_dev->mem_resource[0].addr; - WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number, + if (qat_dev_gen == QAT_GEN4) { + queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr, + queue->queue_size); + WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number, + queue->hw_queue_number, queue_base); + } else { + queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr, + queue->queue_size); + WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number, queue->hw_queue_number, queue_base); + } QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u," " nb msgs %u, msg_size %u, modulo mask %u", @@ -468,6 +484,61 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, return ret; } +int +qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, + enum qat_service_type service_type) +{ + if (qat_dev->qat_dev_gen == QAT_GEN4) { + int i = 0, valid_qps = 0; + + for (; i < QAT_GEN4_BUNDLE_NUM; i++) { + if (qat_dev->qp_gen4_data[i][0].service_type == + service_type) { + if (valid_qps == qp_id) + return i; + ++valid_qps; + } + } + } + return -1; +} + +int +qat_read_qp_config(struct qat_pci_device *qat_dev, + enum qat_device_gen qat_dev_gen) +{ + if (qat_dev_gen == QAT_GEN4) { + /* Read default configuration, + * until some probe of it can be done + */ + int i = 0; + + for (; i < QAT_GEN4_BUNDLE_NUM; i++) { + struct qat_qp_hw_data *hw_data = + &qat_dev->qp_gen4_data[i][0]; + enum qat_service_type service_type = + (QAT_GEN4_QP_DEFCON >> (8 * i)) & 0xFF; + + memset(hw_data, 0, sizeof(*hw_data)); + hw_data->service_type = service_type; + if (service_type == QAT_SERVICE_ASYMMETRIC) { + hw_data->tx_msg_size = 64; + hw_data->rx_msg_size = 32; + } else if (service_type == QAT_SERVICE_SYMMETRIC || + service_type == + QAT_SERVICE_COMPRESSION) { + hw_data->tx_msg_size = 128; + hw_data->rx_msg_size = 32; + } + hw_data->tx_ring_num = 0; + hw_data->rx_ring_num = 1; + hw_data->hw_bundle_num = i; + } + } + /* With 
default config will always return success */ + return 0; +} + static int qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes) { @@ -491,54 +562,81 @@ static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, return -EINVAL; } -static void adf_queue_arb_enable(struct qat_queue *txq, void *base_addr, - rte_spinlock_t *lock) +static void +adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) { - uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_ARB_REG_SLOT * - txq->hw_bundle_number); - uint32_t value; + uint32_t arb_csr_offset = 0, value; rte_spinlock_lock(lock); - value = ADF_CSR_RD(base_addr, arb_csr_offset); + if (qat_dev_gen == QAT_GEN4) { + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_RING_BUNDLE_SIZE_GEN4 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, + arb_csr_offset); + } else { + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_ARB_REG_SLOT * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr, + arb_csr_offset); + } value |= (0x01 << txq->hw_queue_number); ADF_CSR_WR(base_addr, arb_csr_offset, value); rte_spinlock_unlock(lock); } -static void adf_queue_arb_disable(struct qat_queue *txq, void *base_addr, - rte_spinlock_t *lock) +static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, + struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock) { - uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_ARB_REG_SLOT * - txq->hw_bundle_number); - uint32_t value; + uint32_t arb_csr_offset = 0, value; rte_spinlock_lock(lock); - value = ADF_CSR_RD(base_addr, arb_csr_offset); + if (qat_dev_gen == QAT_GEN4) { + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_RING_BUNDLE_SIZE_GEN4 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, + arb_csr_offset); + } else { + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_ARB_REG_SLOT * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr, + arb_csr_offset); + } value &= ~(0x01 << txq->hw_queue_number); ADF_CSR_WR(base_addr, arb_csr_offset, value); rte_spinlock_unlock(lock); } -static void adf_configure_queues(struct qat_qp *qp) +static void adf_configure_queues(struct qat_qp *qp, + enum qat_device_gen qat_dev_gen) { - uint32_t queue_config; - struct qat_queue *queue = &qp->tx_q; - - queue_config = BUILD_RING_CONFIG(queue->queue_size); - - WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number, - queue->hw_queue_number, queue_config); - - queue = &qp->rx_q; - queue_config = - BUILD_RESP_RING_CONFIG(queue->queue_size, - ADF_RING_NEAR_WATERMARK_512, - ADF_RING_NEAR_WATERMARK_0); - - WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, queue->hw_bundle_number, - queue->hw_queue_number, queue_config); + uint32_t q_tx_config, q_resp_config; + struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; + + q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); + q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, + ADF_RING_NEAR_WATERMARK_512, + ADF_RING_NEAR_WATERMARK_0); + + if (qat_dev_gen == QAT_GEN4) { + WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, + q_tx->hw_bundle_number, q_tx->hw_queue_number, + q_tx_config); + WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, + q_rx->hw_bundle_number, q_rx->hw_queue_number, + q_resp_config); + } else { + WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, + q_tx->hw_bundle_number, q_tx->hw_queue_number, + q_tx_config); + WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, + 
q_rx->hw_bundle_number, q_rx->hw_queue_number, + q_resp_config); + } } static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask) @@ -547,14 +645,21 @@ static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask) } static inline void -txq_write_tail(struct qat_qp *qp, struct qat_queue *q) { - WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number, +txq_write_tail(enum qat_device_gen qat_dev_gen, + struct qat_qp *qp, struct qat_queue *q) { + + if (qat_dev_gen == QAT_GEN4) { + WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr, + q->hw_bundle_number, q->hw_queue_number, q->tail); + } else { + WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number, q->hw_queue_number, q->tail); - q->csr_tail = q->tail; + } } static inline -void rxq_free_desc(struct qat_qp *qp, struct qat_queue *q) +void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp, + struct qat_queue *q) { uint32_t old_head, new_head; uint32_t max_head; @@ -576,8 +681,14 @@ void rxq_free_desc(struct qat_qp *qp, struct qat_queue *q) q->csr_head = new_head; /* write current head to CSR */ - WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number, - q->hw_queue_number, new_head); + if (qat_dev_gen == QAT_GEN4) { + WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr, + q->hw_bundle_number, q->hw_queue_number, new_head); + } else { + WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number, + q->hw_queue_number, new_head); + } + } uint16_t @@ -670,7 +781,7 @@ qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops) queue->tail = tail; tmp_qp->enqueued += nb_ops_sent; tmp_qp->stats.enqueued_count += nb_ops_sent; - txq_write_tail(tmp_qp, queue); + txq_write_tail(tmp_qp->qat_dev_gen, tmp_qp, queue); return nb_ops_sent; } @@ -843,7 +954,7 @@ qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops) queue->tail = tail; tmp_qp->enqueued += total_descriptors_built; tmp_qp->stats.enqueued_count += nb_ops_sent; - txq_write_tail(tmp_qp, queue); + txq_write_tail(tmp_qp->qat_dev_gen, tmp_qp, queue); return nb_ops_sent; } @@ -909,7 +1020,7 @@ qat_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops) rx_queue->head = head; if (rx_queue->nb_processed_responses > QAT_CSR_HEAD_WRITE_THRESH) - rxq_free_desc(tmp_qp, rx_queue); + rxq_free_desc(tmp_qp->qat_dev_gen, tmp_qp, rx_queue); QAT_DP_LOG(DEBUG, "Dequeue burst return: %u, QAT responses: %u", op_resp_counter, fw_resp_counter); @@ -951,7 +1062,7 @@ qat_cq_dequeue_response(struct qat_qp *qp, void *out_data) queue->head = adf_modulo(queue->head + queue->msg_size, queue->modulo_mask); - rxq_free_desc(qp, queue); + rxq_free_desc(qp->qat_dev_gen, qp, queue); } return result; @@ -986,7 +1097,7 @@ qat_cq_get_fw_version(struct qat_qp *qp) memcpy(base_addr + queue->tail, &null_msg, sizeof(null_msg)); queue->tail = adf_modulo(queue->tail + queue->msg_size, queue->modulo_mask); - txq_write_tail(qp, queue); + txq_write_tail(qp->qat_dev_gen, qp, queue); /* receive a response */ if (qat_cq_dequeue_response(qp, &response)) { diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index d353e8552b..3d9a757349 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -14,6 +14,16 @@ struct qat_pci_device; #define QAT_QP_MIN_INFL_THRESHOLD 256 +/* Default qp configuration for GEN4 devices */ +#define QAT_GEN4_QP_DEFCON (QAT_SERVICE_SYMMETRIC | \ + QAT_SERVICE_SYMMETRIC << 8 | \ + QAT_SERVICE_SYMMETRIC << 16 | \ + QAT_SERVICE_SYMMETRIC << 24) + +/* QAT GEN 4 specific macros */ +#define QAT_GEN4_BUNDLE_NUM 4 +#define QAT_GEN4_QPS_PER_BUNDLE_NUM 1 
+ /** * Structure with data needed for creation of queue pair. */ @@ -26,6 +36,15 @@ struct qat_qp_hw_data { uint16_t rx_msg_size; }; +/** + * Structure with data needed for creation of queue pair on gen4. + */ +struct qat_qp_gen4_data { + struct qat_qp_hw_data qat_qp_hw_data; + uint8_t reserved; + uint8_t valid; +}; + /** * Structure with data needed for creation of queue pair. */ @@ -90,7 +109,7 @@ uint16_t qat_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops); int -qat_qp_release(struct qat_qp **qp_addr); +qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr); int qat_qp_setup(struct qat_pci_device *qat_dev, @@ -110,4 +129,12 @@ qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, void *op_cookie __rte_unused, uint64_t *dequeue_err_count __rte_unused); +int +qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, + enum qat_service_type service_type); + +int +qat_read_qp_config(struct qat_pci_device *qat_dev, + enum qat_device_gen qat_dev_gen); + #endif /* _QAT_QP_H_ */ diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index 6eb1ae3a21..cfdcb6b3d1 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -74,6 +74,7 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id) struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[queue_pair_id]); struct qat_qp *qp = (struct qat_qp *)*qp_addr; + enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen; uint32_t i; QAT_LOG(DEBUG, "Release comp qp %u on device %d", @@ -90,7 +91,7 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id) rte_free(cookie->qat_sgl_dst_d); } - return qat_qp_release((struct qat_qp **) + return qat_qp_release(qat_dev_gen, (struct qat_qp **) &(dev->data->queue_pairs[queue_pair_id])); } @@ -710,6 +711,10 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, const struct rte_compressdev_capabilities *capabilities; uint64_t capa_size; + if (qat_pci_dev->qat_dev_gen == QAT_GEN4) { + QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx"); + return 0; + } snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s", qat_pci_dev->name, "comp"); QAT_LOG(DEBUG, "Creating QAT COMP device %s", name); diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c index f0c8ed1bcf..56ccca36d1 100644 --- a/drivers/crypto/qat/qat_asym_pmd.c +++ b/drivers/crypto/qat/qat_asym_pmd.c @@ -103,6 +103,7 @@ static int qat_asym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) { struct qat_asym_dev_private *qat_private = dev->data->dev_private; + enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen; QAT_LOG(DEBUG, "Release asym qp %u on device %d", queue_pair_id, dev->data->dev_id); @@ -110,7 +111,7 @@ static int qat_asym_qp_release(struct rte_cryptodev *dev, qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id] = NULL; - return qat_qp_release((struct qat_qp **) + return qat_qp_release(qat_dev_gen, (struct qat_qp **) &(dev->data->queue_pairs[queue_pair_id])); } @@ -250,6 +251,10 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, struct rte_cryptodev *cryptodev; struct qat_asym_dev_private *internals; + if (qat_pci_dev->qat_dev_gen == QAT_GEN4) { + QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx"); + return 0; + } snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", qat_pci_dev->name, "asym"); QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name); diff --git 
a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index 549345b6fa..e15722ad66 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -139,6 +139,7 @@ static void qat_sym_stats_reset(struct rte_cryptodev *dev) static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) { struct qat_sym_dev_private *qat_private = dev->data->dev_private; + enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen; QAT_LOG(DEBUG, "Release sym qp %u on device %d", queue_pair_id, dev->data->dev_id); @@ -146,7 +147,7 @@ static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id] = NULL; - return qat_qp_release((struct qat_qp **) + return qat_qp_release(qat_dev_gen, (struct qat_qp **) &(dev->data->queue_pairs[queue_pair_id])); } @@ -158,15 +159,33 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, int ret = 0; uint32_t i; struct qat_qp_config qat_qp_conf; + const struct qat_qp_hw_data *sym_hw_qps = NULL; + const struct qat_qp_hw_data *qp_hw_data = NULL; struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); struct qat_sym_dev_private *qat_private = dev->data->dev_private; struct qat_pci_device *qat_dev = qat_private->qat_dev; - const struct qat_qp_hw_data *sym_hw_qps = - qat_gen_config[qat_private->qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_SYMMETRIC]; - const struct qat_qp_hw_data *qp_hw_data = sym_hw_qps + qp_id; + + if (qat_dev->qat_dev_gen == QAT_GEN4) { + int ring_pair = + qat_select_valid_queue(qat_dev, qp_id, + QAT_SERVICE_SYMMETRIC); + sym_hw_qps = + &qat_dev->qp_gen4_data[0][0]; + qp_hw_data = + &qat_dev->qp_gen4_data[ring_pair][0]; + if (ring_pair < 0) { + QAT_LOG(ERR, + "qp_id %u invalid for this device, no enough services allocated for GEN4 device", + qp_id); + return -EINVAL; + } + } else { + sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen] + .qp_hw_data[QAT_SERVICE_SYMMETRIC]; + qp_hw_data = sym_hw_qps + qp_id; + } /* If qp is already in use free ring memory and qp metadata. 
*/ if (*qp_addr != NULL) { @@ -430,6 +449,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, capabilities = qat_gen3_sym_capabilities; capa_size = sizeof(qat_gen3_sym_capabilities); break; + case QAT_GEN4: + capabilities = NULL; + capa_size = 0; + break; default: QAT_LOG(DEBUG, "QAT gen %d capabilities unknown",
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 231b1640da..506ffddd20 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -550,6 +550,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, return -EINVAL; } + memset(session, 0, sizeof(*session)); /* Set context descriptor physical address */ session->cd_paddr = session_paddr + offsetof(struct qat_sym_session, cd);

From patchwork Mon Jun 28 16:34:21 2021
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 94911
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal
Date: Mon, 28 Jun 2021 17:34:21 +0100
Message-Id: <20210628163434.77741-4-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
Subject: [dpdk-dev] [PATCH v2 03/16] crypto/qat: enable gen4 legacy algorithms

This commit enables algorithms labeled as 'legacy' on QAT generation 4 devices.
Following algorithms were enabled: * AES-CBC * AES-CMAC * AES-XCBC MAC * NULL (auth, cipher) * SHA1-HMAC * SHA2-HMAC (224, 256, 384, 512) Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym_capabilities.h | 337 ++++++++++++++++++++++ drivers/crypto/qat/qat_sym_pmd.c | 9 +- 2 files changed, 344 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h index f7cab2f471..21c817bccc 100644 --- a/drivers/crypto/qat/qat_sym_capabilities.h +++ b/drivers/crypto/qat/qat_sym_capabilities.h @@ -731,6 +731,343 @@ }, } \ } +#define QAT_BASE_GEN4_SYM_CAPABILITIES \ + { /* AES CBC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_AES_CBC, \ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 32, \ + .increment = 8 \ + }, \ + .iv_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + } \ + }, } \ + }, } \ + }, \ + { /* SHA1 HMAC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, \ + .block_size = 64, \ + .key_size = { \ + .min = 1, \ + .max = 64, \ + .increment = 1 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 20, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA224 HMAC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA224_HMAC, \ + .block_size = 64, \ + .key_size = { \ + .min = 1, \ + .max = 64, \ + .increment = 1 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 28, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA256 HMAC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, \ + .block_size = 64, \ + .key_size = { \ + .min = 1, \ + .max = 64, \ + .increment = 1 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 32, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA384 HMAC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA384_HMAC, \ + .block_size = 128, \ + .key_size = { \ + .min = 1, \ + .max = 128, \ + .increment = 1 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 48, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA512 HMAC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA512_HMAC, \ + .block_size = 128, \ + .key_size = { \ + .min = 1, \ + .max = 128, \ + .increment = 1 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 64, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* AES XCBC MAC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC, \ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 12, \ + .max = 12, \ + .increment = 0 \ + }, \ + .aad_size = { 0 }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* AES CMAC */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + 
{.auth = { \ + .algo = RTE_CRYPTO_AUTH_AES_CMAC, \ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 4, \ + .max = 16, \ + .increment = 4 \ + } \ + }, } \ + }, } \ + }, \ + { /* AES DOCSIS BPI */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 32, \ + .increment = 16 \ + }, \ + .iv_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + } \ + }, } \ + }, } \ + }, \ + { /* NULL (AUTH) */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_NULL, \ + .block_size = 1, \ + .key_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .iv_size = { 0 } \ + }, }, \ + }, }, \ + }, \ + { /* NULL (CIPHER) */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_NULL, \ + .block_size = 1, \ + .key_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .iv_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + } \ + }, }, \ + }, } \ + }, \ + { /* SHA1 */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA1, \ + .block_size = 64, \ + .key_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 20, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA224 */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA224, \ + .block_size = 64, \ + .key_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 28, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA256 */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA256, \ + .block_size = 64, \ + .key_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 32, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA384 */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA384, \ + .block_size = 128, \ + .key_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 48, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + }, \ + { /* SHA512 */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_SHA512, \ + .block_size = 128, \ + .key_size = { \ + .min = 0, \ + .max = 0, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 1, \ + .max = 64, \ + .increment = 1 \ + }, \ + .iv_size = { 0 } \ + }, } \ + }, } \ + } \ + + + #ifdef RTE_LIB_SECURITY #define QAT_SECURITY_SYM_CAPABILITIES \ { /* AES DOCSIS BPI */ \ diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index e15722ad66..0097ee210f 100644 --- 
a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -39,6 +39,11 @@ static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = { RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() }; +static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = { + QAT_BASE_GEN4_SYM_CAPABILITIES, + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + #ifdef RTE_LIB_SECURITY static const struct rte_cryptodev_capabilities qat_security_sym_capabilities[] = { @@ -450,8 +455,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, capa_size = sizeof(qat_gen3_sym_capabilities); break; case QAT_GEN4: - capabilities = NULL; - capa_size = 0; + capabilities = qat_gen4_sym_capabilities; + capa_size = sizeof(qat_gen4_sym_capabilities); break; default: QAT_LOG(DEBUG,

From patchwork Mon Jun 28 16:34:22 2021
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 94910
From: Arek Kusztal
To: dev@dpdk.org
Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal
Date: Mon, 28 Jun 2021 17:34:22 +0100
Message-Id: <20210628163434.77741-5-arkadiuszx.kusztal@intel.com>
In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com>
Subject: [dpdk-dev] [PATCH v2 04/16] crypto/qat: add gen4 ucs slice type, add ctr mode

This commit adds the unified cipher slice (UCS) to the Intel QuickAssist
Technology PMD and enables the AES-CTR algorithm.
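To make the new flag concrete, here is a small self-contained illustration
of the slice-type field arithmetic this patch introduces. The
set_slice_type() helper below is a local stand-in written with the usual
mask-and-shift semantics, not the driver's QAT_FIELD_SET macro itself: a
2-bit field at bit 14 of the request header's service-specific flags
selects the legacy (0), UCS (1) or wireless (2) cipher slice.

#include <stdint.h>
#include <stdio.h>

#define QAT_LA_SLICE_TYPE_BITPOS 14
#define QAT_LA_SLICE_TYPE_MASK   0x3
#define ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE 1

/* Clear the 2-bit slice-type field, then set the requested value. */
static uint16_t set_slice_type(uint16_t flags, uint16_t val)
{
	flags &= (uint16_t)~(QAT_LA_SLICE_TYPE_MASK << QAT_LA_SLICE_TYPE_BITPOS);
	flags |= (uint16_t)((val & QAT_LA_SLICE_TYPE_MASK)
			<< QAT_LA_SLICE_TYPE_BITPOS);
	return flags;
}

int main(void)
{
	uint16_t flags = 0;

	flags = set_slice_type(flags, ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE);
	printf("serv_specif_flags = 0x%04x\n", (unsigned int)flags); /* 0x4000 */
	return 0;
}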
Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/common/qat/qat_adf/icp_qat_fw_la.h | 28 ++++++++++++++++++++++ drivers/common/qat/qat_adf/icp_qat_hw.h | 10 ++++++++ drivers/crypto/qat/qat_sym_capabilities.h | 20 ++++++++++++++++ drivers/crypto/qat/qat_sym_session.c | 27 ++++++++++++++++++++- drivers/crypto/qat/qat_sym_session.h | 1 + 5 files changed, 85 insertions(+), 1 deletion(-) diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h index 20eb145def..c4901eb869 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h @@ -371,4 +371,32 @@ struct icp_qat_fw_la_resp { & ICP_QAT_FW_COMN_NEXT_ID_MASK) | \ ((val) & ICP_QAT_FW_COMN_CURR_ID_MASK)) } +#define ICP_QAT_FW_LA_USE_WIRELESS_SLICE_TYPE 2 +#define ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE 1 +#define ICP_QAT_FW_LA_USE_LEGACY_SLICE_TYPE 0 +#define QAT_LA_SLICE_TYPE_BITPOS 14 +#define QAT_LA_SLICE_TYPE_MASK 0x3 +#define ICP_QAT_FW_LA_SLICE_TYPE_SET(flags, val) \ + QAT_FIELD_SET(flags, val, QAT_LA_SLICE_TYPE_BITPOS, \ + QAT_LA_SLICE_TYPE_MASK) + +struct icp_qat_fw_la_cipher_20_req_params { + uint32_t cipher_offset; + uint32_t cipher_length; + union { + uint32_t cipher_IV_array[ICP_QAT_FW_NUM_LONGWORDS_4]; + struct { + uint64_t cipher_IV_ptr; + uint64_t resrvd1; + } s; + + } u; + uint32_t spc_aad_offset; + uint32_t spc_aad_sz; + uint64_t spc_aad_addr; + uint64_t spc_auth_res_addr; + uint8_t reserved[3]; + uint8_t spc_auth_res_sz; +}; + #endif diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h index fdc0f191a2..b1e6a1fa15 100644 --- a/drivers/common/qat/qat_adf/icp_qat_hw.h +++ b/drivers/common/qat/qat_adf/icp_qat_hw.h @@ -342,6 +342,16 @@ struct icp_qat_hw_cipher_algo_blk { uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; } __rte_cache_aligned; +struct icp_qat_hw_ucs_cipher_config { + uint32_t val; + uint32_t reserved[3]; +}; + +struct icp_qat_hw_cipher_algo_blk20 { + struct icp_qat_hw_ucs_cipher_config cipher_config; + uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; +} __rte_cache_aligned; + /* ========================================================================= */ /* COMPRESSION SLICE */ /* ========================================================================= */ diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h index 21c817bccc..aca528b991 100644 --- a/drivers/crypto/qat/qat_sym_capabilities.h +++ b/drivers/crypto/qat/qat_sym_capabilities.h @@ -1064,6 +1064,26 @@ .iv_size = { 0 } \ }, } \ }, } \ + }, \ + { /* AES CTR */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_AES_CTR, \ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 32, \ + .increment = 8 \ + }, \ + .iv_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + } \ + }, } \ + }, } \ } \ diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 506ffddd20..2c44b1f1aa 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -246,6 +246,8 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev, { struct qat_sym_dev_private *internals = dev->data->dev_private; struct rte_crypto_cipher_xform *cipher_xform = NULL; + enum qat_device_gen qat_dev_gen = + internals->qat_dev->qat_dev_gen; int ret; /* Get cipher xform from crypto xform chain */ @@ -272,6 +274,13 @@ qat_sym_session_configure_cipher(struct 
rte_cryptodev *dev, goto error_out; } session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE; + if (qat_dev_gen == QAT_GEN4) { + /* TODO: Filter WCP */ + ICP_QAT_FW_LA_SLICE_TYPE_SET( + session->fw_req.comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE); + session->is_ucs = 1; + } break; case RTE_CRYPTO_CIPHER_SNOW3G_UEA2: if (qat_sym_validate_snow3g_key(cipher_xform->key.length, @@ -556,6 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, offsetof(struct qat_sym_session, cd); session->min_qat_dev_gen = QAT_GEN1; + session->is_ucs = 0; /* Get requested QAT command id */ qat_cmd_id = qat_get_cmd_id(xform); @@ -1518,6 +1528,7 @@ int qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cdesc, uint32_t cipherkeylen) { struct icp_qat_hw_cipher_algo_blk *cipher; + struct icp_qat_hw_cipher_algo_blk20 *cipher20; struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req; struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; @@ -1611,7 +1622,6 @@ int qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cdesc, qat_proto_flag = qat_get_crypto_proto_flag(header->serv_specif_flags); } - cipher_cd_ctrl->cipher_key_sz = total_key_size >> 3; cipher_offset = cdesc->cd_cur_ptr-((uint8_t *)&cdesc->cd); cipher_cd_ctrl->cipher_cfg_offset = cipher_offset >> 3; @@ -1619,6 +1629,7 @@ int qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cdesc, qat_sym_session_init_common_hdr(header, qat_proto_flag); cipher = (struct icp_qat_hw_cipher_algo_blk *)cdesc->cd_cur_ptr; + cipher20 = (struct icp_qat_hw_cipher_algo_blk20 *)cdesc->cd_cur_ptr; cipher->cipher_config.val = ICP_QAT_HW_CIPHER_CONFIG_BUILD(cdesc->qat_mode, cdesc->qat_cipher_alg, key_convert, @@ -1638,6 +1649,19 @@ int qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cdesc, cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_cipher_config) + cipherkeylen + cipherkeylen; + } else if (cdesc->is_ucs) { + const uint8_t *final_key = cipherkey; + + total_key_size = RTE_ALIGN_CEIL(cipherkeylen, + ICP_QAT_HW_AES_128_KEY_SZ); + cipher20->cipher_config.reserved[0] = 0; + cipher20->cipher_config.reserved[1] = 0; + cipher20->cipher_config.reserved[2] = 0; + + rte_memcpy(cipher20->key, final_key, cipherkeylen); + cdesc->cd_cur_ptr += + sizeof(struct icp_qat_hw_ucs_cipher_config) + + cipherkeylen; } else { memcpy(cipher->key, cipherkey, cipherkeylen); cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_cipher_config) + @@ -1664,6 +1688,7 @@ int qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cdesc, } cd_size = cdesc->cd_cur_ptr-(uint8_t *)&cdesc->cd; cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3; + cipher_cd_ctrl->cipher_key_sz = total_key_size >> 3; return 0; }
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 72eee06597..4450df6911 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -92,6 +92,7 @@ struct qat_sym_session { uint8_t aes_cmac; uint8_t is_single_pass; uint8_t is_single_pass_gmac; + uint8_t is_ucs; }; int

From patchwork Mon Jun 28 16:34:23 2021
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 94912
(Postfix) with ESMTP id A3A9DA0A0C; Mon, 28 Jun 2021 18:35:07 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DB7204116B; Mon, 28 Jun 2021 18:34:44 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id D697441156 for ; Mon, 28 Jun 2021 18:34:40 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165864" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165864" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:40 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395569" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:38 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:23 +0100 Message-Id: <20210628163434.77741-6-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 05/16] crypto/qat: rename content descriptor functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The content descriptor functions are incorrectly named; renaming them properly will improve readability and facilitate further work. Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym_session.c | 39 ++++++++++++++++++---------- drivers/crypto/qat/qat_sym_session.h | 13 ---------- 2 files changed, 26 insertions(+), 26 deletions(-) diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 2c44b1f1aa..56c85e8435 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -57,6 +57,19 @@ static const uint8_t sha512InitialState[] = { 0x2b, 0x3e, 0x6c, 0x1f, 0x1f, 0x83, 0xd9, 0xab, 0xfb, 0x41, 0xbd, 0x6b, 0x5b, 0xe0, 0xcd, 0x19, 0x13, 0x7e, 0x21, 0x79}; +static int +qat_sym_cd_cipher_set(struct qat_sym_session *cd, + const uint8_t *enckey, + uint32_t enckeylen); + +static int +qat_sym_cd_auth_set(struct qat_sym_session *cdesc, + const uint8_t *authkey, + uint32_t authkeylen, + uint32_t aad_length, + uint32_t digestsize, + unsigned int operation); + /** Frees a context previously created * Depends on openssl libcrypto */ @@ -420,7 +433,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev, else session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT; - if (qat_sym_session_aead_create_cd_cipher(session, + if (qat_sym_cd_cipher_set(session, cipher_xform->key.data, cipher_xform->key.length)) { ret = -EINVAL; @@ -669,7 +682,7 @@ qat_sym_session_handle_single_pass(struct qat_sym_session *session, } session->cipher_iv.offset = aead_xform->iv.offset; session->cipher_iv.length = aead_xform->iv.length; - if (qat_sym_session_aead_create_cd_cipher(session, + if (qat_sym_cd_cipher_set(session, aead_xform->key.data, aead_xform->key.length)) return -EINVAL; session->aad_len = aead_xform->aad_length; @@ -825,12 +838,12 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, * then authentication */ - if
(qat_sym_session_aead_create_cd_cipher(session, + if (qat_sym_cd_cipher_set(session, auth_xform->key.data, auth_xform->key.length)) return -EINVAL; - if (qat_sym_session_aead_create_cd_auth(session, + if (qat_sym_cd_auth_set(session, key_data, key_length, 0, @@ -845,7 +858,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, * then cipher */ - if (qat_sym_session_aead_create_cd_auth(session, + if (qat_sym_cd_auth_set(session, key_data, key_length, 0, @@ -853,7 +866,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, auth_xform->op)) return -EINVAL; - if (qat_sym_session_aead_create_cd_cipher(session, + if (qat_sym_cd_cipher_set(session, auth_xform->key.data, auth_xform->key.length)) return -EINVAL; @@ -861,7 +874,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, /* Restore to authentication only only */ session->qat_cmd = ICP_QAT_FW_LA_CMD_AUTH; } else { - if (qat_sym_session_aead_create_cd_auth(session, + if (qat_sym_cd_auth_set(session, key_data, key_length, 0, @@ -948,12 +961,12 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, crypto_operation = aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM ? RTE_CRYPTO_AUTH_OP_GENERATE : RTE_CRYPTO_AUTH_OP_VERIFY; - if (qat_sym_session_aead_create_cd_cipher(session, + if (qat_sym_cd_cipher_set(session, aead_xform->key.data, aead_xform->key.length)) return -EINVAL; - if (qat_sym_session_aead_create_cd_auth(session, + if (qat_sym_cd_auth_set(session, aead_xform->key.data, aead_xform->key.length, aead_xform->aad_length, @@ -970,7 +983,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, crypto_operation = aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM ? RTE_CRYPTO_AUTH_OP_VERIFY : RTE_CRYPTO_AUTH_OP_GENERATE; - if (qat_sym_session_aead_create_cd_auth(session, + if (qat_sym_cd_auth_set(session, aead_xform->key.data, aead_xform->key.length, aead_xform->aad_length, @@ -978,7 +991,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, crypto_operation)) return -EINVAL; - if (qat_sym_session_aead_create_cd_cipher(session, + if (qat_sym_cd_cipher_set(session, aead_xform->key.data, aead_xform->key.length)) return -EINVAL; @@ -1523,7 +1536,7 @@ qat_get_crypto_proto_flag(uint16_t flags) return qat_proto_flag; } -int qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cdesc, +int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, const uint8_t *cipherkey, uint32_t cipherkeylen) { @@ -1693,7 +1706,7 @@ int qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cdesc, return 0; } -int qat_sym_session_aead_create_cd_auth(struct qat_sym_session *cdesc, +int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, const uint8_t *authkey, uint32_t authkeylen, uint32_t aad_length, diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 4450df6911..5d28e5a089 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -120,19 +120,6 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, struct rte_crypto_sym_xform *xform, struct qat_sym_session *session); -int -qat_sym_session_aead_create_cd_cipher(struct qat_sym_session *cd, - const uint8_t *enckey, - uint32_t enckeylen); - -int -qat_sym_session_aead_create_cd_auth(struct qat_sym_session *cdesc, - const uint8_t *authkey, - uint32_t authkeylen, - uint32_t aad_length, - uint32_t digestsize, - unsigned int operation); - void qat_sym_session_clear(struct rte_cryptodev *dev, struct rte_cryptodev_sym_session *session); From patchwork Mon Jun 28 16:34:24 2021 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94913 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DAF33A0A0C; Mon, 28 Jun 2021 18:35:15 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0248641170; Mon, 28 Jun 2021 18:34:47 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id C7C7441169 for ; Mon, 28 Jun 2021 18:34:44 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165883" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165883" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395592" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:42 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:24 +0100 Message-Id: <20210628163434.77741-7-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 06/16] crypto/qat: add legacy gcm and ccm X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add AES-GCM, AES-CCM algorithms in legacy mode. 
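For reference, the capability entry added below bounds what an application may request. The following is a minimal application-side sketch, illustrative only and not part of this patch (the helper name setup_gcm_xform and its iv_offset parameter are invented for the example), of an AEAD transform that stays within the advertised AES-GCM ranges:

#include <string.h>
#include <rte_crypto_sym.h>

/* Illustrative sketch: fill an AEAD transform for AES-GCM within the
 * ranges advertised by the capability entry below: 16/24/32 byte keys,
 * 8..16 byte digests, up to 240 bytes of AAD and a 12 byte IV.
 */
static void
setup_gcm_xform(struct rte_crypto_sym_xform *xform, const uint8_t *key,
		uint16_t key_len, uint16_t iv_offset)
{
	memset(xform, 0, sizeof(*xform));
	xform->type = RTE_CRYPTO_SYM_XFORM_AEAD;
	xform->aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
	xform->aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
	xform->aead.key.data = key;
	xform->aead.key.length = key_len;	/* 16, 24 or 32 bytes */
	xform->aead.iv.offset = iv_offset;	/* IV in the op private area */
	xform->aead.iv.length = 12;
	xform->aead.digest_length = 16;		/* 8..16, step 4 */
	xform->aead.aad_length = 16;		/* any value up to 240 */
}

Following the usual DPDK convention, iv_offset would point into the crypto op private area, e.g. sizeof(struct rte_crypto_op) + sizeof(struct rte_crypto_sym_op).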
Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym_capabilities.h | 60 +++++++++++++++++++++++ drivers/crypto/qat/qat_sym_session.c | 27 +++++----- drivers/crypto/qat/qat_sym_session.h | 3 +- 3 files changed, 78 insertions(+), 12 deletions(-) diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h index aca528b991..fc8e667687 100644 --- a/drivers/crypto/qat/qat_sym_capabilities.h +++ b/drivers/crypto/qat/qat_sym_capabilities.h @@ -1084,6 +1084,66 @@ } \ }, } \ }, } \ + }, \ + { /* AES GCM */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ + {.aead = { \ + .algo = RTE_CRYPTO_AEAD_AES_GCM, \ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 32, \ + .increment = 8 \ + }, \ + .digest_size = { \ + .min = 8, \ + .max = 16, \ + .increment = 4 \ + }, \ + .aad_size = { \ + .min = 0, \ + .max = 240, \ + .increment = 1 \ + }, \ + .iv_size = { \ + .min = 0, \ + .max = 12, \ + .increment = 12 \ + }, \ + }, } \ + }, } \ + }, \ + { /* AES CCM */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ + {.aead = { \ + .algo = RTE_CRYPTO_AEAD_AES_CCM, \ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 4, \ + .max = 16, \ + .increment = 2 \ + }, \ + .aad_size = { \ + .min = 0, \ + .max = 224, \ + .increment = 1 \ + }, \ + .iv_size = { \ + .min = 7, \ + .max = 13, \ + .increment = 1 \ + }, \ + }, } \ + }, } \ } \ diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 56c85e8435..5140d61a9c 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -287,13 +287,8 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev, goto error_out; } session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE; - if (qat_dev_gen == QAT_GEN4) { - /* TODO: Filter WCP */ - ICP_QAT_FW_LA_SLICE_TYPE_SET( - session->fw_req.comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE); + if (qat_dev_gen == QAT_GEN4) session->is_ucs = 1; - } break; case RTE_CRYPTO_CIPHER_SNOW3G_UEA2: if (qat_sym_validate_snow3g_key(cipher_xform->key.length, @@ -918,14 +913,15 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, } session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE; session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128; - if (qat_dev_gen > QAT_GEN2 && aead_xform->iv.length == + if (qat_dev_gen == QAT_GEN3 && aead_xform->iv.length == QAT_AES_GCM_SPC_IV_SIZE) { return qat_sym_session_handle_single_pass(session, aead_xform); } if (session->cipher_iv.length == 0) session->cipher_iv.length = AES_GCM_J0_LEN; - + if (qat_dev_gen == QAT_GEN4) + session->is_ucs = 1; break; case RTE_CRYPTO_AEAD_AES_CCM: if (qat_sym_validate_aes_key(aead_xform->key.length, @@ -935,6 +931,8 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, } session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE; session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC; + if (qat_dev_gen == QAT_GEN4) + session->is_ucs = 1; break; case RTE_CRYPTO_AEAD_CHACHA20_POLY1305: if (aead_xform->key.length != ICP_QAT_HW_CHACHAPOLY_KEY_SZ) @@ -1469,7 +1467,8 @@ static int qat_sym_do_precomputes(enum icp_qat_hw_auth_algo hash_alg, } static void -qat_sym_session_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header, +qat_sym_session_init_common_hdr(struct qat_sym_session *session, + struct icp_qat_fw_comn_req_hdr *header, enum qat_sym_proto_flag 
proto_flags) { header->hdr_flags = @@ -1510,6 +1509,12 @@ qat_sym_session_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header, ICP_QAT_FW_LA_NO_UPDATE_STATE); ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags, ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER); + + if (session->is_ucs) { + ICP_QAT_FW_LA_SLICE_TYPE_SET( + session->fw_req.comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE); + } } /* @@ -1639,7 +1644,7 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, cipher_cd_ctrl->cipher_cfg_offset = cipher_offset >> 3; header->service_cmd_id = cdesc->qat_cmd; - qat_sym_session_init_common_hdr(header, qat_proto_flag); + qat_sym_session_init_common_hdr(cdesc, header, qat_proto_flag); cipher = (struct icp_qat_hw_cipher_algo_blk *)cdesc->cd_cur_ptr; cipher20 = (struct icp_qat_hw_cipher_algo_blk20 *)cdesc->cd_cur_ptr; @@ -2032,7 +2037,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, } /* Request template setup */ - qat_sym_session_init_common_hdr(header, qat_proto_flag); + qat_sym_session_init_common_hdr(cdesc, header, qat_proto_flag); header->service_cmd_id = cdesc->qat_cmd; /* Auth CD config setup */ diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 5d28e5a089..e003a34f7f 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -128,7 +128,8 @@ unsigned int qat_sym_session_get_private_size(struct rte_cryptodev *dev); void -qat_sym_sesssion_init_common_hdr(struct icp_qat_fw_comn_req_hdr *header, +qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session, + struct icp_qat_fw_comn_req_hdr *header, enum qat_sym_proto_flag proto_flags); int qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg); From patchwork Mon Jun 28 16:34:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94914 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9EF23A0A0C; Mon, 28 Jun 2021 18:35:25 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8DB584117F; Mon, 28 Jun 2021 18:34:48 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 2507D41173 for ; Mon, 28 Jun 2021 18:34:46 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165894" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165894" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395605" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:45 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:25 +0100 Message-Id: <20210628163434.77741-8-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 07/16] crypto/qat: rework init common header function X-BeenThere: 
dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Rework init common header function for request descriptor so it can be called only once. Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym.c | 25 +-- drivers/crypto/qat/qat_sym_session.c | 265 ++++++++++++++------------- drivers/crypto/qat/qat_sym_session.h | 12 ++ 3 files changed, 158 insertions(+), 144 deletions(-) diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 9415ec7d32..eef4a886c5 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -289,8 +289,9 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, auth_param = (void *)((uint8_t *)cipher_param + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); - if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || - ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) { + if ((ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || + ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) && + !ctx->is_gmac) { /* AES-GCM or AES-CCM */ if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || @@ -303,7 +304,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, do_auth = 1; do_cipher = 1; } - } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) { + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH || ctx->is_gmac) { do_auth = 1; do_cipher = 0; } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { @@ -383,15 +384,6 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, auth_param->u1.aad_adr = 0; auth_param->u2.aad_sz = 0; - /* - * If len(iv)==12B fw computes J0 - */ - if (ctx->auth_iv.length == 12) { - ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( - qat_req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); - - } } else { auth_ofs = op->sym->auth.data.offset; auth_len = op->sym->auth.data.length; @@ -416,14 +408,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) { - /* - * If len(iv)==12B fw computes J0 - */ - if (ctx->cipher_iv.length == 12) { - ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( - qat_req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); - } + set_cipher_iv(ctx->cipher_iv.length, ctx->cipher_iv.offset, cipher_param, op, qat_req); diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 5140d61a9c..fd6fe4423d 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -69,6 +69,16 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc, uint32_t aad_length, uint32_t digestsize, unsigned int operation); +static void +qat_sym_session_init_common_hdr(struct qat_sym_session *session); + +/* Req/cd init functions */ + +static void +qat_sym_session_finalize(struct qat_sym_session *session) +{ + qat_sym_session_init_common_hdr(session); +} /** Frees a context previously created * Depends on openssl libcrypto @@ -558,6 +568,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; int ret; int qat_cmd_id; + int handle_mixed = 0; /* Verify the session physical address is known */ rte_iova_t session_paddr = rte_mempool_virt2iova(session); @@ -573,6 +584,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, offsetof(struct qat_sym_session, cd); 
session->min_qat_dev_gen = QAT_GEN1; + session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE; session->is_ucs = 0; /* Get requested QAT command id */ @@ -612,8 +624,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, xform, session); if (ret < 0) return ret; - /* Special handling of mixed hash+cipher algorithms */ - qat_sym_session_handle_mixed(dev, session); + handle_mixed = 1; } break; case ICP_QAT_FW_LA_CMD_HASH_CIPHER: @@ -631,8 +642,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, xform, session); if (ret < 0) return ret; - /* Special handling of mixed hash+cipher algorithms */ - qat_sym_session_handle_mixed(dev, session); + handle_mixed = 1; } break; case ICP_QAT_FW_LA_CMD_TRNG_GET_RANDOM: @@ -652,72 +662,41 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, session->qat_cmd); return -ENOTSUP; } + qat_sym_session_finalize(session); + if (handle_mixed) { + /* Special handling of mixed hash+cipher algorithms */ + qat_sym_session_handle_mixed(dev, session); + } return 0; } static int qat_sym_session_handle_single_pass(struct qat_sym_session *session, - struct rte_crypto_aead_xform *aead_xform) + const struct rte_crypto_aead_xform *aead_xform) { - struct icp_qat_fw_la_cipher_req_params *cipher_param = - (void *) &session->fw_req.serv_specif_rqpars; - session->is_single_pass = 1; + session->is_auth = 1; session->min_qat_dev_gen = QAT_GEN3; session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER; + /* Chacha-Poly is special case that use QAT CTR mode */ if (aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) { session->qat_mode = ICP_QAT_HW_CIPHER_AEAD_MODE; - ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( - session->fw_req.comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); } else { - /* Chacha-Poly is special case that use QAT CTR mode */ session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE; } session->cipher_iv.offset = aead_xform->iv.offset; session->cipher_iv.length = aead_xform->iv.length; - if (qat_sym_cd_cipher_set(session, - aead_xform->key.data, aead_xform->key.length)) - return -EINVAL; session->aad_len = aead_xform->aad_length; session->digest_length = aead_xform->digest_length; + if (aead_xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT) { session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT; session->auth_op = ICP_QAT_HW_AUTH_GENERATE; - ICP_QAT_FW_LA_RET_AUTH_SET( - session->fw_req.comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_RET_AUTH_RES); } else { session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT; session->auth_op = ICP_QAT_HW_AUTH_VERIFY; - ICP_QAT_FW_LA_CMP_AUTH_SET( - session->fw_req.comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_CMP_AUTH_RES); } - ICP_QAT_FW_LA_SINGLE_PASS_PROTO_FLAG_SET( - session->fw_req.comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_SINGLE_PASS_PROTO); - ICP_QAT_FW_LA_PROTO_SET( - session->fw_req.comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_NO_PROTO); - session->fw_req.comn_hdr.service_cmd_id = - ICP_QAT_FW_LA_CMD_CIPHER; - session->cd.cipher.cipher_config.val = - ICP_QAT_HW_CIPHER_CONFIG_BUILD( - ICP_QAT_HW_CIPHER_AEAD_MODE, - session->qat_cipher_alg, - ICP_QAT_HW_CIPHER_NO_CONVERT, - session->qat_dir); - QAT_FIELD_SET(session->cd.cipher.cipher_config.val, - aead_xform->digest_length, - QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS, - QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK); - session->cd.cipher.cipher_config.reserved = - ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER( - aead_xform->aad_length); - cipher_param->spc_aad_sz = aead_xform->aad_length; - cipher_param->spc_auth_res_sz = aead_xform->digest_length; return 0; } @@ -737,6 +716,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev 
*dev, session->auth_iv.offset = auth_xform->iv.offset; session->auth_iv.length = auth_xform->iv.length; session->auth_mode = ICP_QAT_HW_AUTH_MODE1; + session->is_auth = 1; switch (auth_xform->algo) { case RTE_CRYPTO_AUTH_SHA1: @@ -791,6 +771,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128; if (session->auth_iv.length == 0) session->auth_iv.length = AES_GCM_J0_LEN; + else + session->is_iv12B = 1; break; case RTE_CRYPTO_AUTH_SNOW3G_UIA2: session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2; @@ -825,6 +807,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, } if (auth_xform->algo == RTE_CRYPTO_AUTH_AES_GMAC) { + session->is_gmac = 1; if (auth_xform->op == RTE_CRYPTO_AUTH_OP_GENERATE) { session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_HASH; session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT; @@ -832,7 +815,6 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, * It needs to create cipher desc content first, * then authentication */ - if (qat_sym_cd_cipher_set(session, auth_xform->key.data, auth_xform->key.length)) @@ -866,8 +848,6 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, auth_xform->key.length)) return -EINVAL; } - /* Restore to authentication only only */ - session->qat_cmd = ICP_QAT_FW_LA_CMD_AUTH; } else { if (qat_sym_cd_auth_set(session, key_data, @@ -902,6 +882,8 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, session->cipher_iv.length = xform->aead.iv.length; session->auth_mode = ICP_QAT_HW_AUTH_MODE1; + session->is_auth = 1; + session->digest_length = aead_xform->digest_length; session->is_single_pass = 0; switch (aead_xform->algo) { @@ -913,15 +895,19 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, } session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE; session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128; - if (qat_dev_gen == QAT_GEN3 && aead_xform->iv.length == - QAT_AES_GCM_SPC_IV_SIZE) { - return qat_sym_session_handle_single_pass(session, - aead_xform); - } - if (session->cipher_iv.length == 0) - session->cipher_iv.length = AES_GCM_J0_LEN; + if (qat_dev_gen == QAT_GEN4) session->is_ucs = 1; + + if (session->cipher_iv.length == 0) { + session->cipher_iv.length = AES_GCM_J0_LEN; + break; + } + session->is_iv12B = 1; + if (qat_dev_gen == QAT_GEN3) { + qat_sym_session_handle_single_pass(session, + aead_xform); + } break; case RTE_CRYPTO_AEAD_AES_CCM: if (qat_sym_validate_aes_key(aead_xform->key.length, @@ -939,15 +925,20 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, return -EINVAL; session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305; - return qat_sym_session_handle_single_pass(session, + qat_sym_session_handle_single_pass(session, aead_xform); + break; default: QAT_LOG(ERR, "Crypto: Undefined AEAD specified %u\n", aead_xform->algo); return -EINVAL; } - if ((aead_xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT && + if (session->is_single_pass) { + if (qat_sym_cd_cipher_set(session, + aead_xform->key.data, aead_xform->key.length)) + return -EINVAL; + } else if ((aead_xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT && aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) || (aead_xform->op == RTE_CRYPTO_AEAD_OP_DECRYPT && aead_xform->algo == RTE_CRYPTO_AEAD_AES_CCM)) { @@ -995,7 +986,6 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, return -EINVAL; } - session->digest_length = aead_xform->digest_length; return 0; } @@ -1467,13 +1457,17 @@ static int qat_sym_do_precomputes(enum icp_qat_hw_auth_algo hash_alg, } static void 
-qat_sym_session_init_common_hdr(struct qat_sym_session *session, - struct icp_qat_fw_comn_req_hdr *header, - enum qat_sym_proto_flag proto_flags) +qat_sym_session_init_common_hdr(struct qat_sym_session *session) { + struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req; + struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; + enum qat_sym_proto_flag proto_flags = session->qat_proto_flag; + uint32_t slice_flags = session->slice_types; + header->hdr_flags = ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET); header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA; + header->service_cmd_id = session->qat_cmd; header->comn_req_flags = ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR, QAT_COMN_PTR_TYPE_FLAT); @@ -1505,40 +1499,47 @@ qat_sym_session_init_common_hdr(struct qat_sym_session *session, break; } - ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_NO_UPDATE_STATE); - ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER); - - if (session->is_ucs) { + /* More than one of the following flags can be set at once */ + if (QAT_SESSION_IS_SLICE_SET(slice_flags, QAT_CRYPTO_SLICE_SPC)) { + ICP_QAT_FW_LA_SINGLE_PASS_PROTO_FLAG_SET( + header->serv_specif_flags, + ICP_QAT_FW_LA_SINGLE_PASS_PROTO); + } + if (QAT_SESSION_IS_SLICE_SET(slice_flags, QAT_CRYPTO_SLICE_UCS)) { ICP_QAT_FW_LA_SLICE_TYPE_SET( - session->fw_req.comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE); + header->serv_specif_flags, + ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE); } -} -/* - * Snow3G and ZUC should never use this function - * and set its protocol flag in both cipher and auth part of content - * descriptor building function - */ -static enum qat_sym_proto_flag -qat_get_crypto_proto_flag(uint16_t flags) -{ - int proto = ICP_QAT_FW_LA_PROTO_GET(flags); - enum qat_sym_proto_flag qat_proto_flag = - QAT_CRYPTO_PROTO_FLAG_NONE; + if (session->is_auth) { + if (session->auth_op == ICP_QAT_HW_AUTH_VERIFY) { + ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_RET_AUTH_RES); + ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_CMP_AUTH_RES); + } else if (session->auth_op == ICP_QAT_HW_AUTH_GENERATE) { + ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_RET_AUTH_RES); + ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_CMP_AUTH_RES); + } + } else { + ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_RET_AUTH_RES); + ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_CMP_AUTH_RES); + } - switch (proto) { - case ICP_QAT_FW_LA_GCM_PROTO: - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM; - break; - case ICP_QAT_FW_LA_CCM_PROTO: - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_CCM; - break; + if (session->is_iv12B) { + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + header->serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); } - return qat_proto_flag; + ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_UPDATE_STATE); + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags, + ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER); } int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, @@ -1554,8 +1555,12 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr; struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr; enum icp_qat_hw_cipher_convert key_convert; - enum qat_sym_proto_flag qat_proto_flag = - QAT_CRYPTO_PROTO_FLAG_NONE; + struct 
icp_qat_fw_la_cipher_20_req_params *req_ucs = + (struct icp_qat_fw_la_cipher_20_req_params *) + &cdesc->fw_req.serv_specif_rqpars; + struct icp_qat_fw_la_cipher_req_params *req_cipher = + (struct icp_qat_fw_la_cipher_req_params *) + &cdesc->fw_req.serv_specif_rqpars; uint32_t total_key_size; uint16_t cipher_offset, cd_size; uint32_t wordIndex = 0; @@ -1591,9 +1596,16 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, if (cdesc->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE) { /* * CTR Streaming ciphers are a special case. Decrypt = encrypt - * Overriding default values previously set + * Overriding default values previously set. + * Chacha20-Poly1305 is special case, CTR but single-pass + * so both direction need to be used. */ cdesc->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT; + if (cdesc->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 && + cdesc->auth_op == ICP_QAT_HW_AUTH_VERIFY) { + cdesc->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT; + } key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT; } else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 || cdesc->qat_cipher_alg == @@ -1601,6 +1613,8 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT; else if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT) key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT; + else if (cdesc->qat_mode == ICP_QAT_HW_CIPHER_AEAD_MODE) + key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT; else key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT; @@ -1609,7 +1623,7 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ; cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ >> 3; - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G; + cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G; } else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI) { total_key_size = ICP_QAT_HW_KASUMI_F8_KEY_SZ; @@ -1619,33 +1633,24 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, } else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES) { total_key_size = ICP_QAT_HW_3DES_KEY_SZ; cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_3DES_BLK_SZ >> 3; - qat_proto_flag = - qat_get_crypto_proto_flag(header->serv_specif_flags); } else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_DES) { total_key_size = ICP_QAT_HW_DES_KEY_SZ; cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_DES_BLK_SZ >> 3; - qat_proto_flag = - qat_get_crypto_proto_flag(header->serv_specif_flags); } else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) { total_key_size = ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ + ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ; cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3; - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC; + cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC; cdesc->min_qat_dev_gen = QAT_GEN2; } else { total_key_size = cipherkeylen; cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3; - qat_proto_flag = - qat_get_crypto_proto_flag(header->serv_specif_flags); } cipher_offset = cdesc->cd_cur_ptr-((uint8_t *)&cdesc->cd); cipher_cd_ctrl->cipher_cfg_offset = cipher_offset >> 3; - header->service_cmd_id = cdesc->qat_cmd; - qat_sym_session_init_common_hdr(cdesc, header, qat_proto_flag); - cipher = (struct icp_qat_hw_cipher_algo_blk *)cdesc->cd_cur_ptr; cipher20 = (struct icp_qat_hw_cipher_algo_blk20 *)cdesc->cd_cur_ptr; cipher->cipher_config.val = @@ -1670,6 +1675,7 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, } else if (cdesc->is_ucs) { const uint8_t *final_key = cipherkey; + cdesc->slice_types |= 
QAT_CRYPTO_SLICE_UCS; total_key_size = RTE_ALIGN_CEIL(cipherkeylen, ICP_QAT_HW_AES_128_KEY_SZ); cipher20->cipher_config.reserved[0] = 0; @@ -1686,6 +1692,18 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, cipherkeylen; } + if (cdesc->is_single_pass) { + QAT_FIELD_SET(cipher->cipher_config.val, + cdesc->digest_length, + QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS, + QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK); + /* UCS and SPC 1.8/2.0 share configuration of 2nd config word */ + cdesc->cd.cipher.cipher_config.reserved = + ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER( + cdesc->aad_len); + cdesc->slice_types |= QAT_CRYPTO_SLICE_SPC; + } + if (total_key_size > cipherkeylen) { uint32_t padding_size = total_key_size-cipherkeylen; if ((cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES) @@ -1704,6 +1722,20 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, cdesc->cd_cur_ptr += padding_size; } + if (cdesc->is_ucs) { + /* + * These values match in terms of position auth + * slice request fields + */ + req_ucs->spc_auth_res_sz = cdesc->digest_length; + if (!cdesc->is_gmac) { + req_ucs->spc_aad_sz = cdesc->aad_len; + req_ucs->spc_aad_offset = 0; + } + } else if (cdesc->is_single_pass) { + req_cipher->spc_aad_sz = cdesc->aad_len; + req_cipher->spc_auth_res_sz = cdesc->digest_length; + } cd_size = cdesc->cd_cur_ptr-(uint8_t *)&cdesc->cd; cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3; cipher_cd_ctrl->cipher_key_sz = total_key_size >> 3; @@ -1722,7 +1754,6 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, struct icp_qat_hw_cipher_algo_blk *cipherconfig; struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req; struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; - struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr; void *ptr = &req_tmpl->cd_ctrl; struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr; struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr; @@ -1735,8 +1766,6 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, uint32_t *aad_len = NULL; uint32_t wordIndex = 0; uint32_t *pTempKey; - enum qat_sym_proto_flag qat_proto_flag = - QAT_CRYPTO_PROTO_FLAG_NONE; if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) { ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl, @@ -1759,19 +1788,10 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, return -EFAULT; } - if (operation == RTE_CRYPTO_AUTH_OP_VERIFY) { - ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_NO_RET_AUTH_RES); - ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_CMP_AUTH_RES); + if (operation == RTE_CRYPTO_AUTH_OP_VERIFY) cdesc->auth_op = ICP_QAT_HW_AUTH_VERIFY; - } else { - ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_RET_AUTH_RES); - ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags, - ICP_QAT_FW_LA_NO_CMP_AUTH_RES); + else cdesc->auth_op = ICP_QAT_HW_AUTH_GENERATE; - } /* * Setup the inner hash config @@ -1913,7 +1933,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, break; case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM; + cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM; state1_size = ICP_QAT_HW_GALOIS_128_STATE1_SZ; if (qat_sym_do_precomputes(cdesc->qat_hash_alg, authkey, authkeylen, cdesc->cd_cur_ptr + state1_size, @@ -1936,7 +1956,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, cdesc->aad_len = aad_length; break; case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2: - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G; + 
cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G; state1_size = qat_hash_get_state1_size( ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2); state2_size = ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ; @@ -1960,7 +1980,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, hash->auth_config.config = ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0, cdesc->qat_hash_alg, digestsize); - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC; + cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC; state1_size = qat_hash_get_state1_size( ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3); state2_size = ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ; @@ -1988,7 +2008,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, state2_size = ICP_QAT_HW_NULL_STATE2_SZ; break; case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: - qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_CCM; + cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_CCM; state1_size = qat_hash_get_state1_size( ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC); state2_size = ICP_QAT_HW_AES_CBC_MAC_KEY_SZ + @@ -2036,10 +2056,6 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, return -EFAULT; } - /* Request template setup */ - qat_sym_session_init_common_hdr(cdesc, header, qat_proto_flag); - header->service_cmd_id = cdesc->qat_cmd; - /* Auth CD config setup */ hash_cd_ctrl->hash_cfg_offset = hash_offset >> 3; hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED; @@ -2248,6 +2264,7 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev, ret = qat_sym_session_configure_cipher(dev, xform, session); if (ret < 0) return ret; + qat_sym_session_finalize(session); return 0; } diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index e003a34f7f..1568e09200 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -48,6 +48,13 @@ #define QAT_AES_CMAC_CONST_RB 0x87 +#define QAT_CRYPTO_SLICE_SPC 1 +#define QAT_CRYPTO_SLICE_UCS 2 +#define QAT_CRYPTO_SLICE_WCP 4 + +#define QAT_SESSION_IS_SLICE_SET(flags, flag) \ + (!!((flags) & (flag))) + enum qat_sym_proto_flag { QAT_CRYPTO_PROTO_FLAG_NONE = 0, QAT_CRYPTO_PROTO_FLAG_CCM = 1, @@ -93,6 +100,11 @@ struct qat_sym_session { uint8_t is_single_pass; uint8_t is_single_pass_gmac; uint8_t is_ucs; + uint8_t is_iv12B; + uint8_t is_gmac; + uint8_t is_auth; + uint32_t slice_types; + enum qat_sym_proto_flag qat_proto_flag; }; int From patchwork Mon Jun 28 16:34:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94915 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8A126A0A0C; Mon, 28 Jun 2021 18:35:34 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BED734115F; Mon, 28 Jun 2021 18:34:51 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 8109641151 for ; Mon, 28 Jun 2021 18:34:50 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165914" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165914" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:50 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395628" Received: from 
silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:47 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:26 +0100 Message-Id: <20210628163434.77741-9-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 08/16] crypto/qat: add aes gcm in ucs spc mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit adds the AES-GCM algorithm that works in UCS (Unified crypto slice) SPC (Single-Pass) mode. Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym.c | 32 ++++++++++++++++++++-------- drivers/crypto/qat/qat_sym_session.c | 9 ++++---- 2 files changed, 27 insertions(+), 14 deletions(-) diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index eef4a886c5..00fc4d6b1a 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -217,6 +217,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, int ret = 0; struct qat_sym_session *ctx = NULL; struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_cipher_20_req_params *cipher_param20; struct icp_qat_fw_la_auth_req_params *auth_param; register struct icp_qat_fw_la_bulk_req *qat_req; uint8_t do_auth = 0, do_cipher = 0, do_aead = 0; @@ -286,6 +287,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req)); qat_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op; cipher_param = (void *)&qat_req->serv_specif_rqpars; + cipher_param20 = (void *)&qat_req->serv_specif_rqpars; auth_param = (void *)((uint8_t *)cipher_param + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); @@ -563,13 +565,17 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, cipher_param->cipher_length = 0; } - if (do_auth || do_aead) { - auth_param->auth_off = (uint32_t)rte_pktmbuf_iova_offset( + if (!ctx->is_single_pass) { + /* Do not overwrite the spc_aad length */ + if (do_auth || do_aead) { + auth_param->auth_off = + (uint32_t)rte_pktmbuf_iova_offset( op->sym->m_src, auth_ofs) - src_buf_start; - auth_param->auth_len = auth_len; - } else { - auth_param->auth_off = 0; - auth_param->auth_len = 0; + auth_param->auth_len = auth_len; + } else { + auth_param->auth_off = 0; + auth_param->auth_len = 0; + } } qat_req->comn_mid.dst_length = @@ -675,10 +681,18 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, } if (ctx->is_single_pass) { - /* Handle Single-Pass GCM */ - cipher_param->spc_aad_addr = op->sym->aead.aad.phys_addr; - cipher_param->spc_auth_res_addr = + if (ctx->is_ucs) { + /* GEN 4 */ + cipher_param20->spc_aad_addr = + op->sym->aead.aad.phys_addr; + cipher_param20->spc_auth_res_addr = op->sym->aead.digest.phys_addr; + } else { + cipher_param->spc_aad_addr = + op->sym->aead.aad.phys_addr; + cipher_param->spc_auth_res_addr = + op->sym->aead.digest.phys_addr; + } } else if (ctx->is_single_pass_gmac && op->sym->auth.data.length <= QAT_AES_GMAC_SPC_MAX_SIZE) { /* Handle Single-Pass AES-GMAC */ diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index fd6fe4423d..019c9f4f02 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -898,16 +898,15 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, if (qat_dev_gen == QAT_GEN4) session->is_ucs = 1; - if (session->cipher_iv.length == 0) { session->cipher_iv.length = AES_GCM_J0_LEN; break; } session->is_iv12B = 1; - if (qat_dev_gen == QAT_GEN3) { - qat_sym_session_handle_single_pass(session, - aead_xform); - } + if (qat_dev_gen < QAT_GEN3) + break; + qat_sym_session_handle_single_pass(session, + aead_xform); break; case RTE_CRYPTO_AEAD_AES_CCM: if (qat_sym_validate_aes_key(aead_xform->key.length, From patchwork Mon Jun 28 16:34:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94916 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E0A1BA0A0C; Mon, 28 Jun 2021 18:35:43 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 596604118A; Mon, 28 Jun 2021 18:34:53 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 1B0564114C for ; Mon, 28 Jun 2021 18:34:51 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165917" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165917" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395652" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:50 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:27 +0100 Message-Id: <20210628163434.77741-10-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 09/16] crypto/qat: add chacha-poly in ucs spc mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit adds the Chacha20-Poly1305 algorithm that works in UCS (Unified crypto slice) SPC (Single-Pass) mode.
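Because the capability entry added below advertises fixed sizes only (32-byte key, 16-byte digest, 12-byte IV, up to 240 bytes of AAD), an application can probe the device before attempting session creation. A minimal sketch, illustrative only and not part of this patch (the helper name chachapoly_supported is invented for the example):

#include <rte_cryptodev.h>

/* Illustrative sketch: check that a configured cryptodev advertises
 * Chacha20-Poly1305 with the sizes the caller intends to use.
 */
static int
chachapoly_supported(uint8_t dev_id)
{
	const struct rte_cryptodev_symmetric_capability *cap;
	struct rte_cryptodev_sym_capability_idx idx = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.algo.aead = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
	};

	cap = rte_cryptodev_sym_capability_get(dev_id, &idx);
	if (cap == NULL)
		return 0;
	/* rte_cryptodev_sym_capability_check_aead() returns 0 when the
	 * requested key/digest/aad/iv sizes fall within the advertised
	 * ranges.
	 */
	return rte_cryptodev_sym_capability_check_aead(cap,
			32, 16, 0, 12) == 0;
}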
Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym_capabilities.h | 32 ++++++++++++++++++++++- drivers/crypto/qat/qat_sym_session.c | 2 ++ 2 files changed, 33 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h index fc8e667687..5c6e723466 100644 --- a/drivers/crypto/qat/qat_sym_capabilities.h +++ b/drivers/crypto/qat/qat_sym_capabilities.h @@ -1144,7 +1144,37 @@ }, \ }, } \ }, } \ - } \ + }, \ + { /* Chacha20-Poly1305 */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ + {.aead = { \ + .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \ + .block_size = 64, \ + .key_size = { \ + .min = 32, \ + .max = 32, \ + .increment = 0 \ + }, \ + .digest_size = { \ + .min = 16, \ + .max = 16, \ + .increment = 0 \ + }, \ + .aad_size = { \ + .min = 0, \ + .max = 240, \ + .increment = 1 \ + }, \ + .iv_size = { \ + .min = 12, \ + .max = 12, \ + .increment = 0 \ + }, \ + }, } \ + }, } \ + } diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 019c9f4f02..a49da8e364 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -922,6 +922,8 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, case RTE_CRYPTO_AEAD_CHACHA20_POLY1305: if (aead_xform->key.length != ICP_QAT_HW_CHACHAPOLY_KEY_SZ) return -EINVAL; + if (qat_dev_gen == QAT_GEN4) + session->is_ucs = 1; session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305; qat_sym_session_handle_single_pass(session, From patchwork Mon Jun 28 16:34:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94917 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 36551A0A0C; Mon, 28 Jun 2021 18:35:50 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8B7A541183; Mon, 28 Jun 2021 18:34:55 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 4EC5A4118D for ; Mon, 28 Jun 2021 18:34:54 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165924" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165924" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395666" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:52 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:28 +0100 Message-Id: <20210628163434.77741-11-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 10/16] crypto/qat: add gmac in legacy mode on gen 4 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add AES-GMAC algorithm in legacy mode to generation 4 devices. Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym_capabilities.h | 27 ++++++++++++++++++++++- drivers/crypto/qat/qat_sym_session.c | 9 +++++++- drivers/crypto/qat/qat_sym_session.h | 2 ++ 3 files changed, 36 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h index 5c6e723466..cfb176ca94 100644 --- a/drivers/crypto/qat/qat_sym_capabilities.h +++ b/drivers/crypto/qat/qat_sym_capabilities.h @@ -1174,7 +1174,32 @@ }, \ }, } \ }, } \ - } + }, \ + { /* AES GMAC (AUTH) */ \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_AES_GMAC, \ + .block_size = 16, \ + .key_size = { \ + .min = 16, \ + .max = 32, \ + .increment = 8 \ + }, \ + .digest_size = { \ + .min = 8, \ + .max = 16, \ + .increment = 4 \ + }, \ + .iv_size = { \ + .min = 0, \ + .max = 12, \ + .increment = 12 \ + } \ + }, } \ + }, } \ + } \ diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index a49da8e364..03514ca073 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -710,6 +710,8 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, struct qat_sym_dev_private *internals = dev->data->dev_private; const uint8_t *key_data = auth_xform->key.data; uint8_t key_length = auth_xform->key.length; + enum qat_device_gen qat_dev_gen = + internals->qat_dev->qat_dev_gen; session->aes_cmac = 0; session->auth_key_length = auth_xform->key.length; @@ -717,6 +719,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, session->auth_iv.length = auth_xform->iv.length; session->auth_mode = ICP_QAT_HW_AUTH_MODE1; session->is_auth = 1; + session->digest_length = auth_xform->digest_length; switch (auth_xform->algo) { case RTE_CRYPTO_AUTH_SHA1: @@ -773,6 +776,10 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, session->auth_iv.length = AES_GCM_J0_LEN; else session->is_iv12B = 1; + if (qat_dev_gen == QAT_GEN4) { + session->is_cnt_zero = 1; + session->is_ucs = 1; + } break; case RTE_CRYPTO_AUTH_SNOW3G_UIA2: session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2; @@ -858,7 +865,6 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, return -EINVAL; } - session->digest_length = auth_xform->digest_length; return 0; } @@ -1811,6 +1817,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL + || cdesc->is_cnt_zero ) hash->auth_counter.counter = 0; else { diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 1568e09200..33b236e49b 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -103,6 +103,8 @@ struct qat_sym_session { uint8_t is_iv12B; uint8_t is_gmac; uint8_t is_auth; + uint8_t is_cnt_zero; + /* Some generations need different setup of counter */ uint32_t slice_types; enum qat_sym_proto_flag qat_proto_flag; }; From patchwork Mon Jun 28 16:34:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94918 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: 
patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D279AA0A0C; Mon, 28 Jun 2021 18:35:56 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AF37A4118F; Mon, 28 Jun 2021 18:34:59 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id EB5424116A for ; Mon, 28 Jun 2021 18:34:57 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165939" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165939" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:57 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395678" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:54 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:29 +0100 Message-Id: <20210628163434.77741-12-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 11/16] common/qat: add pf2vf communication in qat X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add communication between the physical device and the virtual function in the Intel QuickAssist Technology PMD. Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/common/qat/meson.build | 1 + drivers/common/qat/qat_adf/adf_pf2vf_msg.h | 154 +++++++++++++++++++++ drivers/common/qat/qat_device.c | 22 ++- drivers/common/qat/qat_device.h | 12 ++ drivers/common/qat/qat_pf2vf.c | 80 +++++++++++ drivers/common/qat/qat_pf2vf.h | 19 +++ 6 files changed, 287 insertions(+), 1 deletion(-) create mode 100644 drivers/common/qat/qat_adf/adf_pf2vf_msg.h create mode 100644 drivers/common/qat/qat_pf2vf.c create mode 100644 drivers/common/qat/qat_pf2vf.h diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 479a46f9f0..11ed37c910 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -49,6 +49,7 @@ sources += files( 'qat_qp.c', 'qat_device.c', 'qat_logs.c', + 'qat_pf2vf.c' ) includes += include_directories( 'qat_adf', diff --git a/drivers/common/qat/qat_adf/adf_pf2vf_msg.h b/drivers/common/qat/qat_adf/adf_pf2vf_msg.h new file mode 100644 index 0000000000..4029b1c14a --- /dev/null +++ b/drivers/common/qat/qat_adf/adf_pf2vf_msg.h @@ -0,0 +1,154 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) + * Copyright(c) 2021 Intel Corporation + */ +#ifndef ADF_PF2VF_MSG_H_ +#define ADF_PF2VF_MSG_H_ + +/* VF/PF compatibility version. 
*/ +/* ADF_PFVF_COMPATIBILITY_EXT_CAP: Support for extended capabilities */ +#define ADF_PFVF_COMPATIBILITY_CAPABILITIES 2 +/* ADF_PFVF_COMPATIBILITY_FAST_ACK: In-use pattern cleared by receiver */ +#define ADF_PFVF_COMPATIBILITY_FAST_ACK 3 +#define ADF_PFVF_COMPATIBILITY_RING_TO_SVC_MAP 4 +#define ADF_PFVF_COMPATIBILITY_VERSION 4 /* PF<->VF compat */ + +#define ADF_PFVF_INT 1 +#define ADF_PFVF_MSGORIGIN_SYSTEM 2 +#define ADF_PFVF_1X_MSGTYPE_SHIFT 2 +#define ADF_PFVF_1X_MSGTYPE_MASK 0xF +#define ADF_PFVF_1X_MSGDATA_SHIFT 6 +#define ADF_PFVF_1X_MSGDATA_MASK 0x3FF +#define ADF_PFVF_2X_MSGTYPE_SHIFT 2 +#define ADF_PFVF_2X_MSGTYPE_MASK 0x3F +#define ADF_PFVF_2X_MSGDATA_SHIFT 8 +#define ADF_PFVF_2X_MSGDATA_MASK 0xFFFFFF + +#define ADF_PFVF_IN_USE 0x6AC2 +#define ADF_PFVF_IN_USE_MASK 0xFFFE +#define ADF_PFVF_VF_MSG_SHIFT 16 + +/* PF->VF messages */ +#define ADF_PF2VF_MSGTYPE_RESTARTING 0x01 +#define ADF_PF2VF_MSGTYPE_VERSION_RESP 0x02 +#define ADF_PF2VF_MSGTYPE_BLOCK_RESP 0x03 +#define ADF_PF2VF_MSGTYPE_FATAL_ERROR 0x04 +/* Do not use message types starting from 0x10 with 1.x, as 1.x only uses + * 4 bits for the message type. Hence they are only applicable to 2.0 + */ +#define ADF_PF2VF_MSGTYPE_RP_RESET_RESP 0x10 + +/* PF->VF Version Response - ADF_PF2VF_MSGTYPE_VERSION_RESP */ +#define ADF_PF2VF_VERSION_RESP_VERS_MASK 0xFF +#define ADF_PF2VF_VERSION_RESP_VERS_SHIFT 0 +#define ADF_PF2VF_VERSION_RESP_RESULT_MASK 0x03 +#define ADF_PF2VF_VERSION_RESP_RESULT_SHIFT 8 +#define ADF_PF2VF_MINORVERSION_SHIFT 0 +#define ADF_PF2VF_MAJORVERSION_SHIFT 4 +#define ADF_PF2VF_VF_COMPATIBLE 1 +#define ADF_PF2VF_VF_INCOMPATIBLE 2 +#define ADF_PF2VF_VF_COMPAT_UNKNOWN 3 + +/* PF->VF Block Response Type - ADF_PF2VF_MSGTYPE_BLOCK_RESP */ +#define ADF_PF2VF_BLOCK_RESP_TYPE_DATA 0x0 +#define ADF_PF2VF_BLOCK_RESP_TYPE_CRC 0x1 +#define ADF_PF2VF_BLOCK_RESP_TYPE_ERROR 0x2 +#define ADF_PF2VF_BLOCK_RESP_TYPE_MASK 0x03 +#define ADF_PF2VF_BLOCK_RESP_TYPE_SHIFT 0 +#define ADF_PF2VF_BLOCK_RESP_DATA_MASK 0xFF +#define ADF_PF2VF_BLOCK_RESP_DATA_SHIFT 2 + +/* + * PF->VF Block Error Code - Returned in data field when the + * response type indicates an error + */ +#define ADF_PF2VF_INVALID_BLOCK_TYPE 0x0 +#define ADF_PF2VF_INVALID_BYTE_NUM_REQ 0x1 +#define ADF_PF2VF_PAYLOAD_TRUNCATED 0x2 +#define ADF_PF2VF_UNSPECIFIED_ERROR 0x3 + +/* VF->PF messages */ +#define ADF_VF2PF_MSGTYPE_INIT 0x3 +#define ADF_VF2PF_MSGTYPE_SHUTDOWN 0x4 +#define ADF_VF2PF_MSGTYPE_VERSION_REQ 0x5 +#define ADF_VF2PF_MSGTYPE_COMPAT_VER_REQ 0x6 +#define ADF_VF2PF_MSGTYPE_GET_LARGE_BLOCK_REQ 0x7 +#define ADF_VF2PF_MSGTYPE_GET_MEDIUM_BLOCK_REQ 0x8 +#define ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ 0x9 +/* Do not use message types starting from 0x10 with 1.x, as 1.x only uses + * 4 bits for the message type.
Hence they are only applicable to 2.0 + */ +#define ADF_VF2PF_MSGTYPE_RP_RESET 0x10 + +/* VF->PF Block Request Type - ADF_VF2PF_MSGTYPE_GET_xxx_BLOCK_REQ */ +#define ADF_VF2PF_MIN_SMALL_MESSAGE_TYPE 0 +#define ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE \ + (ADF_VF2PF_MIN_SMALL_MESSAGE_TYPE + 15) +#define ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE \ + (ADF_VF2PF_MAX_SMALL_MESSAGE_TYPE + 1) +#define ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE \ + (ADF_VF2PF_MIN_MEDIUM_MESSAGE_TYPE + 7) +#define ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE \ + (ADF_VF2PF_MAX_MEDIUM_MESSAGE_TYPE + 1) +#define ADF_VF2PF_MAX_LARGE_MESSAGE_TYPE \ + (ADF_VF2PF_MIN_LARGE_MESSAGE_TYPE + 3) +#define ADF_VF2PF_SMALL_PAYLOAD_SIZE 30 +#define ADF_VF2PF_MEDIUM_PAYLOAD_SIZE 62 +#define ADF_VF2PF_LARGE_PAYLOAD_SIZE 126 + +#define ADF_VF2PF_BLOCK_REQ_TYPE_SHIFT 0 +#define ADF_VF2PF_LARGE_BLOCK_REQ_TYPE_MASK 0x3 +#define ADF_VF2PF_MEDIUM_BLOCK_REQ_TYPE_MASK 0x7 +#define ADF_VF2PF_SMALL_BLOCK_REQ_TYPE_MASK 0xF + +#define ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_SHIFT 2 +#define ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_MASK 0x7F +#define ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_SHIFT 3 +#define ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_MASK 0x3F +#define ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_SHIFT 4 +#define ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_MASK 0x1F +#define ADF_VF2PF_BLOCK_REQ_CRC_SHIFT 9 + +/* PF-VF block message header bytes */ +#define ADF_VF2PF_BLOCK_VERSION_BYTE 0 +#define ADF_VF2PF_BLOCK_LEN_BYTE 1 +#define ADF_VF2PF_BLOCK_DATA 2 + +/* Block message types + * 0..15 - 32 byte message + * 16..23 - 64 byte message + * 24..27 - 128 byte message + * 2 - Get Capability Request message + */ +#define ADF_VF2PF_BLOCK_MSG_CAP_SUMMARY 0x2 +#define ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ 0x3 + +/* VF->PF Compatible Version Request - ADF_VF2PF_MSGTYPE_VERSION_REQ */ +#define ADF_VF2PF_COMPAT_VER_SHIFT 0 +#define ADF_VF2PF_COMPAT_VER_MASK 0xFF + +/* How long to wait for far side to acknowledge receipt */ +#define ADF_IOV_MSG_ACK_DELAY_US 5 +#define ADF_IOV_MSG_ACK_MAX_RETRY (100 * 1000 / ADF_IOV_MSG_ACK_DELAY_US) +/* If CSR is busy, how long to delay before retrying */ +#define ADF_IOV_MSG_RETRY_DELAY 5 +#define ADF_IOV_MSG_MAX_RETRIES 3 +/* How long to wait for a response from the other side */ +#define ADF_IOV_MSG_RESP_TIMEOUT 100 +/* How often to retry when there is no response */ +#define ADF_IOV_MSG_RESP_RETRIES 5 + +#define ADF_IOV_RATELIMIT_INTERVAL 8 +#define ADF_IOV_RATELIMIT_BURST 130 +/* PF VF message byte shift */ +#define ADF_PFVF_DATA_SHIFT 8 +#define ADF_PFVF_DATA_MASK 0xFF + +/* CRC Calculation */ +#define ADF_CRC8_INIT_VALUE 0xFF + +/* Per device register offsets */ +/* GEN 4 */ +#define ADF_4XXXIOV_PF2VM_OFFSET 0x1008 +#define ADF_4XXXIOV_VM2PF_OFFSET 0x100C + +#endif /* ADF_PF2VF_MSG_H_ */ diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 932d7110f7..5ee441171e 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -10,6 +10,17 @@ #include "adf_transport_access_macros.h" #include "qat_sym_pmd.h" #include "qat_comp_pmd.h" +#include "adf_pf2vf_msg.h" + +/* pf2vf data Gen 4 */ +struct qat_pf2vf_dev qat_pf2vf_gen4 = { + .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET, + .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET, + .pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT, + .pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK, + .pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT, + .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK, +}; /* Hardware device information per generation */ __extension__ @@ -33,7 +44,8 @@ struct qat_gen_hw_data qat_gen_config[] = { [QAT_GEN4] = { .dev_gen =
QAT_GEN4, .qp_hw_data = NULL, - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3 + .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3, + .pf2vf_dev = &qat_pf2vf_gen4 }, }; @@ -249,6 +261,14 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, return NULL; } + if (qat_dev->qat_dev_gen == QAT_GEN4) { + qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr; + if (qat_dev->misc_bar_io_addr == NULL) { + QAT_LOG(ERR, "QAT cannot get access to VF misc bar"); + return NULL; + } + } + if (devargs && devargs->drv_str) qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param); diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index f4fecc9517..05e164baa7 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -108,12 +108,24 @@ struct qat_pci_device { struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM] [QAT_GEN4_QPS_PER_BUNDLE_NUM]; /**< Data of ring configuration on gen4 */ + void *misc_bar_io_addr; + /**< Address of misc bar */ }; struct qat_gen_hw_data { enum qat_device_gen dev_gen; const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE]; enum qat_comp_num_im_buffers comp_num_im_bufs_required; + struct qat_pf2vf_dev *pf2vf_dev; +}; + +struct qat_pf2vf_dev { + uint32_t pf2vf_offset; + uint32_t vf2pf_offset; + int pf2vf_type_shift; + uint32_t pf2vf_type_mask; + int pf2vf_data_shift; + uint32_t pf2vf_data_mask; }; extern struct qat_gen_hw_data qat_gen_config[]; diff --git a/drivers/common/qat/qat_pf2vf.c b/drivers/common/qat/qat_pf2vf.c new file mode 100644 index 0000000000..6327311199 --- /dev/null +++ b/drivers/common/qat/qat_pf2vf.c @@ -0,0 +1,80 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_pf2vf.h" +#include "adf_pf2vf_msg.h" + +#include + +int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev, + struct qat_pf2vf_msg pf2vf_msg, + int len, uint8_t *ret) +{ + int i = 0; + struct qat_pf2vf_dev *qat_pf2vf = + qat_gen_config[qat_dev->qat_dev_gen].pf2vf_dev; + void *pmisc_bar_addr = qat_dev->misc_bar_io_addr; + uint32_t msg = 0, count = 0, val = 0; + uint32_t vf_csr_off = qat_pf2vf->vf2pf_offset; + uint32_t pf_csr_off = qat_pf2vf->pf2vf_offset; + int type_shift = qat_pf2vf->pf2vf_type_shift; + uint32_t type_mask = qat_pf2vf->pf2vf_type_mask; + int blck_hdr_shift = qat_pf2vf->pf2vf_data_shift; + int data_shift = blck_hdr_shift; + + switch (pf2vf_msg.msg_type) { + case ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ: + data_shift += ADF_VF2PF_SMALL_BLOCK_BYTE_NUM_SHIFT; + break; + case ADF_VF2PF_MSGTYPE_GET_MEDIUM_BLOCK_REQ: + data_shift += ADF_VF2PF_MEDIUM_BLOCK_BYTE_NUM_SHIFT; + break; + case ADF_VF2PF_MSGTYPE_GET_LARGE_BLOCK_REQ: + data_shift += ADF_VF2PF_LARGE_BLOCK_BYTE_NUM_SHIFT; + break; + } + + if ((pf2vf_msg.msg_type & type_mask) != pf2vf_msg.msg_type) { + QAT_LOG(ERR, "PF2VF message type 0x%X out of range\n", + pf2vf_msg.msg_type); + return -EINVAL; + } + + for (; i < len; i++) { + count = 0; + if (len == 1) { + msg = (pf2vf_msg.msg_type << type_shift) | + (pf2vf_msg.msg_data << (data_shift)); + } else + msg = (pf2vf_msg.msg_type << type_shift) | + ((pf2vf_msg.msg_data + i) << (data_shift)); + if (pf2vf_msg.block_hdr > 0) + msg |= pf2vf_msg.block_hdr << blck_hdr_shift; + msg |= ADF_PFVF_INT | ADF_PFVF_MSGORIGIN_SYSTEM; + + ADF_CSR_WR(pmisc_bar_addr, vf_csr_off, msg); + int us = 0; + /* + * Wait for confirmation from remote that it received + * the message + */ + do { + rte_delay_us_sleep(5); + us += 5; + val = ADF_CSR_RD(pmisc_bar_addr, vf_csr_off); + } 
while ((val & ADF_PFVF_INT) && + (++count < ADF_IOV_MSG_ACK_MAX_RETRY)); + + if (val & ADF_PFVF_INT) { + QAT_LOG(ERR, "ACK not received from remote\n"); + return -EIO; + } + + uint32_t pf_val = ADF_CSR_RD(pmisc_bar_addr, pf_csr_off); + + *(ret + i) = (uint8_t)(pf_val >> (pf2vf_msg.block_hdr > 0 ? + 10 : 8) & 0xff); + } + return 0; +} diff --git a/drivers/common/qat/qat_pf2vf.h b/drivers/common/qat/qat_pf2vf.h new file mode 100644 index 0000000000..df59277347 --- /dev/null +++ b/drivers/common/qat/qat_pf2vf.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_device.h" + +#ifndef QAT_PF2VF_H_ +#define QAT_PF2VF_H_ + +struct qat_pf2vf_msg { + uint32_t msg_data; + int block_hdr; + uint16_t msg_type; +}; + +int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev, + struct qat_pf2vf_msg pf2vf_msg, int len, uint8_t *ret); + +#endif From patchwork Mon Jun 28 16:34:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94919 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C5794A0A0C; Mon, 28 Jun 2021 18:36:03 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DD39C41194; Mon, 28 Jun 2021 18:35:00 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id B851541190 for ; Mon, 28 Jun 2021 18:34:59 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165950" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165950" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:34:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395695" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:34:58 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:30 +0100 Message-Id: <20210628163434.77741-13-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 12/16] common/qat: reset ring pairs before setting gen4 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit resets the ring pairs of a particular VF before setting up the PMD.
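For illustration, a minimal sketch of the 32-bit doorbell word that qat_pf2vf_exch_msg() writes to the VM2PF CSR for one such reset request, assuming the 2.x message layout macros from adf_pf2vf_msg.h above; the helper name build_rp_reset_msg() and its bundle parameter are hypothetical and not part of the patch:

static inline uint32_t build_rp_reset_msg(uint32_t bundle)
{
	uint32_t msg = 0;

	/* the 6-bit message type goes into bits 7:2 of the 2.x layout */
	msg |= (uint32_t)ADF_VF2PF_MSGTYPE_RP_RESET << ADF_PFVF_2X_MSGTYPE_SHIFT;
	/* the target bundle number travels as message data in bits 31:8 */
	msg |= (bundle & ADF_PFVF_2X_MSGDATA_MASK) << ADF_PFVF_2X_MSGDATA_SHIFT;
	/* raise the interrupt bit and mark the message as system-originated */
	msg |= ADF_PFVF_INT | ADF_PFVF_MSGORIGIN_SYSTEM;

	return msg;
}

The PF acknowledges receipt by clearing ADF_PFVF_INT, which is what the polling loop in qat_pf2vf_exch_msg() above waits for before reading the response CSR.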
Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/common/qat/qat_device.c | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 5ee441171e..e52d90fcd7 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -11,6 +11,7 @@ #include "qat_sym_pmd.h" #include "qat_comp_pmd.h" #include "adf_pf2vf_msg.h" +#include "qat_pf2vf.h" /* pf2vf data Gen 4 */ struct qat_pf2vf_dev qat_pf2vf_gen4 = { @@ -125,6 +126,28 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev) return qat_pci_get_named_dev(name); } +static int +qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev) +{ + int ret = 0, i; + uint8_t data[4]; + struct qat_pf2vf_msg pf2vf_msg; + + pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET; + pf2vf_msg.block_hdr = -1; + for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) { + pf2vf_msg.msg_data = i; + ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data); + if (ret) { + QAT_LOG(ERR, "QAT error when resetting bundle no %d", + i); + return ret; + } + } + + return 0; +} + static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param *qat_dev_cmd_param) { @@ -371,6 +394,15 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, if (qat_pci_dev == NULL) return -ENODEV; + if (qat_pci_dev->qat_dev_gen == QAT_GEN4) { + if (qat_gen4_reset_ring_pair(qat_pci_dev)) { + QAT_LOG(ERR, + "Cannot reset ring pairs, does the PF driver support pf2vf comms?" + ); + return -ENODEV; + } + } + sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param); if (sym_ret == 0) { num_pmds_created++; From patchwork Mon Jun 28 16:34:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94920 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 540AAA0A0C; Mon, 28 Jun 2021 18:36:10 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2A2534119B; Mon, 28 Jun 2021 18:35:03 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id E9B9B4117E for ; Mon, 28 Jun 2021 18:35:01 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165958" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165958" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:35:01 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395720" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:35:00 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Arek Kusztal Date: Mon, 28 Jun 2021 17:34:31 +0100 Message-Id: <20210628163434.77741-14-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 13/16] common/qat: add service discovery to qat gen4 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This commit adds service discovery to generation four of Intel QuickAssist Technology devices. Signed-off-by: Arek Kusztal Acked-by: Fan Zhang --- drivers/common/qat/qat_common.h | 8 ++++++ drivers/common/qat/qat_device.c | 20 ++++++++++++--- drivers/common/qat/qat_device.h | 3 +++ drivers/common/qat/qat_qp.c | 43 +++++++++++++++++++++++++-------- drivers/common/qat/qat_qp.h | 3 +-- 5 files changed, 62 insertions(+), 15 deletions(-) diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h index 845c8d99ab..23715085f4 100644 --- a/drivers/common/qat/qat_common.h +++ b/drivers/common/qat/qat_common.h @@ -29,6 +29,14 @@ enum qat_service_type { QAT_SERVICE_INVALID }; +enum qat_svc_list { + QAT_SVC_UNUSED = 0, + QAT_SVC_CRYPTO = 1, + QAT_SVC_COMPRESSION = 2, + QAT_SVC_SYM = 3, + QAT_SVC_ASYM = 4, +}; + #define QAT_MAX_SERVICES (QAT_SERVICE_INVALID) /**< Common struct for scatter-gather list operations */ diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index e52d90fcd7..1b967cbcf7 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -148,6 +148,22 @@ qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev) return 0; } +int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val) +{ + int ret = -(EINVAL); + struct qat_pf2vf_msg pf2vf_msg; + + if (qat_dev->qat_dev_gen == QAT_GEN4) { + pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ; + pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ; + pf2vf_msg.msg_data = 2; + ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val); + } + + return ret; +} + + static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param *qat_dev_cmd_param) { @@ -296,9 +312,7 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param); if (qat_dev->qat_dev_gen >= QAT_GEN4) { - int ret = qat_read_qp_config(qat_dev, qat_dev->qat_dev_gen); - - if (ret) { + if (qat_read_qp_config(qat_dev)) { QAT_LOG(ERR, "Cannot acquire ring configuration for QAT_%d", qat_dev_id); diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index 05e164baa7..228c057d1e 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -159,4 +159,7 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused, int qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused); +int +qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret); + #endif /* _QAT_DEVICE_H_ */ diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 8be59779f9..026ea5ee01 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -504,20 +504,43 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, } int -qat_read_qp_config(struct qat_pci_device *qat_dev, - enum qat_device_gen qat_dev_gen) +qat_read_qp_config(struct qat_pci_device *qat_dev) { + int i = 0; + enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; + if (qat_dev_gen == QAT_GEN4) { - /* Read default configuration, - * until some probe of it can be done - */ - int i = 0; + uint16_t svc = 0; + if (qat_query_svc(qat_dev, (uint8_t *)&svc)) + return -(EFAULT); for (; i < QAT_GEN4_BUNDLE_NUM; i++) { struct qat_qp_hw_data *hw_data = &qat_dev->qp_gen4_data[i][0]; - enum qat_service_type service_type = - (QAT_GEN4_QP_DEFCON >> (8 * i)) & 0xFF; + uint8_t svc1 = (svc >> 
(3 * i)) & 0x7; + enum qat_service_type service_type = QAT_SERVICE_INVALID; + + if (svc1 == QAT_SVC_SYM) { + service_type = QAT_SERVICE_SYMMETRIC; + QAT_LOG(DEBUG, + "Discovered SYMMETRIC service on bundle %d", + i); + } else if (svc1 == QAT_SVC_COMPRESSION) { + service_type = QAT_SERVICE_COMPRESSION; + QAT_LOG(DEBUG, + "Discovered COMPRESSION service on bundle %d", + i); + } else if (svc1 == QAT_SVC_ASYM) { + service_type = QAT_SERVICE_ASYMMETRIC; + QAT_LOG(DEBUG, + "Discovered ASYMMETRIC service on bundle %d", + i); + } else { + QAT_LOG(ERR, + "Unrecognized service on bundle %d", + i); + return -(EFAULT); + } memset(hw_data, 0, sizeof(*hw_data)); hw_data->service_type = service_type; @@ -534,9 +557,9 @@ qat_read_qp_config(struct qat_pci_device *qat_dev, hw_data->rx_ring_num = 1; hw_data->hw_bundle_num = i; } + return 0; } - /* With default config will always return success */ - return 0; + return -(EINVAL); } static int qat_qp_check_queue_alignment(uint64_t phys_addr, diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index 3d9a757349..e1627197fa 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -134,7 +134,6 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, enum qat_service_type service_type); int -qat_read_qp_config(struct qat_pci_device *qat_dev, - enum qat_device_gen qat_dev_gen); +qat_read_qp_config(struct qat_pci_device *qat_dev); #endif /* _QAT_QP_H_ */ From patchwork Mon Jun 28 16:34:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94921 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1872CA0A0C; Mon, 28 Jun 2021 18:36:18 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E4F3F41199; Mon, 28 Jun 2021 18:35:06 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 08F42411A6 for ; Mon, 28 Jun 2021 18:35:04 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165968" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165968" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:35:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395753" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:35:02 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com Date: Mon, 28 Jun 2021 17:34:32 +0100 Message-Id: <20210628163434.77741-15-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 14/16] crypto/qat: update raw dp api X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Fan Zhang This commit updates the QAT raw data-path API to support the changes made to the device and sessions.
The QAT RAW data-path API now works on Generation 1-3 devices. Signed-off-by: Fan Zhang Acked-by: Adam Dybkowski --- drivers/crypto/qat/qat_sym_hw_dp.c | 419 +++++++++++++++-------------- 1 file changed, 216 insertions(+), 203 deletions(-) diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c index 2f64de44a1..4305579b54 100644 --- a/drivers/crypto/qat/qat_sym_hw_dp.c +++ b/drivers/crypto/qat/qat_sym_hw_dp.c @@ -101,204 +101,6 @@ qat_sym_dp_fill_vec_status(int32_t *sta, int status, uint32_t n) #define QAT_SYM_DP_GET_MAX_ENQ(q, c, n) \ RTE_MIN((q->max_inflights - q->enqueued + q->dequeued - c), n) -static __rte_always_inline void -enqueue_one_aead_job(struct qat_sym_session *ctx, - struct icp_qat_fw_la_bulk_req *req, - struct rte_crypto_va_iova_ptr *iv, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *aad, - union rte_crypto_sym_ofs ofs, uint32_t data_len) -{ - struct icp_qat_fw_la_cipher_req_params *cipher_param = - (void *)&req->serv_specif_rqpars; - struct icp_qat_fw_la_auth_req_params *auth_param = - (void *)((uint8_t *)&req->serv_specif_rqpars + - ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); - uint8_t *aad_data; - uint8_t aad_ccm_real_len; - uint8_t aad_len_field_sz; - uint32_t msg_len_be; - rte_iova_t aad_iova = 0; - uint8_t q; - - switch (ctx->qat_hash_alg) { - case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: - case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: - ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( - req->comn_hdr.serv_specif_flags, - ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); - rte_memcpy(cipher_param->u.cipher_IV_array, iv->va, - ctx->cipher_iv.length); - aad_iova = aad->iova; - break; - case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: - aad_data = aad->va; - aad_iova = aad->iova; - aad_ccm_real_len = 0; - aad_len_field_sz = 0; - msg_len_be = rte_bswap32((uint32_t)data_len - - ofs.ofs.cipher.head); - - if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { - aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; - aad_ccm_real_len = ctx->aad_len - - ICP_QAT_HW_CCM_AAD_B0_LEN - - ICP_QAT_HW_CCM_AAD_LEN_INFO; - } else { - aad_data = iv->va; - aad_iova = iv->iova; - } - - q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length; - aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( - aad_len_field_sz, ctx->digest_length, q); - if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { - memcpy(aad_data + ctx->cipher_iv.length + - ICP_QAT_HW_CCM_NONCE_OFFSET + (q - - ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), - (uint8_t *)&msg_len_be, - ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); - } else { - memcpy(aad_data + ctx->cipher_iv.length + - ICP_QAT_HW_CCM_NONCE_OFFSET, - (uint8_t *)&msg_len_be + - (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE - - q), q); - } - - if (aad_len_field_sz > 0) { - *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] = - rte_bswap16(aad_ccm_real_len); - - if ((aad_ccm_real_len + aad_len_field_sz) - % ICP_QAT_HW_CCM_AAD_B0_LEN) { - uint8_t pad_len = 0; - uint8_t pad_idx = 0; - - pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - - ((aad_ccm_real_len + - aad_len_field_sz) % - ICP_QAT_HW_CCM_AAD_B0_LEN); - pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + - aad_ccm_real_len + - aad_len_field_sz; - memset(&aad_data[pad_idx], 0, pad_len); - } - } - - rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array) - + ICP_QAT_HW_CCM_NONCE_OFFSET, - (uint8_t *)iv->va + - ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length); - *(uint8_t *)&cipher_param->u.cipher_IV_array[0] = - q - ICP_QAT_HW_CCM_NONCE_OFFSET; - - rte_memcpy((uint8_t *)aad->va + - ICP_QAT_HW_CCM_NONCE_OFFSET, - (uint8_t *)iv->va + ICP_QAT_HW_CCM_NONCE_OFFSET, - 
ctx->cipher_iv.length); - break; - default: - break; - } - - cipher_param->cipher_offset = ofs.ofs.cipher.head; - cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - - ofs.ofs.cipher.tail; - auth_param->auth_off = ofs.ofs.cipher.head; - auth_param->auth_len = cipher_param->cipher_length; - auth_param->auth_res_addr = digest->iova; - auth_param->u1.aad_adr = aad_iova; - - if (ctx->is_single_pass) { - cipher_param->spc_aad_addr = aad_iova; - cipher_param->spc_auth_res_addr = digest->iova; - } -} - -static __rte_always_inline int -qat_sym_dp_enqueue_single_aead(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_vec *data, uint16_t n_data_vecs, - union rte_crypto_sym_ofs ofs, - struct rte_crypto_va_iova_ptr *iv, - struct rte_crypto_va_iova_ptr *digest, - struct rte_crypto_va_iova_ptr *aad, - void *user_data) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - uint32_t tail = dp_ctx->tail; - - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); - data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs); - if (unlikely(data_len < 0)) - return -1; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data; - - enqueue_one_aead_job(ctx, req, iv, digest, aad, ofs, - (uint32_t)data_len); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue++; - - return 0; -} - -static __rte_always_inline uint32_t -qat_sym_dp_enqueue_aead_jobs(void *qp_data, uint8_t *drv_ctx, - struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, - void *user_data[], int *status) -{ - struct qat_qp *qp = qp_data; - struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; - struct qat_queue *tx_queue = &qp->tx_q; - struct qat_sym_session *ctx = dp_ctx->session; - uint32_t i, n; - uint32_t tail; - struct icp_qat_fw_la_bulk_req *req; - int32_t data_len; - - n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); - if (unlikely(n == 0)) { - qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); - *status = 0; - return 0; - } - - tail = dp_ctx->tail; - - for (i = 0; i < n; i++) { - req = (struct icp_qat_fw_la_bulk_req *)( - (uint8_t *)tx_queue->base_addr + tail); - rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); - - data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec, - vec->sgl[i].num); - if (unlikely(data_len < 0)) - break; - req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i]; - enqueue_one_aead_job(ctx, req, &vec->iv[i], &vec->digest[i], - &vec->aad[i], ofs, (uint32_t)data_len); - tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; - } - - if (unlikely(i < n)) - qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); - - dp_ctx->tail = tail; - dp_ctx->cached_enqueue += i; - *status = 0; - return i; -} - static __rte_always_inline void enqueue_one_cipher_job(struct qat_sym_session *ctx, struct icp_qat_fw_la_bulk_req *req, @@ -704,6 +506,207 @@ qat_sym_dp_enqueue_chain_jobs(void *qp_data, uint8_t *drv_ctx, return i; } +static __rte_always_inline void +enqueue_one_aead_job(struct qat_sym_session *ctx, + struct icp_qat_fw_la_bulk_req *req, + struct rte_crypto_va_iova_ptr *iv, + struct rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + union 
rte_crypto_sym_ofs ofs, uint32_t data_len) +{ + struct icp_qat_fw_la_cipher_req_params *cipher_param = + (void *)&req->serv_specif_rqpars; + struct icp_qat_fw_la_auth_req_params *auth_param = + (void *)((uint8_t *)&req->serv_specif_rqpars + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + uint8_t *aad_data; + uint8_t aad_ccm_real_len; + uint8_t aad_len_field_sz; + uint32_t msg_len_be; + rte_iova_t aad_iova = 0; + uint8_t q; + + /* CPM 1.7 uses single pass to treat AEAD as cipher operation */ + if (ctx->is_single_pass) { + enqueue_one_cipher_job(ctx, req, iv, ofs, data_len); + cipher_param->spc_aad_addr = aad->iova; + cipher_param->spc_auth_res_addr = digest->iova; + return; + } + + switch (ctx->qat_hash_alg) { + case ICP_QAT_HW_AUTH_ALGO_GALOIS_128: + case ICP_QAT_HW_AUTH_ALGO_GALOIS_64: + ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET( + req->comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS); + rte_memcpy(cipher_param->u.cipher_IV_array, iv->va, + ctx->cipher_iv.length); + aad_iova = aad->iova; + break; + case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC: + aad_data = aad->va; + aad_iova = aad->iova; + aad_ccm_real_len = 0; + aad_len_field_sz = 0; + msg_len_be = rte_bswap32((uint32_t)data_len - + ofs.ofs.cipher.head); + + if (ctx->aad_len > ICP_QAT_HW_CCM_AAD_DATA_OFFSET) { + aad_len_field_sz = ICP_QAT_HW_CCM_AAD_LEN_INFO; + aad_ccm_real_len = ctx->aad_len - + ICP_QAT_HW_CCM_AAD_B0_LEN - + ICP_QAT_HW_CCM_AAD_LEN_INFO; + } else { + aad_data = iv->va; + aad_iova = iv->iova; + } + + q = ICP_QAT_HW_CCM_NQ_CONST - ctx->cipher_iv.length; + aad_data[0] = ICP_QAT_HW_CCM_BUILD_B0_FLAGS( + aad_len_field_sz, ctx->digest_length, q); + if (q > ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE) { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET + (q - + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE), + (uint8_t *)&msg_len_be, + ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE); + } else { + memcpy(aad_data + ctx->cipher_iv.length + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)&msg_len_be + + (ICP_QAT_HW_CCM_MSG_LEN_MAX_FIELD_SIZE + - q), q); + } + + if (aad_len_field_sz > 0) { + *(uint16_t *)&aad_data[ICP_QAT_HW_CCM_AAD_B0_LEN] = + rte_bswap16(aad_ccm_real_len); + + if ((aad_ccm_real_len + aad_len_field_sz) + % ICP_QAT_HW_CCM_AAD_B0_LEN) { + uint8_t pad_len = 0; + uint8_t pad_idx = 0; + + pad_len = ICP_QAT_HW_CCM_AAD_B0_LEN - + ((aad_ccm_real_len + + aad_len_field_sz) % + ICP_QAT_HW_CCM_AAD_B0_LEN); + pad_idx = ICP_QAT_HW_CCM_AAD_B0_LEN + + aad_ccm_real_len + + aad_len_field_sz; + memset(&aad_data[pad_idx], 0, pad_len); + } + } + + rte_memcpy(((uint8_t *)cipher_param->u.cipher_IV_array) + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv->va + + ICP_QAT_HW_CCM_NONCE_OFFSET, ctx->cipher_iv.length); + *(uint8_t *)&cipher_param->u.cipher_IV_array[0] = + q - ICP_QAT_HW_CCM_NONCE_OFFSET; + + rte_memcpy((uint8_t *)aad->va + + ICP_QAT_HW_CCM_NONCE_OFFSET, + (uint8_t *)iv->va + ICP_QAT_HW_CCM_NONCE_OFFSET, + ctx->cipher_iv.length); + break; + default: + break; + } + + cipher_param->cipher_offset = ofs.ofs.cipher.head; + cipher_param->cipher_length = data_len - ofs.ofs.cipher.head - + ofs.ofs.cipher.tail; + auth_param->auth_off = ofs.ofs.cipher.head; + auth_param->auth_len = cipher_param->cipher_length; + auth_param->auth_res_addr = digest->iova; + auth_param->u1.aad_adr = aad_iova; +} + +static __rte_always_inline int +qat_sym_dp_enqueue_single_aead(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_vec *data, uint16_t n_data_vecs, + union rte_crypto_sym_ofs ofs, + struct rte_crypto_va_iova_ptr *iv, + struct 
rte_crypto_va_iova_ptr *digest, + struct rte_crypto_va_iova_ptr *aad, + void *user_data) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + uint32_t tail = dp_ctx->tail; + + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + rte_prefetch0((uint8_t *)tx_queue->base_addr + tail); + data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs); + if (unlikely(data_len < 0)) + return -1; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data; + + enqueue_one_aead_job(ctx, req, iv, digest, aad, ofs, + (uint32_t)data_len); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue++; + + return 0; +} + +static __rte_always_inline uint32_t +qat_sym_dp_enqueue_aead_jobs(void *qp_data, uint8_t *drv_ctx, + struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs, + void *user_data[], int *status) +{ + struct qat_qp *qp = qp_data; + struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx; + struct qat_queue *tx_queue = &qp->tx_q; + struct qat_sym_session *ctx = dp_ctx->session; + uint32_t i, n; + uint32_t tail; + struct icp_qat_fw_la_bulk_req *req; + int32_t data_len; + + n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num); + if (unlikely(n == 0)) { + qat_sym_dp_fill_vec_status(vec->status, -1, vec->num); + *status = 0; + return 0; + } + + tail = dp_ctx->tail; + + for (i = 0; i < n; i++) { + req = (struct icp_qat_fw_la_bulk_req *)( + (uint8_t *)tx_queue->base_addr + tail); + rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req)); + + data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec, + vec->sgl[i].num); + if (unlikely(data_len < 0)) + break; + req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i]; + enqueue_one_aead_job(ctx, req, &vec->iv[i], &vec->digest[i], + &vec->aad[i], ofs, (uint32_t)data_len); + tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask; + } + + if (unlikely(i < n)) + qat_sym_dp_fill_vec_status(vec->status + i, -1, n - i); + + dp_ctx->tail = tail; + dp_ctx->cached_enqueue += i; + *status = 0; + return i; +} + static __rte_always_inline uint32_t qat_sym_dp_dequeue_burst(void *qp_data, uint8_t *drv_ctx, rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count, @@ -937,8 +940,9 @@ qat_sym_configure_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id, raw_dp_ctx->dequeue = qat_sym_dp_dequeue; raw_dp_ctx->dequeue_done = qat_sym_dp_update_head; - if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || - ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) { + if ((ctx->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER || + ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) && + !ctx->is_gmac) { /* AES-GCM or AES-CCM */ if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 || ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64 || @@ -954,12 +958,21 @@ qat_sym_configure_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id, qat_sym_dp_enqueue_chain_jobs; raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_chain; } - } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) { + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH || ctx->is_gmac) { raw_dp_ctx->enqueue_burst = qat_sym_dp_enqueue_auth_jobs; raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_auth; } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { - raw_dp_ctx->enqueue_burst = 
qat_sym_dp_enqueue_cipher_jobs; - raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_cipher; + if (ctx->qat_mode == ICP_QAT_HW_CIPHER_AEAD_MODE || + ctx->qat_cipher_alg == + ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305) { + raw_dp_ctx->enqueue_burst = + qat_sym_dp_enqueue_aead_jobs; + raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_aead; + } else { + raw_dp_ctx->enqueue_burst = + qat_sym_dp_enqueue_cipher_jobs; + raw_dp_ctx->enqueue = qat_sym_dp_enqueue_single_cipher; + } } else return -1; From patchwork Mon Jun 28 16:34:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94922 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 52FE4A0A0C; Mon, 28 Jun 2021 18:36:24 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 18A6F4117D; Mon, 28 Jun 2021 18:35:09 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 81AC0411A7 for ; Mon, 28 Jun 2021 18:35:07 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165973" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165973" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:35:07 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395798" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:35:05 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Adam Dybkowski Date: Mon, 28 Jun 2021 17:34:33 +0100 Message-Id: <20210628163434.77741-16-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 15/16] crypto/qat: enable RAW API on QAT GEN1-3 only X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Adam Dybkowski This patch enables the RAW API feature flag on QAT generations 1 to 3 only, and disables it for later generations.
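For illustration, an application can gate its use of the raw data-path API on this flag; the helper below is a hypothetical condensation of the check that the next patch in the series adds to the cryptodev unit tests:

static int qat_raw_dp_supported(uint8_t dev_id)
{
	struct rte_cryptodev_info dev_info;

	rte_cryptodev_info_get(dev_id, &dev_info);
	/* GEN4 QAT devices no longer advertise RTE_CRYPTODEV_FF_SYM_RAW_DP */
	return (dev_info.feature_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP) != 0;
}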
Signed-off-by: Adam Dybkowski Acked-by: Fan Zhang --- drivers/crypto/qat/qat_sym_pmd.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index 0097ee210f..1c7b142511 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -409,8 +409,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | - RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED | - RTE_CRYPTODEV_FF_SYM_RAW_DP; + RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED; + + if (qat_pci_dev->qat_dev_gen < QAT_GEN4) + cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; From patchwork Mon Jun 28 16:34:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Arkadiusz Kusztal X-Patchwork-Id: 94923 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 80E94A0A0C; Mon, 28 Jun 2021 18:36:30 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 72936411A9; Mon, 28 Jun 2021 18:35:12 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 84D14411AF for ; Mon, 28 Jun 2021 18:35:10 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10029"; a="206165979" X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="206165979" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Jun 2021 09:35:10 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.83,306,1616482800"; d="scan'208";a="456395829" Received: from silpixa00399302.ir.intel.com ([10.237.214.136]) by fmsmga008.fm.intel.com with ESMTP; 28 Jun 2021 09:35:07 -0700 From: Arek Kusztal To: dev@dpdk.org Cc: gakhil@marvell.com, fiona.trahe@intel.com, roy.fan.zhang@intel.com, Adam Dybkowski Date: Mon, 28 Jun 2021 17:34:34 +0100 Message-Id: <20210628163434.77741-17-arkadiuszx.kusztal@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> References: <20210628163434.77741-1-arkadiuszx.kusztal@intel.com> Subject: [dpdk-dev] [PATCH v2 16/16] test/crypto: check if RAW API is supported X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Adam Dybkowski This patch adds a check at the start of the test command "cryptodev_qat_raw_api_autotest" that skips the suite when the RAW API is not supported.
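As a usage note, the guarded suite is still invoked from the dpdk-test application; on a device without the flag (e.g. a GEN4 QAT VF after the previous patch) it now reports the tests as skipped instead of failing. The binary path below depends on the local build directory, and the PCI address is a placeholder:

$ ./build/app/test/dpdk-test -a <qat_vf_pci_addr>
RTE>> cryptodev_qat_raw_api_autotest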
Signed-off-by: Adam Dybkowski Acked-by: Fan Zhang --- app/test/test_cryptodev.c | 34 +++++++++++++++++++++++++++++++++- 1 file changed, 33 insertions(+), 1 deletion(-) diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 39db52b17a..64b6cc0db7 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -14769,7 +14769,39 @@ test_cryptodev_bcmfs(void) static int test_cryptodev_qat_raw_api(void /*argv __rte_unused, int argc __rte_unused*/) { - int ret; + static const char *pmd_name = RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD); + struct rte_cryptodev_info dev_info; + uint8_t i, nb_devs, found = 0; + int driver_id, ret; + + driver_id = rte_cryptodev_driver_id_get(pmd_name); + if (driver_id == -1) { + RTE_LOG(WARNING, USER1, "%s PMD must be loaded.\n", pmd_name); + return TEST_SKIPPED; + } + + nb_devs = rte_cryptodev_count(); + if (nb_devs < 1) { + RTE_LOG(WARNING, USER1, "No crypto devices found?\n"); + return TEST_SKIPPED; + } + + for (i = 0; i < nb_devs; i++) { + rte_cryptodev_info_get(i, &dev_info); + if (dev_info.driver_id == driver_id) { + if (!(dev_info.feature_flags & + RTE_CRYPTODEV_FF_SYM_RAW_DP)) { + RTE_LOG(INFO, USER1, "RAW API not supported\n"); + return TEST_SKIPPED; + } + found = 1; + break; + } + } + if (!found) { + RTE_LOG(INFO, USER1, "RAW API not supported\n"); + return TEST_SKIPPED; + } global_api_test_type = CRYPTODEV_RAW_API_TEST; ret = run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));