From patchwork Fri Oct 22 17:03:46 2021
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 102684
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: gakhil@marvell.com, Fan Zhang, Arek Kusztal, Kai Ji
Date: Fri, 22 Oct 2021 18:03:46 +0100
Message-Id: <20211022170354.13503-2-roy.fan.zhang@intel.com>
In-Reply-To: <20211022170354.13503-1-roy.fan.zhang@intel.com>
References: <20211014161137.1405168-1-roy.fan.zhang@intel.com>
 <20211022170354.13503-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function

This patch adds the data structure and function prototypes for
different QAT generations.

Signed-off-by: Arek Kusztal
Signed-off-by: Fan Zhang
Signed-off-by: Kai Ji
Acked-by: Ciara Power
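[Editor's note] Taken together, the first patches of this series introduce a
table-of-function-pointers pattern: each generation registers its ops table
from a constructor, and common code dispatches through a global array indexed
by generation, failing with -ENOTSUP when an op is absent. Below is a minimal,
self-contained sketch of that shape; the names (dev_ops, dev_ops_tbl,
get_extra_size) are illustrative stand-ins, not the driver's symbols, and a
GCC constructor stands in for DPDK's RTE_INIT().

/*
 * Sketch of the per-generation ops-table pattern used by this series.
 * Simplified stand-in names; compile with gcc or clang.
 */
#include <stdio.h>
#include <stddef.h>

enum qat_gen { GEN1, GEN2, GEN3, GEN4, N_GENS };

struct dev_ops {
	int (*get_extra_size)(void);	/* size of per-gen private data */
};

/* zero-initialized; generations fill in their slot at startup */
static struct dev_ops *dev_ops_tbl[N_GENS];

static int gen1_extra_size(void) { return 0; }
static struct dev_ops gen1_ops = { .get_extra_size = gen1_extra_size };

/* stands in for RTE_INIT() constructor registration */
__attribute__((constructor))
static void gen1_register(void) { dev_ops_tbl[GEN1] = &gen1_ops; }

/* dispatch helper mirroring the RTE_FUNC_PTR_OR_ERR_RET() guard */
static int get_extra_size(enum qat_gen gen)
{
	struct dev_ops *ops = dev_ops_tbl[gen];

	if (ops == NULL || ops->get_extra_size == NULL)
		return -1;	/* -ENOTSUP in the driver */
	return ops->get_extra_size();
}

int main(void)
{
	printf("gen1 extra size: %d\n", get_extra_size(GEN1));
	printf("gen4 (unregistered here): %d\n", get_extra_size(GEN4));
	return 0;
}

The payoff of this design, visible in the diffs below, is that generations
with identical behaviour (gen2, gen3) can simply point their table entries at
the shared gen1 functions instead of duplicating code.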
X-Patchwork-Id: 102685
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: gakhil@marvell.com, Fan Zhang, Arek Kusztal, Kai Ji
Date: Fri, 22 Oct 2021 18:03:47 +0100
Message-Id: <20211022170354.13503-3-roy.fan.zhang@intel.com>
In-Reply-To: <20211022170354.13503-1-roy.fan.zhang@intel.com>
References: <20211014161137.1405168-1-roy.fan.zhang@intel.com>
 <20211022170354.13503-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device implementation

This patch replaces the mixed QAT device configuration implementation
with separate files containing shared or generation-specific
implementations for each QAT generation.

Signed-off-by: Arek Kusztal
Signed-off-by: Fan Zhang
Signed-off-by: Kai Ji
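[Editor's note] Part of this patch is qat_dev_read_config_gen4() (in the diff
that follows), which asks the PF for a 16-bit ring-to-service map and decodes
one 3-bit service code per bundle with (svc >> (3 * i)) & 0x7. A small worked
example of that decoding; the service code values and the map value here are
made up for illustration (the real codes come from the ADF headers):

/*
 * Worked example of the GEN4 ring-to-service map decode:
 * 3 bits per bundle, bundle 0 in the lowest bits.
 */
#include <stdio.h>
#include <stdint.h>

#define BUNDLE_NUM 4	/* QAT_GEN4_BUNDLE_NUM in the patch */

/* illustrative codes only; not the real ADF constants */
enum { SVC_SYM = 1, SVC_ASYM = 2, SVC_COMP = 3 };

static const char *svc_name(uint8_t s)
{
	switch (s) {
	case SVC_SYM:	return "symmetric";
	case SVC_ASYM:	return "asymmetric";
	case SVC_COMP:	return "compression";
	default:	return "invalid";
	}
}

int main(void)
{
	/* made-up map: sym, asym, comp, sym on bundles 0..3 */
	uint16_t svc = (SVC_SYM << 0) | (SVC_ASYM << 3) |
		       (SVC_COMP << 6) | (SVC_SYM << 9);
	int i;

	for (i = 0; i < BUNDLE_NUM; i++) {
		uint8_t hw = (svc >> (3 * i)) & 0x7;
		printf("bundle %d -> %s\n", i, svc_name(hw));
	}
	return 0;
}

An unrecognized 3-bit code maps to QAT_SERVICE_INVALID in the patch and aborts
configuration with -ENOTSUP, since a bundle with an unknown service cannot be
safely programmed.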
---
 drivers/common/qat/dev/qat_dev_gen1.c |  66 +++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 205 +++++++++++---------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 391 insertions(+), 124 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..d9e75fe9e2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT 0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pairs reset not supported on base generations, continue
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations do not have configuration,
+	 * but set this pointer anyway so that we can
+	 * distinguish it from a higher generation's pointer
+	 * faultily set to NULL
+	 */
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
+			QAT_NUM_INTERM_BUFS_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git
a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c new file mode 100644 index 0000000000..e4a66869d2 --- /dev/null +++ b/drivers/common/qat/dev/qat_dev_gen3.c @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_device.h" +#include "adf_transport_access_macros.h" +#include "qat_dev_gens.h" + +#include + +static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = { + .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1, + .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, + .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1, + .qat_dev_read_config = qat_dev_read_config_gen1, + .qat_dev_get_extra_size = qat_dev_get_extra_size_gen1, +}; + +RTE_INIT(qat_dev_gen_gen3_init) +{ + qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3; + qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3; +} diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c new file mode 100644 index 0000000000..5e5423ebfa --- /dev/null +++ b/drivers/common/qat/dev/qat_dev_gen4.c @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include + +#include "qat_device.h" +#include "qat_qp.h" +#include "adf_transport_access_macros_gen4vf.h" +#include "adf_pf2vf_msg.h" +#include "qat_pf2vf.h" +#include "qat_dev_gens.h" + +#include + +struct qat_dev_gen4_extra { + struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM] + [QAT_GEN4_QPS_PER_BUNDLE_NUM]; +}; + +static struct qat_pf2vf_dev qat_pf2vf_gen4 = { + .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET, + .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET, + .pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT, + .pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK, + .pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT, + .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK, +}; + +int +qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val) +{ + struct qat_pf2vf_msg pf2vf_msg; + + pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ; + pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ; + pf2vf_msg.msg_data = 2; + return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val); +} + +static enum qat_service_type +gen4_pick_service(uint8_t hw_service) +{ + switch (hw_service) { + case QAT_SVC_SYM: + return QAT_SERVICE_SYMMETRIC; + case QAT_SVC_COMPRESSION: + return QAT_SERVICE_COMPRESSION; + case QAT_SVC_ASYM: + return QAT_SERVICE_ASYMMETRIC; + default: + return QAT_SERVICE_INVALID; + } +} + +static int +qat_dev_read_config_gen4(struct qat_pci_device *qat_dev) +{ + int i = 0; + uint16_t svc = 0; + struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private; + struct qat_qp_hw_data *hw_data; + enum qat_service_type service_type; + uint8_t hw_service; + + if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc)) + return -EFAULT; + for (; i < QAT_GEN4_BUNDLE_NUM; i++) { + hw_service = (svc >> (3 * i)) & 0x7; + service_type = gen4_pick_service(hw_service); + if (service_type == QAT_SERVICE_INVALID) { + QAT_LOG(ERR, + "Unrecognized service on bundle %d", + i); + return -ENOTSUP; + } + hw_data = &dev_extra->qp_gen4_data[i][0]; + memset(hw_data, 0, sizeof(*hw_data)); + hw_data->service_type = service_type; + if (service_type == QAT_SERVICE_ASYMMETRIC) { + hw_data->tx_msg_size = 64; + hw_data->rx_msg_size = 32; + } else if (service_type == QAT_SERVICE_SYMMETRIC || + service_type == + QAT_SERVICE_COMPRESSION) { + hw_data->tx_msg_size = 128; + hw_data->rx_msg_size = 32; + } + hw_data->tx_ring_num = 0; + hw_data->rx_ring_num = 1; + 
hw_data->hw_bundle_num = i; + } + return 0; +} + +static int +qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev) +{ + int ret = 0, i; + uint8_t data[4]; + struct qat_pf2vf_msg pf2vf_msg; + + pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET; + pf2vf_msg.block_hdr = -1; + for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) { + pf2vf_msg.msg_data = i; + ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data); + if (ret) { + QAT_LOG(ERR, "QAT error when reset bundle no %d", + i); + return ret; + } + } + + return 0; +} + +static const struct +rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev) +{ + return &pci_dev->mem_resource[0]; +} + +static int +qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource, + struct rte_pci_device *pci_dev) +{ + *mem_resource = &pci_dev->mem_resource[2]; + return 0; +} + +static int +qat_dev_get_extra_size_gen4(void) +{ + return sizeof(struct qat_dev_gen4_extra); +} + +static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = { + .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4, + .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4, + .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4, + .qat_dev_read_config = qat_dev_read_config_gen4, + .qat_dev_get_extra_size = qat_dev_get_extra_size_gen4, +}; + +RTE_INIT(qat_dev_gen_4_init) +{ + qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4; + qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4; + qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4; +} diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h new file mode 100644 index 0000000000..4ad0ffa728 --- /dev/null +++ b/drivers/common/qat/dev/qat_dev_gens.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef _QAT_DEV_GENS_H_ +#define _QAT_DEV_GENS_H_ + +#include "qat_device.h" +#include "qat_qp.h" + +#include + +extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES] + [ADF_MAX_QPS_ON_ANY_SERVICE]; + +int +qat_dev_get_extra_size_gen1(void); + +int +qat_reset_ring_pairs_gen1( + struct qat_pci_device *qat_pci_dev); +const struct +rte_mem_resource *qat_dev_get_transport_bar_gen1( + struct rte_pci_device *pci_dev); +int +qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource, + struct rte_pci_device *pci_dev); +int +qat_dev_read_config_gen1(struct qat_pci_device *qat_dev); + +int +qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val); + +#endif diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 053c219fed..532e0fabb3 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -50,6 +50,10 @@ sources += files( 'qat_device.c', 'qat_logs.c', 'qat_pf2vf.c', + 'dev/qat_dev_gen1.c', + 'dev/qat_dev_gen2.c', + 'dev/qat_dev_gen3.c', + 'dev/qat_dev_gen4.c' ) includes += include_directories( 'qat_adf', diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index e6b43c541f..437996f2e8 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -17,43 +17,6 @@ struct qat_gen_hw_data qat_gen_config[QAT_N_GENS]; struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS]; -/* pv2vf data Gen 4*/ -struct qat_pf2vf_dev qat_pf2vf_gen4 = { - .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET, - .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET, - .pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT, - .pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK, - .pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT, - .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK, 
-}; - -/* Hardware device information per generation */ -__extension__ -struct qat_gen_hw_data qat_gen_config[] = { - [QAT_GEN1] = { - .dev_gen = QAT_GEN1, - .qp_hw_data = qat_gen1_qps, - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1 - }, - [QAT_GEN2] = { - .dev_gen = QAT_GEN2, - .qp_hw_data = qat_gen1_qps, - /* gen2 has same ring layout as gen1 */ - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2 - }, - [QAT_GEN3] = { - .dev_gen = QAT_GEN3, - .qp_hw_data = qat_gen3_qps, - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3 - }, - [QAT_GEN4] = { - .dev_gen = QAT_GEN4, - .qp_hw_data = NULL, - .comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3, - .pf2vf_dev = &qat_pf2vf_gen4 - }, -}; - /* per-process array of device data */ struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES]; static int qat_nb_pci_devices; @@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = { {.device_id = 0}, }; +static int +qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen) +{ + struct qat_dev_hw_spec_funcs *ops_hw = + qat_dev_hw_spec[qat_dev_gen]; + RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size, + -ENOTSUP); + return ops_hw->qat_dev_get_extra_size(); +} + static struct qat_pci_device * qat_pci_get_named_dev(const char *name) { @@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev) return qat_pci_get_named_dev(name); } -static int -qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev) -{ - int ret = 0, i; - uint8_t data[4]; - struct qat_pf2vf_msg pf2vf_msg; - - pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET; - pf2vf_msg.block_hdr = -1; - for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) { - pf2vf_msg.msg_data = i; - ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data); - if (ret) { - QAT_LOG(ERR, "QAT error when reset bundle no %d", - i); - return ret; - } - } - - return 0; -} - -int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val) -{ - int ret = -(EINVAL); - struct qat_pf2vf_msg pf2vf_msg; - - if (qat_dev->qat_dev_gen == QAT_GEN4) { - pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ; - pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ; - pf2vf_msg.msg_data = 2; - ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val); - } - - return ret; -} - - -static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param +static void +qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param *qat_dev_cmd_param) { int i = 0; @@ -230,13 +166,39 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev, struct qat_dev_cmd_param *qat_dev_cmd_param) { struct qat_pci_device *qat_dev; + enum qat_device_gen qat_dev_gen; uint8_t qat_dev_id = 0; char name[QAT_DEV_NAME_MAX_LEN]; struct rte_devargs *devargs = pci_dev->device.devargs; + struct qat_dev_hw_spec_funcs *ops_hw; + struct rte_mem_resource *mem_resource; + const struct rte_memzone *qat_dev_mz; + int qat_dev_size, extra_size; rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat"); + switch (pci_dev->id.device_id) { + case 0x0443: + qat_dev_gen = QAT_GEN1; + break; + case 0x37c9: + case 0x19e3: + case 0x6f55: + case 0x18ef: + qat_dev_gen = QAT_GEN2; + break; + case 0x18a1: + qat_dev_gen = QAT_GEN3; + break; + case 0x4941: + qat_dev_gen = QAT_GEN4; + break; + default: + QAT_LOG(ERR, "Invalid dev_id, can't determine generation"); + return NULL; + } + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { const struct rte_memzone *mz = rte_memzone_lookup(name); @@ -267,63 +229,63 @@ 
qat_pci_device_allocate(struct rte_pci_device *pci_dev, return NULL; } - qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name, - sizeof(struct qat_pci_device), + extra_size = qat_pci_get_extra_size(qat_dev_gen); + if (extra_size < 0) { + QAT_LOG(ERR, "QAT internal error: no pci pointer for gen %d", + qat_dev_gen); + return NULL; + } + + qat_dev_size = sizeof(struct qat_pci_device) + extra_size; + qat_dev_mz = rte_memzone_reserve(name, qat_dev_size, rte_socket_id(), 0); - if (qat_pci_devs[qat_dev_id].mz == NULL) { + if (qat_dev_mz == NULL) { QAT_LOG(ERR, "Error when allocating memzone for QAT_%d", qat_dev_id); return NULL; } - qat_dev = qat_pci_devs[qat_dev_id].mz->addr; - memset(qat_dev, 0, sizeof(*qat_dev)); + qat_dev = qat_dev_mz->addr; + memset(qat_dev, 0, qat_dev_size); + qat_dev->dev_private = qat_dev + 1; strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN); qat_dev->qat_dev_id = qat_dev_id; qat_pci_devs[qat_dev_id].pci_dev = pci_dev; - switch (pci_dev->id.device_id) { - case 0x0443: - qat_dev->qat_dev_gen = QAT_GEN1; - break; - case 0x37c9: - case 0x19e3: - case 0x6f55: - case 0x18ef: - qat_dev->qat_dev_gen = QAT_GEN2; - break; - case 0x18a1: - qat_dev->qat_dev_gen = QAT_GEN3; - break; - case 0x4941: - qat_dev->qat_dev_gen = QAT_GEN4; - break; - default: - QAT_LOG(ERR, "Invalid dev_id, can't determine generation"); - rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz); + qat_dev->qat_dev_gen = qat_dev_gen; + + ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen]; + if (ops_hw->qat_dev_get_misc_bar == NULL) { + QAT_LOG(ERR, "qat_dev_get_misc_bar function pointer not set"); + rte_memzone_free(qat_dev_mz); return NULL; } - - if (qat_dev->qat_dev_gen == QAT_GEN4) { - qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr; - if (qat_dev->misc_bar_io_addr == NULL) { + if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) { + if (mem_resource->addr == NULL) { QAT_LOG(ERR, "QAT cannot get access to VF misc bar"); + rte_memzone_free(qat_dev_mz); return NULL; } - } + qat_dev->misc_bar_io_addr = mem_resource->addr; + } else + qat_dev->misc_bar_io_addr = NULL; if (devargs && devargs->drv_str) qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param); - if (qat_dev->qat_dev_gen >= QAT_GEN4) { - if (qat_read_qp_config(qat_dev)) { - QAT_LOG(ERR, - "Cannot acquire ring configuration for QAT_%d", - qat_dev_id); - return NULL; - } + if (qat_read_qp_config(qat_dev)) { + QAT_LOG(ERR, + "Cannot acquire ring configuration for QAT_%d", + qat_dev_id); + rte_memzone_free(qat_dev_mz); + return NULL; } + /* No errors when allocating, attach memzone with + * qat_dev to list of devices + */ + qat_pci_devs[qat_dev_id].mz = qat_dev_mz; + rte_spinlock_init(&qat_dev->arb_csr_lock); qat_nb_pci_devices++; @@ -396,6 +358,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, int sym_ret = 0, asym_ret = 0, comp_ret = 0; int num_pmds_created = 0; struct qat_pci_device *qat_pci_dev; + struct qat_dev_hw_spec_funcs *ops_hw; struct qat_dev_cmd_param qat_dev_cmd_param[] = { { SYM_ENQ_THRESHOLD_NAME, 0 }, { ASYM_ENQ_THRESHOLD_NAME, 0 }, @@ -412,13 +375,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, if (qat_pci_dev == NULL) return -ENODEV; - if (qat_pci_dev->qat_dev_gen == QAT_GEN4) { - if (qat_gen4_reset_ring_pair(qat_pci_dev)) { - QAT_LOG(ERR, - "Cannot reset ring pairs, does pf driver supports pf2vf comms?" 
-				);
-			return -ENODEV;
-		}
+	ops_hw = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops_hw->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does the PF driver support pf2vf comms?"
+			);
+		return -ENODEV;
 	}

 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +417,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }

-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;

diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..8b69206df5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Per generation specific information */
 };

 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);

-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"

 #define QAT_CQ_MAX_DEQ_RETRIES 10

@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;

-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =

From patchwork Fri Oct 22 17:03:48 2021
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 102686
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: gakhil@marvell.com, Fan Zhang
Date: Fri, 22 Oct 2021 18:03:48 +0100
Message-Id: <20211022170354.13503-4-roy.fan.zhang@intel.com>
In-Reply-To: <20211022170354.13503-1-roy.fan.zhang@intel.com>
References: <20211014161137.1405168-1-roy.fan.zhang@intel.com>
 <20211022170354.13503-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue pair function

This patch adds the queue pair data structure and function
prototypes for different QAT generations.

Signed-off-by: Fan Zhang
Acked-by: Ciara Power
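[Editor's note] The queue-pair ops this patch declares are implemented per
generation in the next patch; notably, a generation's csr_setup op chains the
smaller ring/arbiter helpers, which is what lets gen2 and gen3 reuse the gen1
helpers wholesale. A minimal sketch of that composition, with invented names
(qp_ops, csr_setup, and so on) rather than the driver's actual signatures:

/*
 * Sketch of a composed queue-pair ops table: "csr_setup" chains the
 * smaller helpers, so another generation can swap out any one step.
 */
#include <stdio.h>

struct qp { int tx_ring; int rx_ring; };

struct qp_ops {
	void (*build_ring_base)(struct qp *);
	void (*configure_queues)(struct qp *);
	void (*arb_enable)(struct qp *);
	void (*csr_setup)(const struct qp_ops *, struct qp *);
};

static void build_ring_base(struct qp *q)
{
	printf("program ring base for rings %d/%d\n", q->tx_ring, q->rx_ring);
}

static void configure_queues(struct qp *q)
{
	(void)q;
	printf("write ring config CSRs\n");
}

static void arb_enable(struct qp *q)
{
	(void)q;
	printf("enable arbitration on tx ring\n");
}

/* mirrors the shape of qat_qp_csr_setup_gen1(): chain the helpers */
static void csr_setup(const struct qp_ops *ops, struct qp *q)
{
	ops->build_ring_base(q);
	ops->configure_queues(q);
	ops->arb_enable(q);
}

static const struct qp_ops gen1_qp_ops = {
	.build_ring_base = build_ring_base,
	.configure_queues = configure_queues,
	.arb_enable = arb_enable,
	.csr_setup = csr_setup,
};

int main(void)
{
	struct qp q = { .tx_ring = 0, .rx_ring = 8 };

	gen1_qp_ops.csr_setup(&gen1_qp_ops, &q);
	return 0;
}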
---
 drivers/common/qat/qat_qp.c |   3 ++
 drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++------------
 2 files changed, 71 insertions(+), 35 deletions(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)

+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..726cd2ef61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -7,8 +7,6 @@
 #include "qat_common.h"
 #include "adf_transport_access_macros.h"

-struct qat_pci_device;
-
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */

@@ -24,37 +22,7 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM 4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM 1

-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_hw_data {
-	enum qat_service_type service_type;
-	uint8_t hw_bundle_num;
-	uint8_t tx_ring_num;
-	uint8_t rx_ring_num;
-	uint16_t tx_msg_size;
-	uint16_t rx_msg_size;
-};
-
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_config {
-	const struct qat_qp_hw_data *hw;
-	uint32_t nb_descriptors;
-	uint32_t cookie_size;
-	int socket_id;
-	const char *service_str;
-};
+struct qat_pci_device;

 /**
  * Structure associated with each queue.
@@ -96,8 +64,28 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;

-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_hw_data {
+	enum qat_service_type service_type;
+	uint8_t hw_bundle_num;
+	uint8_t tx_ring_num;
+	uint8_t rx_ring_num;
+	uint16_t tx_msg_size;
+	uint16_t rx_msg_size;
+};
+
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_config {
+	const struct qat_qp_hw_data *hw;
+	uint32_t nb_descriptors;
+	uint32_t cookie_size;
+	int socket_id;
+	const char *service_str;
+};

 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
@@ -136,4 +124,49 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);

+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 #endif /* _QAT_QP_H_ */

From patchwork Fri Oct 22 17:03:49 2021
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 102687
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: gakhil@marvell.com, Fan Zhang, Arek Kusztal, Kai Ji
Date: Fri, 22 Oct 2021 18:03:49 +0100
Message-Id: <20211022170354.13503-5-roy.fan.zhang@intel.com>
In-Reply-To: <20211022170354.13503-1-roy.fan.zhang@intel.com>
References: <20211014161137.1405168-1-roy.fan.zhang@intel.com>
 <20211022170354.13503-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue implementation

This patch replaces the mixed QAT queue pair configuration
implementation with separate files containing shared or
generation-specific implementations for each QAT generation.

Signed-off-by: Arek Kusztal
Signed-off-by: Fan Zhang
Signed-off-by: Kai Ji
Acked-by: Ciara Power
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 190 +++++
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 161 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  37 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   2 +
 drivers/common/qat/qat_device.h               |   3 -
 drivers/common/qat/qat_qp.c                   | 677 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  24 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 10 files changed, 723 insertions(+), 477 deletions(-)

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index d9e75fe9e2..cc63b55bd1 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

@@ -10,6 +11,194 @@

 #define ADF_ARB_REG_SLOT 0x1000

+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number,
queue_base); +} + +void +qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_ARB_REG_SLOT * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr, + arb_csr_offset); + value |= (0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +void +qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_ARB_REG_SLOT * txq->hw_bundle_number); + uint32_t value; + + rte_spinlock_lock(lock); + value = ADF_CSR_RD(base_addr, arb_csr_offset); + value &= ~(0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +void +qat_qp_adf_configure_queues_gen1(struct qat_qp *qp) +{ + uint32_t q_tx_config, q_resp_config; + struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; + + q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); + q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, + ADF_RING_NEAR_WATERMARK_512, + ADF_RING_NEAR_WATERMARK_0); + WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, + q_tx->hw_bundle_number, q_tx->hw_queue_number, + q_tx_config); + WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, + q_rx->hw_bundle_number, q_rx->hw_queue_number, + q_resp_config); +} + +void +qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q) +{ + WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number, + q->hw_queue_number, q->tail); +} + +void +qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q, + uint32_t new_head) +{ + WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number, + q->hw_queue_number, new_head); +} + +void +qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp) +{ + qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q); + qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q); + qat_qp_adf_configure_queues_gen1(qp); + qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr, + &qat_dev->arb_csr_lock); +} + +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = { + .qat_qp_rings_per_service = qat_qp_rings_per_service_gen1, + .qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen1, + .qat_qp_csr_setup = qat_qp_csr_setup_gen1, + .qat_qp_get_hw_data = qat_qp_get_hw_data_gen1, +}; + int qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused) { @@ -59,6 +248,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = { RTE_INIT(qat_dev_gen_gen1_init) { + qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1; qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1; qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1; qat_gen_config[QAT_GEN1].comp_num_im_bufs_required = diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c index d3470ed6b8..f077fe9eef 100644 --- a/drivers/common/qat/dev/qat_dev_gen2.c +++ b/drivers/common/qat/dev/qat_dev_gen2.c @@ -3,11 +3,24 @@ */ #include "qat_device.h" +#include "qat_qp.h" #include "adf_transport_access_macros.h" #include "qat_dev_gens.h" #include +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = { + 
.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1, + .qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen1, + .qat_qp_csr_setup = qat_qp_csr_setup_gen1, + .qat_qp_get_hw_data = qat_qp_get_hw_data_gen1, +}; + static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = { .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1, .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, @@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = { RTE_INIT(qat_dev_gen_gen2_init) { + qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2; qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2; qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2; } diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c index e4a66869d2..de3fa17fa9 100644 --- a/drivers/common/qat/dev/qat_dev_gen3.c +++ b/drivers/common/qat/dev/qat_dev_gen3.c @@ -3,11 +3,70 @@ */ #include "qat_device.h" +#include "qat_qp.h" #include "adf_transport_access_macros.h" #include "qat_dev_gens.h" #include +__extension__ +const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES] + [ADF_MAX_QPS_ON_ANY_SERVICE] = { + /* queue pairs which provide an asymmetric crypto service */ + [QAT_SERVICE_ASYMMETRIC] = { + { + .service_type = QAT_SERVICE_ASYMMETRIC, + .hw_bundle_num = 0, + .tx_ring_num = 0, + .rx_ring_num = 4, + .tx_msg_size = 64, + .rx_msg_size = 32, + } + }, + /* queue pairs which provide a symmetric crypto service */ + [QAT_SERVICE_SYMMETRIC] = { + { + .service_type = QAT_SERVICE_SYMMETRIC, + .hw_bundle_num = 0, + .tx_ring_num = 1, + .rx_ring_num = 5, + .tx_msg_size = 128, + .rx_msg_size = 32, + } + }, + /* queue pairs which provide a compression service */ + [QAT_SERVICE_COMPRESSION] = { + { + .service_type = QAT_SERVICE_COMPRESSION, + .hw_bundle_num = 0, + .tx_ring_num = 3, + .rx_ring_num = 7, + .tx_msg_size = 128, + .rx_msg_size = 32, + } + } +}; + + +static const struct qat_qp_hw_data * +qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused, + enum qat_service_type service_type, uint16_t qp_id) +{ + return qat_gen3_qps[service_type] + qp_id; +} + +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = { + .qat_qp_rings_per_service = qat_qp_rings_per_service_gen1, + .qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen1, + .qat_qp_csr_setup = qat_qp_csr_setup_gen1, + .qat_qp_get_hw_data = qat_qp_get_hw_data_gen3 +}; + static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = { .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1, .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1, @@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = { RTE_INIT(qat_dev_gen_gen3_init) { + qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3; qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3; qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3; } diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c index 5e5423ebfa..7ffde5f4c8 100644 --- 
a/drivers/common/qat/dev/qat_dev_gen4.c +++ b/drivers/common/qat/dev/qat_dev_gen4.c @@ -10,10 +10,13 @@ #include "adf_transport_access_macros_gen4vf.h" #include "adf_pf2vf_msg.h" #include "qat_pf2vf.h" -#include "qat_dev_gens.h" #include +/* QAT GEN 4 specific macros */ +#define QAT_GEN4_BUNDLE_NUM 4 +#define QAT_GEN4_QPS_PER_BUNDLE_NUM 1 + struct qat_dev_gen4_extra { struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM] [QAT_GEN4_QPS_PER_BUNDLE_NUM]; @@ -28,7 +31,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = { .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK, }; -int +static int qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val) { struct qat_pf2vf_msg pf2vf_msg; @@ -39,6 +42,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val) return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val); } +static int +qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id, + enum qat_service_type service_type) +{ + int i = 0, valid_qps = 0; + struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private; + + for (; i < QAT_GEN4_BUNDLE_NUM; i++) { + if (dev_extra->qp_gen4_data[i][0].service_type == + service_type) { + if (valid_qps == qp_id) + return i; + ++valid_qps; + } + } + return -1; +} + +static const struct qat_qp_hw_data * +qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev, + enum qat_service_type service_type, uint16_t qp_id) +{ + struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private; + int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id, + service_type); + + if (ring_pair < 0) + return NULL; + + return &dev_extra->qp_gen4_data[ring_pair][0]; +} + +static int +qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev, + enum qat_service_type service) +{ + int i = 0, count = 0, max_ops_per_srv = 0; + struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private; + + max_ops_per_srv = QAT_GEN4_BUNDLE_NUM; + for (i = 0, count = 0; i < max_ops_per_srv; i++) + if (dev_extra->qp_gen4_data[i][0].service_type == service) + count++; + return count; +} + static enum qat_service_type gen4_pick_service(uint8_t hw_service) { @@ -94,6 +143,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev) return 0; } +static void +qat_qp_build_ring_base_gen4(void *io_addr, + struct qat_queue *queue) +{ + uint64_t queue_base; + + queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr, + queue->queue_size); + WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number, + queue->hw_queue_number, queue_base); +} + +static void +qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_RING_BUNDLE_SIZE_GEN4 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, + arb_csr_offset); + value |= (0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +static void +qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_RING_BUNDLE_SIZE_GEN4 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, + arb_csr_offset); + value &= ~(0x01 << txq->hw_queue_number); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +static void +qat_qp_adf_configure_queues_gen4(struct qat_qp 
*qp) +{ + uint32_t q_tx_config, q_resp_config; + struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; + + q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); + q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, + ADF_RING_NEAR_WATERMARK_512, + ADF_RING_NEAR_WATERMARK_0); + + WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, + q_tx->hw_bundle_number, q_tx->hw_queue_number, + q_tx_config); + WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, + q_rx->hw_bundle_number, q_rx->hw_queue_number, + q_resp_config); +} + +static void +qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q) +{ + WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr, + q->hw_bundle_number, q->hw_queue_number, q->tail); +} + +static void +qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q, + uint32_t new_head) +{ + WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr, + q->hw_bundle_number, q->hw_queue_number, new_head); +} + +static void +qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp) +{ + qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q); + qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q); + qat_qp_adf_configure_queues_gen4(qp); + qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr, + &qat_dev->arb_csr_lock); +} + +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = { + .qat_qp_rings_per_service = qat_qp_rings_per_service_gen4, + .qat_qp_build_ring_base = qat_qp_build_ring_base_gen4, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen4, + .qat_qp_csr_setup = qat_qp_csr_setup_gen4, + .qat_qp_get_hw_data = qat_qp_get_hw_data_gen4, +}; + static int qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev) { @@ -116,8 +268,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev) return 0; } -static const struct -rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev) +static const struct rte_mem_resource * +qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev) { return &pci_dev->mem_resource[0]; } @@ -146,6 +298,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = { RTE_INIT(qat_dev_gen_4_init) { + qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4; qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4; qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4; qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4; diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h index 4ad0ffa728..7c92f1938c 100644 --- a/drivers/common/qat/dev/qat_dev_gens.h +++ b/drivers/common/qat/dev/qat_dev_gens.h @@ -16,6 +16,40 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES] int qat_dev_get_extra_size_gen1(void); +const struct qat_qp_hw_data * +qat_qp_get_hw_data_gen1(struct qat_pci_device *dev, + enum qat_service_type service_type, uint16_t qp_id); + +int +qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev, + enum qat_service_type service); + +void +qat_qp_csr_build_ring_base_gen1(void *io_addr, + struct qat_queue *queue); + +void +qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock); + +void +qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock); + +void +qat_qp_adf_configure_queues_gen1(struct qat_qp *qp); + +void +qat_qp_csr_write_tail_gen1(struct qat_qp *qp, 
struct qat_queue *q); + +void +qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q, + uint32_t new_head); + +void +qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp); + int qat_reset_ring_pairs_gen1( struct qat_pci_device *qat_pci_dev); @@ -28,7 +62,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource, int qat_dev_read_config_gen1(struct qat_pci_device *qat_dev); -int -qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val); - #endif diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h index 504ffb7236..f98bbb5001 100644 --- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h +++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h @@ -51,6 +51,8 @@ #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K +/* ARB CSR offset */ +#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C /* Maximum number of qps on a device for any service type */ #define ADF_MAX_QPS_ON_ANY_SERVICE 2 diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index 8b69206df5..8233cc045d 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -128,9 +128,6 @@ struct qat_pci_device { /* Data relating to compression service */ struct qat_comp_dev_private *comp_dev; /**< link back to compressdev private data */ - struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM] - [QAT_GEN4_QPS_PER_BUNDLE_NUM]; - /**< Data of ring configuration on gen4 */ void *misc_bar_io_addr; /**< Address of misc bar */ void *dev_private; diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 27994036b8..cde421eb77 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -18,124 +18,15 @@ #include "qat_sym.h" #include "qat_asym.h" #include "qat_comp.h" -#include "adf_transport_access_macros.h" -#include "adf_transport_access_macros_gen4vf.h" -#include "dev/qat_dev_gens.h" #define QAT_CQ_MAX_DEQ_RETRIES 10 #define ADF_MAX_DESC 4096 #define ADF_MIN_DESC 128 -#define ADF_ARB_REG_SLOT 0x1000 -#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C - -#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \ - ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \ - (ADF_ARB_REG_SLOT * index), value) - struct qat_qp_hw_spec_funcs* qat_qp_hw_spec[QAT_N_GENS]; -__extension__ -const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES] - [ADF_MAX_QPS_ON_ANY_SERVICE] = { - /* queue pairs which provide an asymmetric crypto service */ - [QAT_SERVICE_ASYMMETRIC] = { - { - .service_type = QAT_SERVICE_ASYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 0, - .rx_ring_num = 8, - .tx_msg_size = 64, - .rx_msg_size = 32, - - }, { - .service_type = QAT_SERVICE_ASYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 1, - .rx_ring_num = 9, - .tx_msg_size = 64, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a symmetric crypto service */ - [QAT_SERVICE_SYMMETRIC] = { - { - .service_type = QAT_SERVICE_SYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 2, - .rx_ring_num = 10, - .tx_msg_size = 128, - .rx_msg_size = 32, - }, - { - .service_type = QAT_SERVICE_SYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 3, - .rx_ring_num = 11, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a compression service */ - [QAT_SERVICE_COMPRESSION] = { - { - .service_type = QAT_SERVICE_COMPRESSION, - .hw_bundle_num = 0, - .tx_ring_num = 6, 
- .rx_ring_num = 14, - .tx_msg_size = 128, - .rx_msg_size = 32, - }, { - .service_type = QAT_SERVICE_COMPRESSION, - .hw_bundle_num = 0, - .tx_ring_num = 7, - .rx_ring_num = 15, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - } -}; - -__extension__ -const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES] - [ADF_MAX_QPS_ON_ANY_SERVICE] = { - /* queue pairs which provide an asymmetric crypto service */ - [QAT_SERVICE_ASYMMETRIC] = { - { - .service_type = QAT_SERVICE_ASYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 0, - .rx_ring_num = 4, - .tx_msg_size = 64, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a symmetric crypto service */ - [QAT_SERVICE_SYMMETRIC] = { - { - .service_type = QAT_SERVICE_SYMMETRIC, - .hw_bundle_num = 0, - .tx_ring_num = 1, - .rx_ring_num = 5, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - }, - /* queue pairs which provide a compression service */ - [QAT_SERVICE_COMPRESSION] = { - { - .service_type = QAT_SERVICE_COMPRESSION, - .hw_bundle_num = 0, - .tx_ring_num = 3, - .rx_ring_num = 7, - .tx_msg_size = 128, - .rx_msg_size = 32, - } - } -}; - static int qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes); static void qat_queue_delete(struct qat_queue *queue); @@ -143,77 +34,32 @@ static int qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, struct qat_qp_config *, uint8_t dir); static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, uint32_t *queue_size_for_csr); -static void adf_configure_queues(struct qat_qp *queue, +static int adf_configure_queues(struct qat_qp *queue, enum qat_device_gen qat_dev_gen); -static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, +static int adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock); -static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, +static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock); +static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_queue *queue); +static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name, + uint32_t queue_size, int socket_id); +static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr, + struct qat_qp *qp); -int qat_qps_per_service(struct qat_pci_device *qat_dev, - enum qat_service_type service) -{ - int i = 0, count = 0, max_ops_per_srv = 0; - - if (qat_dev->qat_dev_gen == QAT_GEN4) { - max_ops_per_srv = QAT_GEN4_BUNDLE_NUM; - for (i = 0, count = 0; i < max_ops_per_srv; i++) - if (qat_dev->qp_gen4_data[i][0].service_type == service) - count++; - } else { - const struct qat_qp_hw_data *sym_hw_qps = - qat_gen_config[qat_dev->qat_dev_gen] - .qp_hw_data[service]; - - max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE; - for (i = 0, count = 0; i < max_ops_per_srv; i++) - if (sym_hw_qps[i].service_type == service) - count++; - } - - return count; -} - -static const struct rte_memzone * -queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size, - int socket_id) -{ - const struct rte_memzone *mz; - - mz = rte_memzone_lookup(queue_name); - if (mz != 0) { - if (((size_t)queue_size <= mz->len) && - ((socket_id == SOCKET_ID_ANY) || - (socket_id == mz->socket_id))) { - QAT_LOG(DEBUG, "re-use memzone already " - "allocated for %s", queue_name); - return mz; - } - - QAT_LOG(ERR, "Incompatible memzone already " - "allocated %s, size %u, socket %d. 
" - "Requested size %u, socket %u", - queue_name, (uint32_t)mz->len, - mz->socket_id, queue_size, socket_id); - return NULL; - } - - QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u", - queue_name, queue_size, socket_id); - return rte_memzone_reserve_aligned(queue_name, queue_size, - socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size); -} - -int qat_qp_setup(struct qat_pci_device *qat_dev, +int +qat_qp_setup(struct qat_pci_device *qat_dev, struct qat_qp **qp_addr, uint16_t queue_pair_id, struct qat_qp_config *qat_qp_conf) { - struct qat_qp *qp; + struct qat_qp *qp = NULL; struct rte_pci_device *pci_dev = qat_pci_devs[qat_dev->qat_dev_id].pci_dev; char op_cookie_pool_name[RTE_RING_NAMESIZE]; - enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; + struct qat_dev_hw_spec_funcs *ops_hw = + qat_dev_hw_spec[qat_dev->qat_dev_gen]; + void *io_addr; uint32_t i; QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d", @@ -226,7 +72,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, return -EINVAL; } - if (pci_dev->mem_resource[0].addr == NULL) { + if (ops_hw->qat_dev_get_transport_bar == NULL) { + QAT_LOG(ERR, + "QAT Internal Error: qat_dev_get_transport_bar not set for gen %d", + qat_dev->qat_dev_gen); + goto create_err; + } + + io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr; + if (io_addr == NULL) { QAT_LOG(ERR, "Could not find VF config space " "(UIO driver attached?)."); return -EINVAL; @@ -250,7 +104,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, return -ENOMEM; } - qp->mmap_bar_addr = pci_dev->mem_resource[0].addr; + qp->mmap_bar_addr = io_addr; qp->enqueued = qp->dequeued = 0; if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf, @@ -277,10 +131,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, goto create_err; } - adf_configure_queues(qp, qat_dev_gen); - adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr, - &qat_dev->arb_csr_lock); - snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE, "%s%d_cookies_%s_qp%hu", pci_dev->driver->driver.name, qat_dev->qat_dev_id, @@ -298,6 +148,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, if (!qp->op_cookie_pool) { QAT_LOG(ERR, "QAT PMD Cannot create" " op mempool"); + qat_queue_delete(&(qp->tx_q)); + qat_queue_delete(&(qp->rx_q)); goto create_err; } @@ -316,91 +168,32 @@ int qat_qp_setup(struct qat_pci_device *qat_dev, QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s", queue_pair_id, op_cookie_pool_name); + qat_qp_csr_setup(qat_dev, io_addr, qp); + *qp_addr = qp; return 0; create_err: - if (qp->op_cookie_pool) - rte_mempool_free(qp->op_cookie_pool); - rte_free(qp->op_cookies); - rte_free(qp); - return -EFAULT; -} - - -int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr) -{ - struct qat_qp *qp = *qp_addr; - uint32_t i; - - if (qp == NULL) { - QAT_LOG(DEBUG, "qp already freed"); - return 0; - } + if (qp) { + if (qp->op_cookie_pool) + rte_mempool_free(qp->op_cookie_pool); - QAT_LOG(DEBUG, "Free qp on qat_pci device %d", - qp->qat_dev->qat_dev_id); - - /* Don't free memory if there are still responses to be processed */ - if ((qp->enqueued - qp->dequeued) == 0) { - qat_queue_delete(&(qp->tx_q)); - qat_queue_delete(&(qp->rx_q)); - } else { - return -EAGAIN; - } + if (qp->op_cookies) + rte_free(qp->op_cookies); - adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr, - &qp->qat_dev->arb_csr_lock); - - for (i = 0; i < qp->nb_descriptors; i++) - rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]); - - if (qp->op_cookie_pool) - 
rte_mempool_free(qp->op_cookie_pool); - - rte_free(qp->op_cookies); - rte_free(qp); - *qp_addr = NULL; - return 0; -} - - -static void qat_queue_delete(struct qat_queue *queue) -{ - const struct rte_memzone *mz; - int status = 0; - - if (queue == NULL) { - QAT_LOG(DEBUG, "Invalid queue"); - return; + rte_free(qp); } - QAT_LOG(DEBUG, "Free ring %d, memzone: %s", - queue->hw_queue_number, queue->memz_name); - mz = rte_memzone_lookup(queue->memz_name); - if (mz != NULL) { - /* Write an unused pattern to the queue memory. */ - memset(queue->base_addr, 0x7F, queue->queue_size); - status = rte_memzone_free(mz); - if (status != 0) - QAT_LOG(ERR, "Error %d on freeing queue %s", - status, queue->memz_name); - } else { - QAT_LOG(DEBUG, "queue %s doesn't exist", - queue->memz_name); - } + return -EFAULT; } static int qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, struct qat_qp_config *qp_conf, uint8_t dir) { - uint64_t queue_base; - void *io_addr; const struct rte_memzone *qp_mz; struct rte_pci_device *pci_dev = qat_pci_devs[qat_dev->qat_dev_id].pci_dev; - enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; int ret = 0; uint16_t desc_size = (dir == ADF_RING_DIR_TX ? qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size); @@ -460,19 +253,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, * Write an unused pattern to the queue memory. */ memset(queue->base_addr, 0x7F, queue_size_bytes); - io_addr = pci_dev->mem_resource[0].addr; - - if (qat_dev_gen == QAT_GEN4) { - queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr, - queue->queue_size); - WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number, - queue->hw_queue_number, queue_base); - } else { - queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr, - queue->queue_size); - WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number, - queue->hw_queue_number, queue_base); - } QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u," " nb msgs %u, msg_size %u, modulo mask %u", @@ -488,202 +268,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue, return ret; } -int -qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, - enum qat_service_type service_type) +static const struct rte_memzone * +queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size, + int socket_id) { - if (qat_dev->qat_dev_gen == QAT_GEN4) { - int i = 0, valid_qps = 0; - - for (; i < QAT_GEN4_BUNDLE_NUM; i++) { - if (qat_dev->qp_gen4_data[i][0].service_type == - service_type) { - if (valid_qps == qp_id) - return i; - ++valid_qps; - } + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(queue_name); + if (mz != 0) { + if (((size_t)queue_size <= mz->len) && + ((socket_id == SOCKET_ID_ANY) || + (socket_id == mz->socket_id))) { + QAT_LOG(DEBUG, "re-use memzone already " + "allocated for %s", queue_name); + return mz; } + + QAT_LOG(ERR, "Incompatible memzone already " + "allocated %s, size %u, socket %d. 
" + "Requested size %u, socket %u", + queue_name, (uint32_t)mz->len, + mz->socket_id, queue_size, socket_id); + return NULL; } - return -1; + + QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u", + queue_name, queue_size, socket_id); + return rte_memzone_reserve_aligned(queue_name, queue_size, + socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size); } int -qat_read_qp_config(struct qat_pci_device *qat_dev) +qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr) { - int i = 0; - enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; - - if (qat_dev_gen == QAT_GEN4) { - uint16_t svc = 0; - - if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc)) - return -(EFAULT); - for (; i < QAT_GEN4_BUNDLE_NUM; i++) { - struct qat_qp_hw_data *hw_data = - &qat_dev->qp_gen4_data[i][0]; - uint8_t svc1 = (svc >> (3 * i)) & 0x7; - enum qat_service_type service_type = QAT_SERVICE_INVALID; - - if (svc1 == QAT_SVC_SYM) { - service_type = QAT_SERVICE_SYMMETRIC; - QAT_LOG(DEBUG, - "Discovered SYMMETRIC service on bundle %d", - i); - } else if (svc1 == QAT_SVC_COMPRESSION) { - service_type = QAT_SERVICE_COMPRESSION; - QAT_LOG(DEBUG, - "Discovered COPRESSION service on bundle %d", - i); - } else if (svc1 == QAT_SVC_ASYM) { - service_type = QAT_SERVICE_ASYMMETRIC; - QAT_LOG(DEBUG, - "Discovered ASYMMETRIC service on bundle %d", - i); - } else { - QAT_LOG(ERR, - "Unrecognized service on bundle %d", - i); - return -(EFAULT); - } + int ret; + struct qat_qp *qp = *qp_addr; + uint32_t i; - memset(hw_data, 0, sizeof(*hw_data)); - hw_data->service_type = service_type; - if (service_type == QAT_SERVICE_ASYMMETRIC) { - hw_data->tx_msg_size = 64; - hw_data->rx_msg_size = 32; - } else if (service_type == QAT_SERVICE_SYMMETRIC || - service_type == - QAT_SERVICE_COMPRESSION) { - hw_data->tx_msg_size = 128; - hw_data->rx_msg_size = 32; - } - hw_data->tx_ring_num = 0; - hw_data->rx_ring_num = 1; - hw_data->hw_bundle_num = i; - } + if (qp == NULL) { + QAT_LOG(DEBUG, "qp already freed"); return 0; } - return -(EINVAL); + + QAT_LOG(DEBUG, "Free qp on qat_pci device %d", + qp->qat_dev->qat_dev_id); + + /* Don't free memory if there are still responses to be processed */ + if ((qp->enqueued - qp->dequeued) == 0) { + qat_queue_delete(&(qp->tx_q)); + qat_queue_delete(&(qp->rx_q)); + } else { + return -EAGAIN; + } + + ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), + qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock); + if (ret) + return ret; + + for (i = 0; i < qp->nb_descriptors; i++) + rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]); + + if (qp->op_cookie_pool) + rte_mempool_free(qp->op_cookie_pool); + + rte_free(qp->op_cookies); + rte_free(qp); + *qp_addr = NULL; + return 0; } -static int qat_qp_check_queue_alignment(uint64_t phys_addr, - uint32_t queue_size_bytes) + +static void +qat_queue_delete(struct qat_queue *queue) { - if (((queue_size_bytes - 1) & phys_addr) != 0) - return -EINVAL; + const struct rte_memzone *mz; + int status = 0; + + if (queue == NULL) { + QAT_LOG(DEBUG, "Invalid queue"); + return; + } + QAT_LOG(DEBUG, "Free ring %d, memzone: %s", + queue->hw_queue_number, queue->memz_name); + + mz = rte_memzone_lookup(queue->memz_name); + if (mz != NULL) { + /* Write an unused pattern to the queue memory. 
*/ + memset(queue->base_addr, 0x7F, queue->queue_size); + status = rte_memzone_free(mz); + if (status != 0) + QAT_LOG(ERR, "Error %d on freeing queue %s", + status, queue->memz_name); + } else { + QAT_LOG(DEBUG, "queue %s doesn't exist", + queue->memz_name); + } +} + +static int __rte_unused +adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable, + -ENOTSUP); + ops->qat_qp_adf_arb_enable(txq, base_addr, lock); return 0; } -static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, - uint32_t *p_queue_size_for_csr) +static int +adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) { - uint8_t i = ADF_MIN_RING_SIZE; + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; - for (; i <= ADF_MAX_RING_SIZE; i++) - if ((msg_size * msg_num) == - (uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) { - *p_queue_size_for_csr = i; - return 0; - } - QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num); - return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable, + -ENOTSUP); + ops->qat_qp_adf_arb_disable(txq, base_addr, lock); + return 0; } -static void -adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq, - void *base_addr, rte_spinlock_t *lock) +static int __rte_unused +qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr, + struct qat_queue *queue) { - uint32_t arb_csr_offset = 0, value; - - rte_spinlock_lock(lock); - if (qat_dev_gen == QAT_GEN4) { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_RING_BUNDLE_SIZE_GEN4 * - txq->hw_bundle_number); - value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, - arb_csr_offset); - } else { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_ARB_REG_SLOT * - txq->hw_bundle_number); - value = ADF_CSR_RD(base_addr, - arb_csr_offset); - } - value |= (0x01 << txq->hw_queue_number); - ADF_CSR_WR(base_addr, arb_csr_offset, value); - rte_spinlock_unlock(lock); + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base, + -ENOTSUP); + ops->qat_qp_build_ring_base(io_addr, queue); + return 0; } -static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, - struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock) +int +qat_qps_per_service(struct qat_pci_device *qat_dev, + enum qat_service_type service) { - uint32_t arb_csr_offset = 0, value; - - rte_spinlock_lock(lock); - if (qat_dev_gen == QAT_GEN4) { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_RING_BUNDLE_SIZE_GEN4 * - txq->hw_bundle_number); - value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF, - arb_csr_offset); - } else { - arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + - (ADF_ARB_REG_SLOT * - txq->hw_bundle_number); - value = ADF_CSR_RD(base_addr, - arb_csr_offset); - } - value &= ~(0x01 << txq->hw_queue_number); - ADF_CSR_WR(base_addr, arb_csr_offset, value); - rte_spinlock_unlock(lock); + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service, + -ENOTSUP); + return ops->qat_qp_rings_per_service(qat_dev, service); } -static void adf_configure_queues(struct qat_qp *qp, - enum qat_device_gen qat_dev_gen) +const struct qat_qp_hw_data * +qat_qp_get_hw_data(struct qat_pci_device 
*qat_dev, + enum qat_service_type service, uint16_t qp_id) { - uint32_t q_tx_config, q_resp_config; - struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; - - q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); - q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, - ADF_RING_NEAR_WATERMARK_512, - ADF_RING_NEAR_WATERMARK_0); - - if (qat_dev_gen == QAT_GEN4) { - WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, - q_tx->hw_bundle_number, q_tx->hw_queue_number, - q_tx_config); - WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr, - q_rx->hw_bundle_number, q_rx->hw_queue_number, - q_resp_config); - } else { - WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, - q_tx->hw_bundle_number, q_tx->hw_queue_number, - q_tx_config); - WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr, - q_rx->hw_bundle_number, q_rx->hw_queue_number, - q_resp_config); - } + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL); + return ops->qat_qp_get_hw_data(qat_dev, service, qp_id); } -static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask) +int +qat_read_qp_config(struct qat_pci_device *qat_dev) { - return data & modulo_mask; + struct qat_dev_hw_spec_funcs *ops_hw = + qat_dev_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config, + -ENOTSUP); + return ops_hw->qat_dev_read_config(qat_dev); +} + +static int __rte_unused +adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues, + -ENOTSUP); + ops->qat_qp_adf_configure_queues(qp); + return 0; } static inline void txq_write_tail(enum qat_device_gen qat_dev_gen, - struct qat_qp *qp, struct qat_queue *q) { + struct qat_qp *qp, struct qat_queue *q) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; - if (qat_dev_gen == QAT_GEN4) { - WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr, - q->hw_bundle_number, q->hw_queue_number, q->tail); - } else { - WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number, - q->hw_queue_number, q->tail); - } + /* + * Pointer check should be done during + * initialization + */ + ops->qat_qp_csr_write_tail(qp, q); } +static inline void +qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp, + struct qat_queue *q, uint32_t new_head) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev_gen]; + + /* + * Pointer check should be done during + * initialization + */ + ops->qat_qp_csr_write_head(qp, q, new_head); +} + +static int +qat_qp_csr_setup(struct qat_pci_device *qat_dev, + void *io_addr, struct qat_qp *qp) +{ + struct qat_qp_hw_spec_funcs *ops = + qat_qp_hw_spec[qat_dev->qat_dev_gen]; + + RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup, + -ENOTSUP); + ops->qat_qp_csr_setup(qat_dev, io_addr, qp); + return 0; +} + + static inline void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp, struct qat_queue *q) @@ -707,15 +516,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp, q->nb_processed_responses = 0; q->csr_head = new_head; - /* write current head to CSR */ - if (qat_dev_gen == QAT_GEN4) { - WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr, - q->hw_bundle_number, q->hw_queue_number, new_head); - } else { - WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number, - q->hw_queue_number, new_head); - } + qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head); +} + +static int +qat_qp_check_queue_alignment(uint64_t 
phys_addr, uint32_t queue_size_bytes) +{ + if (((queue_size_bytes - 1) & phys_addr) != 0) + return -EINVAL; + return 0; +} + +static int +adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num, + uint32_t *p_queue_size_for_csr) +{ + uint8_t i = ADF_MIN_RING_SIZE; + + for (; i <= ADF_MAX_RING_SIZE; i++) + if ((msg_size * msg_num) == + (uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) { + *p_queue_size_for_csr = i; + return 0; + } + QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num); + return -EINVAL; +} +static inline uint32_t +adf_modulo(uint32_t data, uint32_t modulo_mask) +{ + return data & modulo_mask; } uint16_t diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index 726cd2ef61..deafb407b3 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -12,16 +12,6 @@ #define QAT_QP_MIN_INFL_THRESHOLD 256 -/* Default qp configuration for GEN4 devices */ -#define QAT_GEN4_QP_DEFCON (QAT_SERVICE_SYMMETRIC | \ - QAT_SERVICE_SYMMETRIC << 8 | \ - QAT_SERVICE_SYMMETRIC << 16 | \ - QAT_SERVICE_SYMMETRIC << 24) - -/* QAT GEN 4 specific macros */ -#define QAT_GEN4_BUNDLE_NUM 4 -#define QAT_GEN4_QPS_PER_BUNDLE_NUM 1 - struct qat_pci_device; /** @@ -106,7 +96,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev, int qat_qps_per_service(struct qat_pci_device *qat_dev, - enum qat_service_type service); + enum qat_service_type service); + +const struct qat_qp_hw_data * +qat_qp_get_hw_data(struct qat_pci_device *qat_dev, + enum qat_service_type service, uint16_t qp_id); int qat_cq_get_fw_version(struct qat_qp *qp); @@ -116,11 +110,6 @@ int qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, void *op_cookie __rte_unused, uint64_t *dequeue_err_count __rte_unused); - -int -qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id, - enum qat_service_type service_type); - int qat_read_qp_config(struct qat_pci_device *qat_dev); @@ -166,7 +155,4 @@ struct qat_qp_hw_spec_funcs { extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[]; -extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE]; -extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE]; - #endif /* _QAT_QP_H_ */ diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index d4f087733f..5b8ee4bee6 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, int ret = 0; uint32_t i; struct qat_qp_config qat_qp_conf; - const struct qat_qp_hw_data *sym_hw_qps = NULL; - const struct qat_qp_hw_data *qp_hw_data = NULL; - struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); struct qat_sym_dev_private *qat_private = dev->data->dev_private; struct qat_pci_device *qat_dev = qat_private->qat_dev; - if (qat_dev->qat_dev_gen == QAT_GEN4) { - int ring_pair = - qat_select_valid_queue(qat_dev, qp_id, - QAT_SERVICE_SYMMETRIC); - - if (ring_pair < 0) { - QAT_LOG(ERR, - "qp_id %u invalid for this device, no enough services allocated for GEN4 device", - qp_id); - return -EINVAL; - } - sym_hw_qps = - &qat_dev->qp_gen4_data[0][0]; - qp_hw_data = - &qat_dev->qp_gen4_data[ring_pair][0]; - } else { - sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_SYMMETRIC]; - qp_hw_data = sym_hw_qps + qp_id; - } - /* If qp is already in use free ring memory and qp metadata. 
*/ if (*qp_addr != NULL) { ret = qat_sym_qp_release(dev, qp_id); @@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, return -EINVAL; } - qat_qp_conf.hw = qp_hw_data; + qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC, + qp_id); + if (qat_qp_conf.hw == NULL) { + QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); + return -EINVAL; + } + qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie); qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors; qat_qp_conf.socket_id = socket_id; From patchwork Fri Oct 22 17:03:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 102688 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 167E3A0C43; Fri, 22 Oct 2021 19:04:33 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B340B4113E; Fri, 22 Oct 2021 19:04:13 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 8DA16410EA for ; Fri, 22 Oct 2021 19:04:06 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10145"; a="315546595" X-IronPort-AV: E=Sophos;i="5.87,173,1631602800"; d="scan'208";a="315546595" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Oct 2021 10:04:06 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.87,173,1631602800"; d="scan'208";a="569279788" Received: from silpixa00400885.ir.intel.com ([10.243.23.122]) by FMSMGA003.fm.intel.com with ESMTP; 22 Oct 2021 10:04:04 -0700 From: Fan Zhang To: dev@dpdk.org Cc: gakhil@marvell.com, Fan Zhang , Adam Dybkowski , Arek Kusztal , Kai Ji Date: Fri, 22 Oct 2021 18:03:50 +0100 Message-Id: <20211022170354.13503-6-roy.fan.zhang@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211022170354.13503-1-roy.fan.zhang@intel.com> References: <20211014161137.1405168-1-roy.fan.zhang@intel.com> <20211022170354.13503-1-roy.fan.zhang@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data and function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds the compression data structure and function prototypes for different QAT generations. 
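For context, the per-generation hooks in this series all follow one dispatch shape: each generation registers an ops table from an RTE_INIT constructor, and common code indexes the table by qat_dev_gen, guarding unset hooks with RTE_FUNC_PTR_OR_ERR_RET (returning -ENOTSUP). The sketch below is a minimal, compilable model of that pattern, not code from this patch; all demo_* names are hypothetical stand-ins for the real qat_qp_hw_spec[], qat_dev_hw_spec[] and qat_comp_gen_dev_ops[] tables.

/*
 * Minimal model of the gen-indexed ops dispatch used across this series.
 * All demo_* names are hypothetical, used for illustration only.
 */
#include <stdio.h>
#include <errno.h>

enum demo_dev_gen { DEMO_GEN1, DEMO_GEN4, DEMO_N_GENS };

struct demo_hw_spec_funcs {
	int (*rings_per_service)(void); /* may stay NULL if a gen lacks support */
};

static struct demo_hw_spec_funcs *demo_hw_spec[DEMO_N_GENS];

static int demo_rings_gen1(void) { return 2; }
static int demo_rings_gen4(void) { return 4; }

static struct demo_hw_spec_funcs demo_ops_gen1 = {
	.rings_per_service = demo_rings_gen1,
};
static struct demo_hw_spec_funcs demo_ops_gen4 = {
	.rings_per_service = demo_rings_gen4,
};

/* Common code: mirrors the RTE_FUNC_PTR_OR_ERR_RET() unset-hook guard. */
static int
demo_qps_per_service(enum demo_dev_gen gen)
{
	struct demo_hw_spec_funcs *ops = demo_hw_spec[gen];

	if (ops == NULL || ops->rings_per_service == NULL)
		return -ENOTSUP;
	return ops->rings_per_service();
}

int main(void)
{
	/* In the driver this registration runs in per-gen RTE_INIT hooks. */
	demo_hw_spec[DEMO_GEN1] = &demo_ops_gen1;
	demo_hw_spec[DEMO_GEN4] = &demo_ops_gen4;

	printf("gen1 qps: %d\n", demo_qps_per_service(DEMO_GEN1));
	printf("gen4 qps: %d\n", demo_qps_per_service(DEMO_GEN4));
	return 0;
}

The same lookup is applied to compression in the diff below: qat_comp_gen_dev_ops[qat_dev_gen] supplies qat_comp_get_capabilities, qat_comp_get_ram_bank_flags and qat_comp_set_slice_cfg_word, so qat_comp.c and qat_comp_pmd.c no longer branch on QAT_GEN4 directly.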
Signed-off-by: Adam Dybkowski Signed-off-by: Arek Kusztal Signed-off-by: Fan Zhang Signed-off-by: Kai Ji Acked-by: Ciara Power --- drivers/common/qat/dev/qat_dev_gen1.c | 2 - .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++ .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h | 299 ++++++++++++++++++ drivers/common/qat/qat_common.h | 4 +- drivers/common/qat/qat_device.h | 7 - drivers/compress/qat/qat_comp.c | 101 +++--- drivers/compress/qat/qat_comp.h | 8 +- drivers/compress/qat/qat_comp_pmd.c | 159 ++++------ drivers/compress/qat/qat_comp_pmd.h | 76 +++++ 9 files changed, 675 insertions(+), 176 deletions(-) create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c index cc63b55bd1..38757e6e40 100644 --- a/drivers/common/qat/dev/qat_dev_gen1.c +++ b/drivers/common/qat/dev/qat_dev_gen1.c @@ -251,6 +251,4 @@ RTE_INIT(qat_dev_gen_gen1_init) qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1; qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1; qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1; - qat_gen_config[QAT_GEN1].comp_num_im_bufs_required = - QAT_NUM_INTERM_BUFS_GEN1; } diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h new file mode 100644 index 0000000000..ec69dc7105 --- /dev/null +++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h @@ -0,0 +1,195 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef _ICP_QAT_HW_GEN4_COMP_H_ +#define _ICP_QAT_HW_GEN4_COMP_H_ + +#include "icp_qat_fw.h" +#include "icp_qat_hw_gen4_comp_defs.h" + +struct icp_qat_hw_comp_20_config_csr_lower { + icp_qat_hw_comp_20_extended_delay_match_mode_t edmm; + icp_qat_hw_comp_20_hw_comp_format_t algo; + icp_qat_hw_comp_20_search_depth_t sd; + icp_qat_hw_comp_20_hbs_control_t hbs; + icp_qat_hw_comp_20_abd_t abd; + icp_qat_hw_comp_20_lllbd_ctrl_t lllbd; + icp_qat_hw_comp_20_min_match_control_t mmctrl; + icp_qat_hw_comp_20_skip_hash_collision_t hash_col; + icp_qat_hw_comp_20_skip_hash_update_t hash_update; + icp_qat_hw_comp_20_byte_skip_t skip_ctrl; +}; + +static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER( + struct icp_qat_hw_comp_20_config_csr_lower csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, csr.algo, + ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK); + + QAT_FIELD_SET(val32, csr.sd, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK); + + QAT_FIELD_SET(val32, csr.edmm, + ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK); + + QAT_FIELD_SET(val32, csr.hbs, + ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.lllbd, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK); + + QAT_FIELD_SET(val32, csr.mmctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.hash_col, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK); + + QAT_FIELD_SET(val32, csr.hash_update, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS, + 
ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK); + + QAT_FIELD_SET(val32, csr.skip_ctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK); + + QAT_FIELD_SET(val32, csr.abd, + ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK); + + QAT_FIELD_SET(val32, csr.lllbd, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK); + + return rte_bswap32(val32); +} + +struct icp_qat_hw_comp_20_config_csr_upper { + icp_qat_hw_comp_20_scb_control_t scb_ctrl; + icp_qat_hw_comp_20_rmb_control_t rmb_ctrl; + icp_qat_hw_comp_20_som_control_t som_ctrl; + icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl; + icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl; + icp_qat_hw_comp_20_disable_token_fusion_control_t + disable_token_fusion_ctrl; + icp_qat_hw_comp_20_lbms_t lbms; + icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset; + uint16_t lazy; + uint16_t nice; +}; + +static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER( + struct icp_qat_hw_comp_20_config_csr_upper csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, csr.scb_ctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.rmb_ctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.som_ctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.skip_hash_ctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.scb_unload_ctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl, + ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.lbms, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK); + + QAT_FIELD_SET(val32, csr.scb_mode_reset, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK); + + QAT_FIELD_SET(val32, csr.lazy, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK); + + QAT_FIELD_SET(val32, csr.nice, + ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS, + ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK); + + return rte_bswap32(val32); +} + +struct icp_qat_hw_decomp_20_config_csr_lower { + icp_qat_hw_decomp_20_hbs_control_t hbs; + icp_qat_hw_decomp_20_lbms_t lbms; + icp_qat_hw_decomp_20_hw_comp_format_t algo; + icp_qat_hw_decomp_20_min_match_control_t mmctrl; + icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc; +}; + +static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER( + struct icp_qat_hw_decomp_20_config_csr_lower csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, csr.hbs, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.lbms, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK); + + QAT_FIELD_SET(val32, csr.algo, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK); + + 
QAT_FIELD_SET(val32, csr.mmctrl, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.lbc, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK); + + return rte_bswap32(val32); +} + +struct icp_qat_hw_decomp_20_config_csr_upper { + icp_qat_hw_decomp_20_speculative_decoder_control_t sdc; + icp_qat_hw_decomp_20_mini_cam_control_t mcc; +}; + +static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER( + struct icp_qat_hw_decomp_20_config_csr_upper csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, csr.sdc, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK); + + QAT_FIELD_SET(val32, csr.mcc, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS, + ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK); + + return rte_bswap32(val32); +} + +#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */ diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h new file mode 100644 index 0000000000..ad02d06b12 --- /dev/null +++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h @@ -0,0 +1,299 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H +#define _ICP_QAT_HW_GEN4_COMP_DEFS_H + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS 31 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0, + ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1, +} icp_qat_hw_comp_20_scb_control_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS 30 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0, + ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1, +} icp_qat_hw_comp_20_rmb_control_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS 28 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK 0x3 + +typedef enum { + ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0, + ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1, + ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2, + ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3, +} icp_qat_hw_comp_20_som_control_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS 27 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0, + ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1, +} icp_qat_hw_comp_20_skip_hash_rd_control_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS 26 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0, + ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1, +} icp_qat_hw_comp_20_scb_unload_control_t; + +#define 
ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0, + ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1, +} icp_qat_hw_comp_20_disable_token_fusion_control_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS 19 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK 0x3 + +typedef enum { + ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0, + ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1, + ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2, + ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3, +} icp_qat_hw_comp_20_lbms_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS 18 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0, + ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1, +} icp_qat_hw_comp_20_scb_mode_reset_mask_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS 9 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK 0x1ff +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL 258 + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS 0 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK 0x1ff +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259 + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS 14 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK 0x7 + +typedef enum { + ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0, +} icp_qat_hw_comp_20_hbs_control_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS 13 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0, + ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1, +} icp_qat_hw_comp_20_abd_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS 12 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0, + ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1, +} icp_qat_hw_comp_20_lllbd_ctrl_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS 8 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK 0xf + +typedef enum { + ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1, + ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3, + ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4, +} icp_qat_hw_comp_20_search_depth_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS 5 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK 0x7 + +typedef enum { + 
ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0, + ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1, + ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2, + ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3, +} icp_qat_hw_comp_20_hw_comp_format_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS 4 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0, + ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1, +} icp_qat_hw_comp_20_min_match_control_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS 3 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0, + ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1, +} icp_qat_hw_comp_20_skip_hash_collision_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS 2 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0, + ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1, +} icp_qat_hw_comp_20_skip_hash_update_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS 1 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0, + ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1, +} icp_qat_hw_comp_20_byte_skip_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS 0 +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK 0x1 + +typedef enum { + ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0, + ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1, +} icp_qat_hw_comp_20_extended_delay_match_mode_t; + +#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \ + ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31 +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0, + ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1, +} icp_qat_hw_decomp_20_speculative_decoder_control_t; + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\ + ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS 30 +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0, + ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1, +} icp_qat_hw_decomp_20_mini_cam_control_t; + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS 14 +#define 
ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK 0x7 + +typedef enum { + ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0, +} icp_qat_hw_decomp_20_hbs_control_t; + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS 8 +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK 0x3 + +typedef enum { + ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0, + ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1, + ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2, + ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3, +} icp_qat_hw_decomp_20_lbms_t; + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS 5 +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK 0x7 + +typedef enum { + ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1, + ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2, + ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3, +} icp_qat_hw_decomp_20_hw_comp_format_t; + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS 4 +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK 0x1 + +typedef enum { + ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0, + ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1, +} icp_qat_hw_decomp_20_min_match_control_t; + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3 +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK 0x1 + +typedef enum { + ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT = 0x0, + ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT = 0x1, +} icp_qat_hw_decomp_20_lz4_block_checksum_present_t; + +#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \ + ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT + +#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */ diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h index 1889ec4e88..a7632e31f8 100644 --- a/drivers/common/qat/qat_common.h +++ b/drivers/common/qat/qat_common.h @@ -13,9 +13,9 @@ #define QAT_64_BTYE_ALIGN_MASK (~0x3f) /* Intel(R) QuickAssist Technology device generation is enumerated - * from one according to the generation of the device + * from one according to the generation of the device. 
+ * QAT_GEN* is used as the index to find all devices */ - enum qat_device_gen { QAT_GEN1, QAT_GEN2, diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index 8233cc045d..e7c7e9af95 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -49,12 +49,6 @@ struct qat_dev_cmd_param { uint16_t val; }; -enum qat_comp_num_im_buffers { - QAT_NUM_INTERM_BUFS_GEN1 = 12, - QAT_NUM_INTERM_BUFS_GEN2 = 20, - QAT_NUM_INTERM_BUFS_GEN3 = 64 -}; - struct qat_device_info { const struct rte_memzone *mz; /**< mz to store the qat_pci_device so it can be @@ -137,7 +131,6 @@ struct qat_pci_device { struct qat_gen_hw_data { enum qat_device_gen dev_gen; const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE]; - enum qat_comp_num_im_buffers comp_num_im_bufs_required; struct qat_pf2vf_dev *pf2vf_dev; }; diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c index 7ac25a3b4c..e8f57c3cc4 100644 --- a/drivers/compress/qat/qat_comp.c +++ b/drivers/compress/qat/qat_comp.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018-2019 Intel Corporation + * Copyright(c) 2018-2021 Intel Corporation */ #include @@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg, return 0; } -static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask) +static inline uint32_t +adf_modulo(uint32_t data, uint32_t modulo_mask) { return data & modulo_mask; } @@ -793,8 +794,9 @@ qat_comp_stream_size(void) return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8); } -static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header, - enum qat_comp_request_type request) +static void +qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header, + enum qat_comp_request_type request) { if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS) header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC; @@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header, QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT); } -static int qat_comp_create_templates(struct qat_comp_xform *qat_xform, - const struct rte_memzone *interm_buff_mz, - const struct rte_comp_xform *xform, - const struct qat_comp_stream *stream, - enum rte_comp_op_type op_type) +static int +qat_comp_create_templates(struct qat_comp_xform *qat_xform, + const struct rte_memzone *interm_buff_mz, + const struct rte_comp_xform *xform, + const struct qat_comp_stream *stream, + enum rte_comp_op_type op_type, + enum qat_device_gen qat_dev_gen) { struct icp_qat_fw_comp_req *comp_req; - int comp_level, algo; uint32_t req_par_flags; - int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS; + int res; if (unlikely(qat_xform == NULL)) { QAT_LOG(ERR, "Session was not created for this device"); @@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform, } } - if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) { - direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS; - comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1; + if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP, ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV, ICP_QAT_FW_COMP_CNV_RECOVERY); - } else { - if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT) - comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8; - else if (xform->compress.level == 1) - comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1; - else if 
(xform->compress.level == 2) - comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4; - else if (xform->compress.level == 3) - comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8; - else if (xform->compress.level >= 4 && - xform->compress.level <= 9) - comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16; - else { - QAT_LOG(ERR, "compression level not supported"); - return -EINVAL; - } + else req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD( ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP, ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV, ICP_QAT_FW_COMP_CNV_RECOVERY); - } - - switch (xform->compress.algo) { - case RTE_COMP_ALGO_DEFLATE: - algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE; - break; - case RTE_COMP_ALGO_LZS: - default: - /* RTE_COMP_NULL */ - QAT_LOG(ERR, "compression algorithm not supported"); - return -EINVAL; - } comp_req = &qat_xform->qat_comp_req_tmpl; @@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform, comp_req->comp_cd_ctrl.comp_state_addr = stream->state_registers_decomp_phys; - /* Enable A, B, C, D, and E (CAMs). */ + /* RAM bank flags */ comp_req->comp_cd_ctrl.ram_bank_flags = - ICP_QAT_FW_COMP_RAM_FLAGS_BUILD( - ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */ - ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */ - ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */ - ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */ - ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank E */ - ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank D */ - ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank C */ - ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank B */ - ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */ + qat_comp_gen_dev_ops[qat_dev_gen] + .qat_comp_get_ram_bank_flags(); comp_req->comp_cd_ctrl.ram_banks_addr = stream->inflate_context_phys; @@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform, ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF); } - comp_req->cd_pars.sl.comp_slice_cfg_word[0] = - ICP_QAT_HW_COMPRESSION_CONFIG_BUILD( - direction, - /* In CPM 1.6 only valid mode ! */ - ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo, - /* Translate level to depth */ - comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0); + res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word( + qat_xform, xform, op_type, + comp_req->cd_pars.sl.comp_slice_cfg_word); + if (res) + return res; comp_req->comp_pars.initial_adler = 1; comp_req->comp_pars.initial_crc32 = 0; @@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform, ICP_QAT_FW_SLICE_XLAT); comp_req->u1.xlt_pars.inter_buff_ptr = - interm_buff_mz->iova; + (qat_comp_get_num_im_bufs_required(qat_dev_gen) + == 0) ? 
0 : interm_buff_mz->iova; } #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG @@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev, void **private_xform) { struct qat_comp_dev_private *qat = dev->data->dev_private; + enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen; + unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen); if (unlikely(private_xform == NULL)) { QAT_LOG(ERR, "QAT: private_xform parameter is NULL"); @@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev, if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED || ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT) - && qat->interm_buff_mz == NULL)) + && qat->interm_buff_mz == NULL + && im_bufs > 0)) qat_xform->qat_comp_request_type = QAT_COMP_REQUEST_FIXED_COMP_STATELESS; @@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev, RTE_COMP_HUFFMAN_DYNAMIC || xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT) && - qat->interm_buff_mz != NULL) + (qat->interm_buff_mz != NULL || + im_bufs == 0)) qat_xform->qat_comp_request_type = QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS; @@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev, } if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform, - NULL, RTE_COMP_OP_STATELESS)) { + NULL, RTE_COMP_OP_STATELESS, + qat_dev_gen)) { QAT_LOG(ERR, "QAT: Problem with setting compression"); return -EINVAL; } @@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev, ptr->qat_xform.checksum_type = xform->decompress.chksum; if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz, - xform, ptr, RTE_COMP_OP_STATEFUL)) { + xform, ptr, RTE_COMP_OP_STATEFUL, + qat->qat_dev->qat_dev_gen)) { QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream"); rte_mempool_put(qat->streampool, *stream); *stream = NULL; diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h index 0444b50a1e..da7b9a6eec 100644 --- a/drivers/compress/qat/qat_comp.h +++ b/drivers/compress/qat/qat_comp.h @@ -28,14 +28,16 @@ #define QAT_MIN_OUT_BUF_SIZE 46 /* maximum size of the state registers */ -#define QAT_STATE_REGISTERS_MAX_SIZE 64 +#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */ /* decompressor context size */ #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032 -#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\ - QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3) +#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864 +#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\ + QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \ + QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4) enum qat_comp_request_type { QAT_COMP_REQUEST_FIXED_COMP_STATELESS, diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index caac7839e9..9b24d46e97 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -9,30 +9,29 @@ #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16 +struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS]; + struct stream_create_info { struct qat_comp_dev_private *comp_dev; int socket_id; int error; }; -static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = { - {/* COMPRESSION - deflate */ - .algo = RTE_COMP_ALGO_DEFLATE, - .comp_feature_flags = 
RTE_COMP_FF_MULTI_PKT_CHECKSUM | - RTE_COMP_FF_CRC32_CHECKSUM | - RTE_COMP_FF_ADLER32_CHECKSUM | - RTE_COMP_FF_CRC32_ADLER32_CHECKSUM | - RTE_COMP_FF_SHAREABLE_PRIV_XFORM | - RTE_COMP_FF_HUFFMAN_FIXED | - RTE_COMP_FF_HUFFMAN_DYNAMIC | - RTE_COMP_FF_OOP_SGL_IN_SGL_OUT | - RTE_COMP_FF_OOP_SGL_IN_LB_OUT | - RTE_COMP_FF_OOP_LB_IN_SGL_OUT | - RTE_COMP_FF_STATEFUL_DECOMPRESSION, - .window_size = {.min = 15, .max = 15, .increment = 0} }, - {RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } }; +static struct +qat_comp_capabilities_info qat_comp_get_capa_info( + enum qat_device_gen qat_dev_gen, struct qat_pci_device *qat_dev) +{ + struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 }; -static void + if (qat_dev_gen >= QAT_N_GENS) + return ret; + RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen] + .qat_comp_get_capabilities, ret); + return qat_comp_gen_dev_ops[qat_dev_gen] + .qat_comp_get_capabilities(qat_dev); +} + +void qat_comp_stats_get(struct rte_compressdev *dev, struct rte_compressdev_stats *stats) { @@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev, stats->dequeue_err_count = qat_stats.dequeue_err_count; } -static void +void qat_comp_stats_reset(struct rte_compressdev *dev) { struct qat_comp_dev_private *qat_priv; @@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev) } -static int +int qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id) { struct qat_comp_dev_private *qat_private = dev->data->dev_private; @@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id) &(dev->data->queue_pairs[queue_pair_id])); } -static int +int qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, - uint32_t max_inflight_ops, int socket_id) + uint32_t max_inflight_ops, int socket_id) { - struct qat_qp *qp; - int ret = 0; - uint32_t i; - struct qat_qp_config qat_qp_conf; - + struct qat_qp_config qat_qp_conf = {0}; struct qat_qp **qp_addr = (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); struct qat_comp_dev_private *qat_private = dev->data->dev_private; struct qat_pci_device *qat_dev = qat_private->qat_dev; - const struct qat_qp_hw_data *comp_hw_qps = - qat_gen_config[qat_private->qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_COMPRESSION]; - const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id; + struct qat_qp *qp; + uint32_t i; + int ret; /* If qp is already in use free ring memory and qp metadata. 
*/ if (*qp_addr != NULL) { @@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, return -EINVAL; } - qat_qp_conf.hw = qp_hw_data; + + qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION, + qp_id); + if (qat_qp_conf.hw == NULL) { + QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); + return -EINVAL; + } qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie); qat_qp_conf.nb_descriptors = max_inflight_ops; qat_qp_conf.socket_id = socket_id; @@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf); if (ret != 0) return ret; - /* store a link to the qp in the qat_pci_device */ qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id] = *qp_addr; @@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, #define QAT_IM_BUFFER_DEBUG 0 -static const struct rte_memzone * +const struct rte_memzone * qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev, uint32_t buff_size) { @@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev, uint32_t full_size; uint32_t offset_of_flat_buffs; int i; - int num_im_sgls = qat_gen_config[ - comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required; + int num_im_sgls = qat_comp_get_num_im_bufs_required( + comp_dev->qat_dev->qat_dev_gen); QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls", comp_dev->qat_dev->name, num_im_sgls); @@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev) /* Free intermediate buffers */ if (comp_dev->interm_buff_mz) { char mz_name[RTE_MEMZONE_NAMESIZE]; - int i = qat_gen_config[ - comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required; + int i = qat_comp_get_num_im_bufs_required( + comp_dev->qat_dev->qat_dev_gen); while (--i >= 0) { snprintf(mz_name, RTE_MEMZONE_NAMESIZE, @@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev) } } -static int +int qat_comp_dev_config(struct rte_compressdev *dev, struct rte_compressdev_config *config) { struct qat_comp_dev_private *comp_dev = dev->data->dev_private; int ret = 0; - if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) { - QAT_LOG(WARNING, - "RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so" - " QAT device can't be used for Dynamic Deflate. 
" - "Did you really intend to do this?"); - } else { - comp_dev->interm_buff_mz = - qat_comp_setup_inter_buffers(comp_dev, - RTE_PMD_QAT_COMP_IM_BUFFER_SIZE); - if (comp_dev->interm_buff_mz == NULL) { - ret = -ENOMEM; - goto error_out; - } - } - if (config->max_nb_priv_xforms) { comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev, config, config->max_nb_priv_xforms); @@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev, return ret; } -static int +int qat_comp_dev_start(struct rte_compressdev *dev __rte_unused) { return 0; } -static void +void qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused) { } -static int +int qat_comp_dev_close(struct rte_compressdev *dev) { int i; @@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev) return ret; } - -static void +void qat_comp_dev_info_get(struct rte_compressdev *dev, struct rte_compressdev_info *info) { @@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops, return ret; } -static struct rte_compressdev_ops compress_qat_ops = { - - /* Device related operations */ - .dev_configure = qat_comp_dev_config, - .dev_start = qat_comp_dev_start, - .dev_stop = qat_comp_dev_stop, - .dev_close = qat_comp_dev_close, - .dev_infos_get = qat_comp_dev_info_get, - - .stats_get = qat_comp_stats_get, - .stats_reset = qat_comp_stats_reset, - .queue_pair_setup = qat_comp_qp_setup, - .queue_pair_release = qat_comp_qp_release, - - /* Compression related operations */ - .private_xform_create = qat_comp_private_xform_create, - .private_xform_free = qat_comp_private_xform_free, - .stream_create = qat_comp_stream_create, - .stream_free = qat_comp_stream_free -}; - /* An rte_driver is needed in the registration of the device with compressdev. * The actual qat pci's rte_driver can't be used as its name represents * the whole pci device with all services. 
Think of this as a holder for a name @@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = { .name = qat_comp_drv_name, .alias = qat_comp_drv_name }; + int qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, struct qat_dev_cmd_param *qat_dev_cmd_param) @@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN]; struct rte_compressdev *compressdev; struct qat_comp_dev_private *comp_dev; + struct qat_comp_capabilities_info capabilities_info; const struct rte_compressdev_capabilities *capabilities; + const struct qat_comp_gen_dev_ops *qat_comp_gen_ops = + &qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen]; uint64_t capa_size; - if (qat_pci_dev->qat_dev_gen == QAT_GEN4) { - QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx"); - return -EFAULT; - } snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s", qat_pci_dev->name, "comp"); QAT_LOG(DEBUG, "Creating QAT COMP device %s", name); + if (qat_comp_gen_ops->compressdev_ops == NULL) { + QAT_LOG(DEBUG, "Device %s does not support compression", name); + return -ENOTSUP; + } + /* Populate subset device to use in compressdev device creation */ qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver; qat_dev_instance->comp_rte_dev.numa_node = @@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, if (compressdev == NULL) return -ENODEV; - compressdev->dev_ops = &compress_qat_ops; + compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops; compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t) qat_enqueue_comp_op_burst; compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst; - - compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; + compressdev->feature_flags = + qat_comp_gen_ops->qat_comp_get_feature_flags(); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, comp_dev->qat_dev = qat_pci_dev; comp_dev->compressdev = compressdev; - switch (qat_pci_dev->qat_dev_gen) { - case QAT_GEN1: - case QAT_GEN2: - case QAT_GEN3: - capabilities = qat_comp_gen_capabilities; - capa_size = sizeof(qat_comp_gen_capabilities); - break; - default: - capabilities = qat_comp_gen_capabilities; - capa_size = sizeof(qat_comp_gen_capabilities); + capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen, + qat_pci_dev); + + if (capabilities_info.data == NULL) { QAT_LOG(DEBUG, "QAT gen %d capabilities unknown, default to GEN1", qat_pci_dev->qat_dev_gen); - break; + capabilities_info = qat_comp_get_capa_info(QAT_GEN1, + qat_pci_dev); } + capabilities = capabilities_info.data; + capa_size = capabilities_info.size; + comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name); if (comp_dev->capa_mz == NULL) { comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name, diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h index 252b4b24e3..86317a513c 100644 --- a/drivers/compress/qat/qat_comp_pmd.h +++ b/drivers/compress/qat/qat_comp_pmd.h @@ -11,10 +11,44 @@ #include #include "qat_device.h" +#include "qat_comp.h" /**< Intel(R) QAT Compression PMD driver name */ #define COMPRESSDEV_NAME_QAT_PMD compress_qat +/* Private data structure for a QAT compression device capability. */ +struct qat_comp_capabilities_info { + const struct rte_compressdev_capabilities *data; + uint64_t size; +}; + +/** + * Function prototypes for GENx specific compress device operations. 
+ **/ +typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t) + (struct qat_pci_device *qat_dev); + +typedef uint16_t (*get_comp_ram_bank_flags_t)(void); + +typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform, + const struct rte_comp_xform *xform, + enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word); + +typedef unsigned int (*get_comp_num_im_bufs_required_t)(void); + +typedef uint64_t (*get_comp_feature_flags_t)(void); + +struct qat_comp_gen_dev_ops { + struct rte_compressdev_ops *compressdev_ops; + get_comp_feature_flags_t qat_comp_get_feature_flags; + get_comp_capabilities_info_t qat_comp_get_capabilities; + get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags; + set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word; + get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required; +}; + +extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[]; + /** private data structure for a QAT compression device. * This QAT device is a device offering only a compression service, * there can be one of these on each qat_pci_device (VF). @@ -37,6 +71,41 @@ struct qat_comp_dev_private { uint16_t min_enq_burst_threshold; }; +int +qat_comp_dev_config(struct rte_compressdev *dev, + struct rte_compressdev_config *config); + +int +qat_comp_dev_start(struct rte_compressdev *dev __rte_unused); + +void +qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused); + +int +qat_comp_dev_close(struct rte_compressdev *dev); + +void +qat_comp_dev_info_get(struct rte_compressdev *dev, + struct rte_compressdev_info *info); + +void +qat_comp_stats_get(struct rte_compressdev *dev, + struct rte_compressdev_stats *stats); + +void +qat_comp_stats_reset(struct rte_compressdev *dev); + +int +qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id); + +int +qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, + uint32_t max_inflight_ops, int socket_id); + +const struct rte_memzone * +qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev, + uint32_t buff_size); + int qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, struct qat_dev_cmd_param *qat_dev_cmd_param); @@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev, int qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev); + +static __rte_always_inline unsigned int +qat_comp_get_num_im_bufs_required(enum qat_device_gen gen) +{ + return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)(); +} + #endif #endif /* _QAT_COMP_PMD_H_ */ From patchwork Fri Oct 22 17:03:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 102689 X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang To: dev@dpdk.org Cc: gakhil@marvell.com, Fan Zhang , Adam Dybkowski , Arek Kusztal , Kai Ji Date: Fri, 22 Oct 2021 18:03:51 +0100 Message-Id: <20211022170354.13503-7-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific implementation This patch replaces the mixed QAT compression support implementation with separate files that provide either shared or generation-specific implementations for each QAT generation. Signed-off-by: Adam Dybkowski Signed-off-by: Arek Kusztal Signed-off-by: Fan Zhang Signed-off-by: Kai Ji Acked-by: Ciara Power --- drivers/common/qat/meson.build | 4 +- drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 175 +++++++++++++++ drivers/compress/qat/dev/qat_comp_pmd_gen2.c | 30 +++ drivers/compress/qat/dev/qat_comp_pmd_gen3.c | 30 +++ drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++ drivers/compress/qat/dev/qat_comp_pmd_gens.h | 30 +++ 6 files changed, 481 insertions(+), 1 deletion(-) create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 532e0fabb3..8a1c6d64e8 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -62,7 +62,9 @@ includes += include_directories( ) if qat_compress - foreach f: ['qat_comp_pmd.c', 'qat_comp.c'] + foreach f: ['qat_comp_pmd.c', 'qat_comp.c', + 'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c', + 'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c'] sources += files(join_paths(qat_compress_relpath, f)) endforeach endif diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c new file mode 100644 index 0000000000..8a8fa4aec5 --- /dev/null +++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c @@ -0,0 +1,175 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include +#include + +#include "qat_comp_pmd.h" +#include "qat_comp.h" +#include "qat_comp_pmd_gens.h" + +#define QAT_NUM_INTERM_BUFS_GEN1 12 + +const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = { + {/* COMPRESSION - deflate */ + .algo = RTE_COMP_ALGO_DEFLATE, + .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM | + RTE_COMP_FF_CRC32_CHECKSUM | + RTE_COMP_FF_ADLER32_CHECKSUM | + RTE_COMP_FF_CRC32_ADLER32_CHECKSUM | + RTE_COMP_FF_SHAREABLE_PRIV_XFORM | + RTE_COMP_FF_HUFFMAN_FIXED | + RTE_COMP_FF_HUFFMAN_DYNAMIC | + RTE_COMP_FF_OOP_SGL_IN_SGL_OUT | + RTE_COMP_FF_OOP_SGL_IN_LB_OUT | + RTE_COMP_FF_OOP_LB_IN_SGL_OUT | +
RTE_COMP_FF_STATEFUL_DECOMPRESSION, + .window_size = {.min = 15, .max = 15, .increment = 0} }, + {RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } }; + +static int +qat_comp_dev_config_gen1(struct rte_compressdev *dev, + struct rte_compressdev_config *config) +{ + struct qat_comp_dev_private *comp_dev = dev->data->dev_private; + + if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) { + QAT_LOG(WARNING, + "QAT device cannot be used for Dynamic Deflate."); + } else { + comp_dev->interm_buff_mz = + qat_comp_setup_inter_buffers(comp_dev, + RTE_PMD_QAT_COMP_IM_BUFFER_SIZE); + if (comp_dev->interm_buff_mz == NULL) + return -ENOMEM; + } + + return qat_comp_dev_config(dev, config); +} + +struct rte_compressdev_ops qat_comp_ops_gen1 = { + + /* Device related operations */ + .dev_configure = qat_comp_dev_config_gen1, + .dev_start = qat_comp_dev_start, + .dev_stop = qat_comp_dev_stop, + .dev_close = qat_comp_dev_close, + .dev_infos_get = qat_comp_dev_info_get, + + .stats_get = qat_comp_stats_get, + .stats_reset = qat_comp_stats_reset, + .queue_pair_setup = qat_comp_qp_setup, + .queue_pair_release = qat_comp_qp_release, + + /* Compression related operations */ + .private_xform_create = qat_comp_private_xform_create, + .private_xform_free = qat_comp_private_xform_free, + .stream_create = qat_comp_stream_create, + .stream_free = qat_comp_stream_free +}; + +struct qat_comp_capabilities_info +qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_comp_capabilities_info capa_info = { + .data = qat_gen1_comp_capabilities, + .size = sizeof(qat_gen1_comp_capabilities) + }; + return capa_info; +} + +uint16_t +qat_comp_get_ram_bank_flags_gen1(void) +{ + /* Enable A, B, C, D, and E (CAMs). */ + return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD( + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */ + ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank E */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank D */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank C */ + ICP_QAT_FW_COMP_BANK_ENABLED, /* Bank B */ + ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */ +} + +int +qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform, + const struct rte_comp_xform *xform, + __rte_unused enum rte_comp_op_type op_type, + uint32_t *comp_slice_cfg_word) +{ + unsigned int algo, comp_level, direction; + + if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) + algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE; + else { + QAT_LOG(ERR, "compression algorithm not supported"); + return -EINVAL; + } + + if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) { + direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS; + comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8; + } else { + direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS; + + if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT) + comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8; + else if (xform->compress.level == 1) + comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1; + else if (xform->compress.level == 2) + comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4; + else if (xform->compress.level == 3) + comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8; + else if (xform->compress.level >= 4 && + xform->compress.level <= 9) + comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16; + else { + QAT_LOG(ERR, "compression level not supported"); + return -EINVAL; + } + } + + comp_slice_cfg_word[0] = + ICP_QAT_HW_COMPRESSION_CONFIG_BUILD( + direction, + /* In CPM 1.6 only valid mode ! 
*/ + ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, + algo, + /* Translate level to depth */ + comp_level, + ICP_QAT_HW_COMPRESSION_FILE_TYPE_0); + + return 0; +} + +static unsigned int +qat_comp_get_num_im_bufs_required_gen1(void) +{ + return QAT_NUM_INTERM_BUFS_GEN1; +} + +uint64_t +qat_comp_get_features_gen1(void) +{ + return RTE_COMPDEV_FF_HW_ACCELERATED; +} + +RTE_INIT(qat_comp_pmd_gen1_init) +{ + qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops = + &qat_comp_ops_gen1; + qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities = + qat_comp_cap_get_gen1; + qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required = + qat_comp_get_num_im_bufs_required_gen1; + qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags = + qat_comp_get_ram_bank_flags_gen1; + qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word = + qat_comp_set_slice_cfg_word_gen1; + qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags = + qat_comp_get_features_gen1; +} diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c new file mode 100644 index 0000000000..fd6c966f26 --- /dev/null +++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_comp_pmd.h" +#include "qat_comp_pmd_gens.h" + +#define QAT_NUM_INTERM_BUFS_GEN2 20 + +static unsigned int +qat_comp_get_num_im_bufs_required_gen2(void) +{ + return QAT_NUM_INTERM_BUFS_GEN2; +} + +RTE_INIT(qat_comp_pmd_gen2_init) +{ + qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops = + &qat_comp_ops_gen1; + qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities = + qat_comp_cap_get_gen1; + qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required = + qat_comp_get_num_im_bufs_required_gen2; + qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags = + qat_comp_get_ram_bank_flags_gen1; + qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word = + qat_comp_set_slice_cfg_word_gen1; + qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags = + qat_comp_get_features_gen1; +} diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c new file mode 100644 index 0000000000..fccb0941f1 --- /dev/null +++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_comp_pmd.h" +#include "qat_comp_pmd_gens.h" + +#define QAT_NUM_INTERM_BUFS_GEN3 64 + +static unsigned int +qat_comp_get_num_im_bufs_required_gen3(void) +{ + return QAT_NUM_INTERM_BUFS_GEN3; +} + +RTE_INIT(qat_comp_pmd_gen3_init) +{ + qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops = + &qat_comp_ops_gen1; + qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities = + qat_comp_cap_get_gen1; + qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required = + qat_comp_get_num_im_bufs_required_gen3; + qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags = + qat_comp_get_ram_bank_flags_gen1; + qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word = + qat_comp_set_slice_cfg_word_gen1; + qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags = + qat_comp_get_features_gen1; +} diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c new file mode 100644 index 0000000000..79b2ceb414 --- /dev/null +++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c @@ -0,0 +1,213 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel 
Corporation + */ + +#include "qat_comp.h" +#include "qat_comp_pmd.h" +#include "qat_comp_pmd_gens.h" +#include "icp_qat_hw_gen4_comp.h" +#include "icp_qat_hw_gen4_comp_defs.h" + +#define QAT_NUM_INTERM_BUFS_GEN4 0 + +static const struct rte_compressdev_capabilities +qat_gen4_comp_capabilities[] = { + {/* COMPRESSION - deflate */ + .algo = RTE_COMP_ALGO_DEFLATE, + .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM | + RTE_COMP_FF_CRC32_CHECKSUM | + RTE_COMP_FF_ADLER32_CHECKSUM | + RTE_COMP_FF_CRC32_ADLER32_CHECKSUM | + RTE_COMP_FF_SHAREABLE_PRIV_XFORM | + RTE_COMP_FF_HUFFMAN_FIXED | + RTE_COMP_FF_HUFFMAN_DYNAMIC | + RTE_COMP_FF_OOP_SGL_IN_SGL_OUT | + RTE_COMP_FF_OOP_SGL_IN_LB_OUT | + RTE_COMP_FF_OOP_LB_IN_SGL_OUT, + .window_size = {.min = 15, .max = 15, .increment = 0} }, + {RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } }; + +static int +qat_comp_dev_config_gen4(struct rte_compressdev *dev, + struct rte_compressdev_config *config) +{ + /* QAT GEN4 doesn't need preallocated intermediate buffers */ + + return qat_comp_dev_config(dev, config); +} + +static struct rte_compressdev_ops qat_comp_ops_gen4 = { + + /* Device related operations */ + .dev_configure = qat_comp_dev_config_gen4, + .dev_start = qat_comp_dev_start, + .dev_stop = qat_comp_dev_stop, + .dev_close = qat_comp_dev_close, + .dev_infos_get = qat_comp_dev_info_get, + + .stats_get = qat_comp_stats_get, + .stats_reset = qat_comp_stats_reset, + .queue_pair_setup = qat_comp_qp_setup, + .queue_pair_release = qat_comp_qp_release, + + /* Compression related operations */ + .private_xform_create = qat_comp_private_xform_create, + .private_xform_free = qat_comp_private_xform_free, + .stream_create = qat_comp_stream_create, + .stream_free = qat_comp_stream_free +}; + +static struct qat_comp_capabilities_info +qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_comp_capabilities_info capa_info = { + .data = qat_gen4_comp_capabilities, + .size = sizeof(qat_gen4_comp_capabilities) + }; + return capa_info; +} + +static uint16_t +qat_comp_get_ram_bank_flags_gen4(void) +{ + return 0; +} + +static int +qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform, + const struct rte_comp_xform *xform, + enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word) +{ + if (qat_xform->qat_comp_request_type == + QAT_COMP_REQUEST_FIXED_COMP_STATELESS || + qat_xform->qat_comp_request_type == + QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) { + /* Compression */ + struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr; + struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr; + + memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr)); + memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr)); + + hw_comp_lower_csr.lllbd = + ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED; + + if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) { + hw_comp_lower_csr.skip_ctrl = + ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL; + + if (qat_xform->qat_comp_request_type == + QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) { + hw_comp_lower_csr.algo = + ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77; + hw_comp_lower_csr.lllbd = + ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED; + } else { + hw_comp_lower_csr.algo = + ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE; + hw_comp_upper_csr.scb_ctrl = + ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE; + } + + if (op_type == RTE_COMP_OP_STATEFUL) { + hw_comp_upper_csr.som_ctrl = + ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE; + } + } else { + QAT_LOG(ERR, "Compression algorithm not supported"); + return -EINVAL; + } + + switch 
(xform->compress.level) { + case 1: + case 2: + case 3: + case 4: + case 5: + hw_comp_lower_csr.sd = + ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1; + hw_comp_lower_csr.hash_col = + ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW; + break; + case 6: + case 7: + case 8: + case RTE_COMP_LEVEL_PMD_DEFAULT: + hw_comp_lower_csr.sd = + ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6; + break; + case 9: + case 10: + case 11: + case 12: + hw_comp_lower_csr.sd = + ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9; + break; + default: + QAT_LOG(ERR, "Compression level not supported"); + return -EINVAL; + } + + hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED; + hw_comp_lower_csr.hash_update = + ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW; + hw_comp_lower_csr.edmm = + ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED; + + hw_comp_upper_csr.nice = + ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL; + hw_comp_upper_csr.lazy = + ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL; + + comp_slice_cfg_word[0] = + ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER( + hw_comp_lower_csr); + comp_slice_cfg_word[1] = + ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER( + hw_comp_upper_csr); + } else { + /* Decompression */ + struct icp_qat_hw_decomp_20_config_csr_lower + hw_decomp_lower_csr; + + memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr)); + + if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) + hw_decomp_lower_csr.algo = + ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE; + else { + QAT_LOG(ERR, "Compression algorithm not supported"); + return -EINVAL; + } + + comp_slice_cfg_word[0] = + ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER( + hw_decomp_lower_csr); + comp_slice_cfg_word[1] = 0; + } + + return 0; +} + +static unsigned int +qat_comp_get_num_im_bufs_required_gen4(void) +{ + return QAT_NUM_INTERM_BUFS_GEN4; +} + + +RTE_INIT(qat_comp_pmd_gen4_init) +{ + qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops = + &qat_comp_ops_gen4; + qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities = + qat_comp_cap_get_gen4; + qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required = + qat_comp_get_num_im_bufs_required_gen4; + qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags = + qat_comp_get_ram_bank_flags_gen4; + qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word = + qat_comp_set_slice_cfg_word_gen4; + qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags = + qat_comp_get_features_gen1; +} diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h new file mode 100644 index 0000000000..35b75c56f1 --- /dev/null +++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef _QAT_COMP_PMD_GEN1_H_ +#define _QAT_COMP_PMD_GEN1_H_ + +#include +#include +#include + +#include "qat_comp_pmd.h" + +extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[]; + +struct qat_comp_capabilities_info +qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev); + +uint16_t qat_comp_get_ram_bank_flags_gen1(void); + +int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform, + const struct rte_comp_xform *xform, + enum rte_comp_op_type op_type, + uint32_t *comp_slice_cfg_word); + +uint64_t qat_comp_get_features_gen1(void); + +extern struct rte_compressdev_ops qat_comp_ops_gen1; + +#endif /* _QAT_COMP_PMD_GEN1_H_ */ From patchwork Fri Oct 22 17:03:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Fan Zhang X-Patchwork-Id: 102690 X-Patchwork-Delegate: gakhil@marvell.com From: Fan Zhang To: dev@dpdk.org Cc: gakhil@marvell.com, Fan Zhang , Arek Kusztal , Kai Ji Date: Fri, 22 Oct 2021 18:03:52 +0100 Message-Id: <20211022170354.13503-8-roy.fan.zhang@intel.com> Subject: [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data structure This patch unifies the QAT symmetric and asymmetric device private data structures and functions.
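A minimal, self-contained sketch of the idea behind this change (simplified stand-in types, not the actual DPDK structures): a single private struct carries a service-type tag in place of the separate qat_sym_dev_private and qat_asym_dev_private structs, so one set of callbacks can branch on the tag instead of being duplicated per service.

/* Sketch only: simplified stand-in types, not the DPDK API. */
#include <stdint.h>
#include <stdio.h>

enum service_type { SERVICE_SYM, SERVICE_ASYM };

/* One private struct for both crypto services, tagged by service type. */
struct cryptodev_private {
	uint8_t dev_id;                 /* device instance id */
	enum service_type service_type; /* selects per-service behaviour */
	uint16_t min_enq_burst_threshold;
};

static const char *
service_str(enum service_type type)
{
	return type == SERVICE_SYM ? "sym" : "asym";
}

/* A single callback serves both device types by branching on the tag. */
static void
dev_log(const struct cryptodev_private *priv)
{
	printf("dev %u (%s): enq threshold %u\n",
		(unsigned int)priv->dev_id,
		service_str(priv->service_type),
		(unsigned int)priv->min_enq_burst_threshold);
}

int
main(void)
{
	struct cryptodev_private sym = { 0, SERVICE_SYM, 32 };
	struct cryptodev_private asym = { 1, SERVICE_ASYM, 32 };

	dev_log(&sym);  /* prints: dev 0 (sym): enq threshold 32 */
	dev_log(&asym); /* prints: dev 1 (asym): enq threshold 32 */
	return 0;
}

This mirrors the qat_cryptodev_private structure introduced below, where qat_cryptodev_stats_get(), qat_cryptodev_qp_release() and qat_cryptodev_qp_setup() all key off qat_priv->service_type rather than hard-coding QAT_SERVICE_SYMMETRIC or QAT_SERVICE_ASYMMETRIC.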
Signed-off-by: Arek Kusztal Signed-off-by: Fan Zhang Signed-off-by: Kai Ji --- drivers/common/qat/meson.build | 2 +- drivers/common/qat/qat_common.c | 15 ++ drivers/common/qat/qat_common.h | 3 + drivers/common/qat/qat_device.h | 7 +- drivers/crypto/qat/qat_asym_pmd.c | 216 ++++------------------- drivers/crypto/qat/qat_asym_pmd.h | 29 +--- drivers/crypto/qat/qat_crypto.c | 172 ++++++++++++++++++ drivers/crypto/qat/qat_crypto.h | 78 +++++++++ drivers/crypto/qat/qat_sym_pmd.c | 250 +++++---------------------- drivers/crypto/qat/qat_sym_pmd.h | 21 +-- drivers/crypto/qat/qat_sym_session.c | 15 +- 11 files changed, 361 insertions(+), 447 deletions(-) create mode 100644 drivers/crypto/qat/qat_crypto.c create mode 100644 drivers/crypto/qat/qat_crypto.h diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 8a1c6d64e8..29fd0168ea 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -71,7 +71,7 @@ endif if qat_crypto foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c', - 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c'] + 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c'] sources += files(join_paths(qat_crypto_relpath, f)) endforeach deps += ['security'] diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c index 5343a1451e..59e7e02622 100644 --- a/drivers/common/qat/qat_common.c +++ b/drivers/common/qat/qat_common.c @@ -6,6 +6,21 @@ #include "qat_device.h" #include "qat_logs.h" +const char * +qat_service_get_str(enum qat_service_type type) +{ + switch (type) { + case QAT_SERVICE_SYMMETRIC: + return "sym"; + case QAT_SERVICE_ASYMMETRIC: + return "asym"; + case QAT_SERVICE_COMPRESSION: + return "comp"; + default: + return "invalid"; + } +} + int qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset, void *list_in, uint32_t data_len, diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h index a7632e31f8..9411a79301 100644 --- a/drivers/common/qat/qat_common.h +++ b/drivers/common/qat/qat_common.h @@ -91,4 +91,7 @@ void qat_stats_reset(struct qat_pci_device *dev, enum qat_service_type service); +const char * +qat_service_get_str(enum qat_service_type type); + #endif /* _QAT_COMMON_H_ */ diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index e7c7e9af95..85fae7b7c7 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -76,8 +76,7 @@ struct qat_device_info { extern struct qat_device_info qat_pci_devs[]; -struct qat_sym_dev_private; -struct qat_asym_dev_private; +struct qat_cryptodev_private; struct qat_comp_dev_private; /* @@ -106,14 +105,14 @@ struct qat_pci_device { /**< links to qps set up for each service, index same as on API */ /* Data relating to symmetric crypto service */ - struct qat_sym_dev_private *sym_dev; + struct qat_cryptodev_private *sym_dev; /**< link back to cryptodev private data */ int qat_sym_driver_id; /**< Symmetric driver id used by this device */ /* Data relating to asymmetric crypto service */ - struct qat_asym_dev_private *asym_dev; + struct qat_cryptodev_private *asym_dev; /**< link back to cryptodev private data */ int qat_asym_driver_id; diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c index 0944d27a4d..042f39ddcc 100644 --- a/drivers/crypto/qat/qat_asym_pmd.c +++ b/drivers/crypto/qat/qat_asym_pmd.c @@ -6,6 +6,7 @@ #include "qat_logs.h" +#include "qat_crypto.h" #include "qat_asym.h" #include "qat_asym_pmd.h" #include "qat_sym_capabilities.h" @@ 
-18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = { RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() }; -static int qat_asym_qp_release(struct rte_cryptodev *dev, - uint16_t queue_pair_id); - -static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev, - __rte_unused struct rte_cryptodev_config *config) -{ - return 0; -} - -static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev) -{ - return 0; -} - -static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev) -{ - -} - -static int qat_asym_dev_close(struct rte_cryptodev *dev) -{ - int i, ret; - - for (i = 0; i < dev->data->nb_queue_pairs; i++) { - ret = qat_asym_qp_release(dev, i); - if (ret < 0) - return ret; - } - - return 0; -} - -static void qat_asym_dev_info_get(struct rte_cryptodev *dev, - struct rte_cryptodev_info *info) -{ - struct qat_asym_dev_private *internals = dev->data->dev_private; - struct qat_pci_device *qat_dev = internals->qat_dev; - - if (info != NULL) { - info->max_nb_queue_pairs = qat_qps_per_service(qat_dev, - QAT_SERVICE_ASYMMETRIC); - info->feature_flags = dev->feature_flags; - info->capabilities = internals->qat_dev_capabilities; - info->driver_id = qat_asym_driver_id; - /* No limit of number of sessions */ - info->sym.max_nb_sessions = 0; - } -} - -static void qat_asym_stats_get(struct rte_cryptodev *dev, - struct rte_cryptodev_stats *stats) -{ - struct qat_common_stats qat_stats = {0}; - struct qat_asym_dev_private *qat_priv; - - if (stats == NULL || dev == NULL) { - QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev); - return; - } - qat_priv = dev->data->dev_private; - - qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC); - stats->enqueued_count = qat_stats.enqueued_count; - stats->dequeued_count = qat_stats.dequeued_count; - stats->enqueue_err_count = qat_stats.enqueue_err_count; - stats->dequeue_err_count = qat_stats.dequeue_err_count; -} - -static void qat_asym_stats_reset(struct rte_cryptodev *dev) +void +qat_asym_init_op_cookie(void *op_cookie) { - struct qat_asym_dev_private *qat_priv; + int j; + struct qat_asym_op_cookie *cookie = op_cookie; - if (dev == NULL) { - QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev); - return; - } - qat_priv = dev->data->dev_private; + cookie->input_addr = rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + input_params_ptrs); - qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC); -} - -static int qat_asym_qp_release(struct rte_cryptodev *dev, - uint16_t queue_pair_id) -{ - struct qat_asym_dev_private *qat_private = dev->data->dev_private; - enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen; - - QAT_LOG(DEBUG, "Release asym qp %u on device %d", - queue_pair_id, dev->data->dev_id); - - qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id] - = NULL; - - return qat_qp_release(qat_dev_gen, (struct qat_qp **) - &(dev->data->queue_pairs[queue_pair_id])); -} + cookie->output_addr = rte_mempool_virt2iova(cookie) + + offsetof(struct qat_asym_op_cookie, + output_params_ptrs); -static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, - const struct rte_cryptodev_qp_conf *qp_conf, - int socket_id) -{ - struct qat_qp_config qat_qp_conf; - struct qat_qp *qp; - int ret = 0; - uint32_t i; - - struct qat_qp **qp_addr = - (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); - struct qat_asym_dev_private *qat_private = dev->data->dev_private; - struct qat_pci_device *qat_dev = qat_private->qat_dev; - const struct 
qat_qp_hw_data *asym_hw_qps = - qat_gen_config[qat_private->qat_dev->qat_dev_gen] - .qp_hw_data[QAT_SERVICE_ASYMMETRIC]; - const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id; - - /* If qp is already in use free ring memory and qp metadata. */ - if (*qp_addr != NULL) { - ret = qat_asym_qp_release(dev, qp_id); - if (ret < 0) - return ret; - } - if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) { - QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); - return -EINVAL; - } - - qat_qp_conf.hw = qp_hw_data; - qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie); - qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors; - qat_qp_conf.socket_id = socket_id; - qat_qp_conf.service_str = "asym"; - - ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf); - if (ret != 0) - return ret; - - /* store a link to the qp in the qat_pci_device */ - qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id] - = *qp_addr; - - qp = (struct qat_qp *)*qp_addr; - qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold; - - for (i = 0; i < qp->nb_descriptors; i++) { - int j; - - struct qat_asym_op_cookie __rte_unused *cookie = - qp->op_cookies[i]; - cookie->input_addr = rte_mempool_virt2iova(cookie) + + for (j = 0; j < 8; j++) { + cookie->input_params_ptrs[j] = + rte_mempool_virt2iova(cookie) + offsetof(struct qat_asym_op_cookie, - input_params_ptrs); - - cookie->output_addr = rte_mempool_virt2iova(cookie) + + input_array[j]); + cookie->output_params_ptrs[j] = + rte_mempool_virt2iova(cookie) + offsetof(struct qat_asym_op_cookie, - output_params_ptrs); - - for (j = 0; j < 8; j++) { - cookie->input_params_ptrs[j] = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_asym_op_cookie, - input_array[j]); - cookie->output_params_ptrs[j] = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_asym_op_cookie, - output_array[j]); - } + output_array[j]); } - - return ret; } -struct rte_cryptodev_ops crypto_qat_ops = { +static struct rte_cryptodev_ops crypto_qat_ops = { /* Device related operations */ - .dev_configure = qat_asym_dev_config, - .dev_start = qat_asym_dev_start, - .dev_stop = qat_asym_dev_stop, - .dev_close = qat_asym_dev_close, - .dev_infos_get = qat_asym_dev_info_get, + .dev_configure = qat_cryptodev_config, + .dev_start = qat_cryptodev_start, + .dev_stop = qat_cryptodev_stop, + .dev_close = qat_cryptodev_close, + .dev_infos_get = qat_cryptodev_info_get, - .stats_get = qat_asym_stats_get, - .stats_reset = qat_asym_stats_reset, - .queue_pair_setup = qat_asym_qp_setup, - .queue_pair_release = qat_asym_qp_release, + .stats_get = qat_cryptodev_stats_get, + .stats_reset = qat_cryptodev_stats_reset, + .queue_pair_setup = qat_cryptodev_qp_setup, + .queue_pair_release = qat_cryptodev_qp_release, /* Crypto related operations */ .asym_session_get_size = qat_asym_session_get_private_size, @@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, struct qat_device_info *qat_dev_instance = &qat_pci_devs[qat_pci_dev->qat_dev_id]; struct rte_cryptodev_pmd_init_params init_params = { - .name = "", - .socket_id = - qat_dev_instance->pci_dev->device.numa_node, - .private_data_size = sizeof(struct qat_asym_dev_private) + .name = "", + .socket_id = qat_dev_instance->pci_dev->device.numa_node, + .private_data_size = sizeof(struct qat_cryptodev_private) }; char name[RTE_CRYPTODEV_NAME_MAX_LEN]; char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; struct rte_cryptodev *cryptodev; - struct qat_asym_dev_private *internals; + struct 
qat_cryptodev_private *internals; if (qat_pci_dev->qat_dev_gen == QAT_GEN4) { QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx"); @@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, internals = cryptodev->data->dev_private; internals->qat_dev = qat_pci_dev; - internals->asym_dev_id = cryptodev->data->dev_id; + internals->dev_id = cryptodev->data->dev_id; internals->qat_dev_capabilities = qat_gen1_asym_capabilities; + internals->service_type = QAT_SERVICE_ASYMMETRIC; internals->capa_mz = rte_memzone_lookup(capa_memz_name); if (internals->capa_mz == NULL) { @@ -347,7 +203,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, rte_cryptodev_pmd_probing_finish(cryptodev); QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d", - cryptodev->data->name, internals->asym_dev_id); + cryptodev->data->name, internals->dev_id); return 0; } @@ -365,7 +221,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev) /* free crypto device */ cryptodev = rte_cryptodev_pmd_get_dev( - qat_pci_dev->asym_dev->asym_dev_id); + qat_pci_dev->asym_dev->dev_id); rte_cryptodev_pmd_destroy(cryptodev); qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL; qat_pci_dev->asym_dev = NULL; diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h index 3b5abddec8..c493796511 100644 --- a/drivers/crypto/qat/qat_asym_pmd.h +++ b/drivers/crypto/qat/qat_asym_pmd.h @@ -15,21 +15,8 @@ extern uint8_t qat_asym_driver_id; -/** private data structure for a QAT device. - * This QAT device is a device offering only asymmetric crypto service, - * there can be one of these on each qat_pci_device (VF). - */ -struct qat_asym_dev_private { - struct qat_pci_device *qat_dev; - /**< The qat pci device hosting the service */ - uint8_t asym_dev_id; - /**< Device instance for this rte_cryptodev */ - const struct rte_cryptodev_capabilities *qat_dev_capabilities; - /* QAT device asymmetric crypto capabilities */ - const struct rte_memzone *capa_mz; - /* Shared memzone for storing capabilities */ - uint16_t min_enq_burst_threshold; -}; +void +qat_asym_init_op_cookie(void *op_cookie); uint16_t qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, @@ -39,16 +26,4 @@ uint16_t qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops); -int qat_asym_session_configure(struct rte_cryptodev *dev, - struct rte_crypto_asym_xform *xform, - struct rte_cryptodev_asym_session *sess, - struct rte_mempool *mempool); - -int -qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, - struct qat_dev_cmd_param *qat_dev_cmd_param); - -int -qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev); - #endif /* _QAT_ASYM_PMD_H_ */ diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c new file mode 100644 index 0000000000..01d2439b93 --- /dev/null +++ b/drivers/crypto/qat/qat_crypto.c @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + +#include "qat_device.h" +#include "qat_qp.h" +#include "qat_crypto.h" +#include "qat_sym.h" +#include "qat_asym.h" + +int +qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev, + __rte_unused struct rte_cryptodev_config *config) +{ + return 0; +} + +int +qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev) +{ + return 0; +} + +void +qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev) +{ +} + +int +qat_cryptodev_close(struct rte_cryptodev *dev) +{ + int i, ret; + + for (i = 0; i < dev->data->nb_queue_pairs; i++) 
{ + ret = dev->dev_ops->queue_pair_release(dev, i); + if (ret < 0) + return ret; + } + + return 0; +} + +void +qat_cryptodev_info_get(struct rte_cryptodev *dev, + struct rte_cryptodev_info *info) +{ + struct qat_cryptodev_private *qat_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_private->qat_dev; + enum qat_service_type service_type = qat_private->service_type; + + if (info != NULL) { + info->max_nb_queue_pairs = + qat_qps_per_service(qat_dev, service_type); + info->feature_flags = dev->feature_flags; + info->capabilities = qat_private->qat_dev_capabilities; + info->driver_id = qat_sym_driver_id; + /* No limit of number of sessions */ + info->sym.max_nb_sessions = 0; + } +} + +void +qat_cryptodev_stats_get(struct rte_cryptodev *dev, + struct rte_cryptodev_stats *stats) +{ + struct qat_common_stats qat_stats = {0}; + struct qat_cryptodev_private *qat_priv; + + if (stats == NULL || dev == NULL) { + QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev); + return; + } + qat_priv = dev->data->dev_private; + + qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type); + stats->enqueued_count = qat_stats.enqueued_count; + stats->dequeued_count = qat_stats.dequeued_count; + stats->enqueue_err_count = qat_stats.enqueue_err_count; + stats->dequeue_err_count = qat_stats.dequeue_err_count; +} + +void +qat_cryptodev_stats_reset(struct rte_cryptodev *dev) +{ + struct qat_cryptodev_private *qat_priv; + + if (dev == NULL) { + QAT_LOG(ERR, "invalid cryptodev ptr %p", dev); + return; + } + qat_priv = dev->data->dev_private; + + qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type); + +} + +int +qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) +{ + struct qat_cryptodev_private *qat_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_private->qat_dev; + enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen; + enum qat_service_type service_type = qat_private->service_type; + + QAT_LOG(DEBUG, "Release %s qp %u on device %d", + qat_service_get_str(service_type), + queue_pair_id, dev->data->dev_id); + + qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL; + + return qat_qp_release(qat_dev_gen, (struct qat_qp **) + &(dev->data->queue_pairs[queue_pair_id])); +} + +int +qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, int socket_id) +{ + struct qat_qp **qp_addr = + (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); + struct qat_cryptodev_private *qat_private = dev->data->dev_private; + struct qat_pci_device *qat_dev = qat_private->qat_dev; + enum qat_service_type service_type = qat_private->service_type; + struct qat_qp_config qat_qp_conf = {0}; + struct qat_qp *qp; + int ret = 0; + uint32_t i; + + /* If qp is already in use free ring memory and qp metadata. */ + if (*qp_addr != NULL) { + ret = dev->dev_ops->queue_pair_release(dev, qp_id); + if (ret < 0) + return -EBUSY; + } + if (qp_id >= qat_qps_per_service(qat_dev, service_type)) { + QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); + return -EINVAL; + } + + qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type, + qp_id); + if (qat_qp_conf.hw == NULL) { + QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); + return -EINVAL; + } + + qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ? 
+ sizeof(struct qat_sym_op_cookie) : + sizeof(struct qat_asym_op_cookie); + qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors; + qat_qp_conf.socket_id = socket_id; + qat_qp_conf.service_str = qat_service_get_str(service_type); + + ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf); + if (ret != 0) + return ret; + + /* store a link to the qp in the qat_pci_device */ + qat_dev->qps_in_use[service_type][qp_id] = *qp_addr; + + qp = (struct qat_qp *)*qp_addr; + qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold; + + for (i = 0; i < qp->nb_descriptors; i++) { + if (service_type == QAT_SERVICE_SYMMETRIC) + qat_sym_init_op_cookie(qp->op_cookies[i]); + else + qat_asym_init_op_cookie(qp->op_cookies[i]); + } + + return ret; +} diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h new file mode 100644 index 0000000000..3803fef19d --- /dev/null +++ b/drivers/crypto/qat/qat_crypto.h @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 Intel Corporation + */ + + #ifndef _QAT_CRYPTO_H_ + #define _QAT_CRYPTO_H_ + +#include +#ifdef RTE_LIB_SECURITY +#include +#endif + +#include "qat_device.h" + +extern uint8_t qat_sym_driver_id; +extern uint8_t qat_asym_driver_id; + +/** helper macro to set cryptodev capability range **/ +#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i} + +#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0} +/** helper macro to set cryptodev capability value **/ +#define CAP_SET(n, v) .n = v + +/** private data structure for a QAT device. + * there can be one of these on each qat_pci_device (VF). + */ +struct qat_cryptodev_private { + struct qat_pci_device *qat_dev; + /**< The qat pci device hosting the service */ + uint8_t dev_id; + /**< Device instance for this rte_cryptodev */ + const struct rte_cryptodev_capabilities *qat_dev_capabilities; + /* QAT device symmetric crypto capabilities */ + const struct rte_memzone *capa_mz; + /* Shared memzone for storing capabilities */ + uint16_t min_enq_burst_threshold; + uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */ + enum qat_service_type service_type; +}; + +struct qat_capabilities_info { + struct rte_cryptodev_capabilities *data; + uint64_t size; +}; + +int +qat_cryptodev_config(struct rte_cryptodev *dev, + struct rte_cryptodev_config *config); + +int +qat_cryptodev_start(struct rte_cryptodev *dev); + +void +qat_cryptodev_stop(struct rte_cryptodev *dev); + +int +qat_cryptodev_close(struct rte_cryptodev *dev); + +void +qat_cryptodev_info_get(struct rte_cryptodev *dev, + struct rte_cryptodev_info *info); + +void +qat_cryptodev_stats_get(struct rte_cryptodev *dev, + struct rte_cryptodev_stats *stats); + +void +qat_cryptodev_stats_reset(struct rte_cryptodev *dev); + +int +qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, int socket_id); + +int +qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id); + +#endif diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index 5b8ee4bee6..dec877cfab 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -13,6 +13,7 @@ #endif #include "qat_logs.h" +#include "qat_crypto.h" #include "qat_sym.h" #include "qat_sym_session.h" #include "qat_sym_pmd.h" @@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = { }; #endif -static int qat_sym_qp_release(struct rte_cryptodev *dev, - uint16_t queue_pair_id); - 
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev, - __rte_unused struct rte_cryptodev_config *config) -{ - return 0; -} - -static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev) -{ - return 0; -} - -static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev) -{ - return; -} - -static int qat_sym_dev_close(struct rte_cryptodev *dev) -{ - int i, ret; - - for (i = 0; i < dev->data->nb_queue_pairs; i++) { - ret = qat_sym_qp_release(dev, i); - if (ret < 0) - return ret; - } - - return 0; -} - -static void qat_sym_dev_info_get(struct rte_cryptodev *dev, - struct rte_cryptodev_info *info) -{ - struct qat_sym_dev_private *internals = dev->data->dev_private; - struct qat_pci_device *qat_dev = internals->qat_dev; - - if (info != NULL) { - info->max_nb_queue_pairs = - qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC); - info->feature_flags = dev->feature_flags; - info->capabilities = internals->qat_dev_capabilities; - info->driver_id = qat_sym_driver_id; - /* No limit of number of sessions */ - info->sym.max_nb_sessions = 0; - } -} - -static void qat_sym_stats_get(struct rte_cryptodev *dev, - struct rte_cryptodev_stats *stats) -{ - struct qat_common_stats qat_stats = {0}; - struct qat_sym_dev_private *qat_priv; - - if (stats == NULL || dev == NULL) { - QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev); - return; - } - qat_priv = dev->data->dev_private; - - qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC); - stats->enqueued_count = qat_stats.enqueued_count; - stats->dequeued_count = qat_stats.dequeued_count; - stats->enqueue_err_count = qat_stats.enqueue_err_count; - stats->dequeue_err_count = qat_stats.dequeue_err_count; -} - -static void qat_sym_stats_reset(struct rte_cryptodev *dev) -{ - struct qat_sym_dev_private *qat_priv; - - if (dev == NULL) { - QAT_LOG(ERR, "invalid cryptodev ptr %p", dev); - return; - } - qat_priv = dev->data->dev_private; - - qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC); - -} - -static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) -{ - struct qat_sym_dev_private *qat_private = dev->data->dev_private; - enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen; - - QAT_LOG(DEBUG, "Release sym qp %u on device %d", - queue_pair_id, dev->data->dev_id); - - qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id] - = NULL; - - return qat_qp_release(qat_dev_gen, (struct qat_qp **) - &(dev->data->queue_pairs[queue_pair_id])); -} - -static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, - const struct rte_cryptodev_qp_conf *qp_conf, - int socket_id) -{ - struct qat_qp *qp; - int ret = 0; - uint32_t i; - struct qat_qp_config qat_qp_conf; - struct qat_qp **qp_addr = - (struct qat_qp **)&(dev->data->queue_pairs[qp_id]); - struct qat_sym_dev_private *qat_private = dev->data->dev_private; - struct qat_pci_device *qat_dev = qat_private->qat_dev; - - /* If qp is already in use free ring memory and qp metadata. 
*/ - if (*qp_addr != NULL) { - ret = qat_sym_qp_release(dev, qp_id); - if (ret < 0) - return ret; - } - if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) { - QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); - return -EINVAL; - } - - qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC, - qp_id); - if (qat_qp_conf.hw == NULL) { - QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id); - return -EINVAL; - } - - qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie); - qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors; - qat_qp_conf.socket_id = socket_id; - qat_qp_conf.service_str = "sym"; - - ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf); - if (ret != 0) - return ret; - - /* store a link to the qp in the qat_pci_device */ - qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id] - = *qp_addr; - - qp = (struct qat_qp *)*qp_addr; - qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold; - - for (i = 0; i < qp->nb_descriptors; i++) { - - struct qat_sym_op_cookie *cookie = - qp->op_cookies[i]; - - cookie->qat_sgl_src_phys_addr = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_sym_op_cookie, - qat_sgl_src); - - cookie->qat_sgl_dst_phys_addr = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_sym_op_cookie, - qat_sgl_dst); - - cookie->opt.spc_gmac.cd_phys_addr = - rte_mempool_virt2iova(cookie) + - offsetof(struct qat_sym_op_cookie, - opt.spc_gmac.cd_cipher); - - } - - /* Get fw version from QAT (GEN2), skip if we've got it already */ - if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities - & QAT_SYM_CAP_VALID)) { - ret = qat_cq_get_fw_version(qp); - - if (ret < 0) { - qat_sym_qp_release(dev, qp_id); - return ret; - } - - if (ret != 0) - QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d", - (ret >> 24) & 0xff, - (ret >> 16) & 0xff, - (ret >> 8) & 0xff); - else - QAT_LOG(DEBUG, "unknown QAT firmware version"); - - /* set capabilities based on the fw version */ - qat_private->internal_capabilities = QAT_SYM_CAP_VALID | - ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? 
- QAT_SYM_CAP_MIXED_CRYPTO : 0); - ret = 0; - } - - return ret; -} - static struct rte_cryptodev_ops crypto_qat_ops = { /* Device related operations */ - .dev_configure = qat_sym_dev_config, - .dev_start = qat_sym_dev_start, - .dev_stop = qat_sym_dev_stop, - .dev_close = qat_sym_dev_close, - .dev_infos_get = qat_sym_dev_info_get, + .dev_configure = qat_cryptodev_config, + .dev_start = qat_cryptodev_start, + .dev_stop = qat_cryptodev_stop, + .dev_close = qat_cryptodev_close, + .dev_infos_get = qat_cryptodev_info_get, - .stats_get = qat_sym_stats_get, - .stats_reset = qat_sym_stats_reset, - .queue_pair_setup = qat_sym_qp_setup, - .queue_pair_release = qat_sym_qp_release, + .stats_get = qat_cryptodev_stats_get, + .stats_reset = qat_cryptodev_stats_reset, + .queue_pair_setup = qat_cryptodev_qp_setup, + .queue_pair_release = qat_cryptodev_qp_release, /* Crypto related operations */ .sym_session_get_size = qat_sym_session_get_private_size, @@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = { }; #endif +void +qat_sym_init_op_cookie(void *op_cookie) +{ + struct qat_sym_op_cookie *cookie = op_cookie; + + cookie->qat_sgl_src_phys_addr = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_sym_op_cookie, + qat_sgl_src); + + cookie->qat_sgl_dst_phys_addr = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_sym_op_cookie, + qat_sgl_dst); + + cookie->opt.spc_gmac.cd_phys_addr = + rte_mempool_virt2iova(cookie) + + offsetof(struct qat_sym_op_cookie, + opt.spc_gmac.cd_cipher); +} + static uint16_t qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops) @@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, &qat_pci_devs[qat_pci_dev->qat_dev_id]; struct rte_cryptodev_pmd_init_params init_params = { - .name = "", - .socket_id = - qat_dev_instance->pci_dev->device.numa_node, - .private_data_size = sizeof(struct qat_sym_dev_private) + .name = "", + .socket_id = qat_dev_instance->pci_dev->device.numa_node, + .private_data_size = sizeof(struct qat_cryptodev_private) }; char name[RTE_CRYPTODEV_NAME_MAX_LEN]; char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; struct rte_cryptodev *cryptodev; - struct qat_sym_dev_private *internals; + struct qat_cryptodev_private *internals; const struct rte_cryptodev_capabilities *capabilities; uint64_t capa_size; @@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, internals = cryptodev->data->dev_private; internals->qat_dev = qat_pci_dev; + internals->service_type = QAT_SERVICE_SYMMETRIC; - internals->sym_dev_id = cryptodev->data->dev_id; + internals->dev_id = cryptodev->data->dev_id; switch (qat_pci_dev->qat_dev_gen) { case QAT_GEN1: capabilities = qat_gen1_sym_capabilities; @@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, qat_pci_dev->sym_dev = internals; QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d", - cryptodev->data->name, internals->sym_dev_id); + cryptodev->data->name, internals->dev_id); rte_cryptodev_pmd_probing_finish(cryptodev); @@ -511,7 +339,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev) rte_memzone_free(qat_pci_dev->sym_dev->capa_mz); /* free crypto device */ - cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id); + cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id); #ifdef RTE_LIB_SECURITY rte_free(cryptodev->security_ctx); cryptodev->security_ctx = NULL; diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h index e0992cbe27..d49b732ca0 100644 
--- a/drivers/crypto/qat/qat_sym_pmd.h +++ b/drivers/crypto/qat/qat_sym_pmd.h @@ -14,6 +14,7 @@ #endif #include "qat_sym_capabilities.h" +#include "qat_crypto.h" #include "qat_device.h" /** Intel(R) QAT Symmetric Crypto PMD driver name */ @@ -25,23 +26,6 @@ extern uint8_t qat_sym_driver_id; -/** private data structure for a QAT device. - * This QAT device is a device offering only symmetric crypto service, - * there can be one of these on each qat_pci_device (VF). - */ -struct qat_sym_dev_private { - struct qat_pci_device *qat_dev; - /**< The qat pci device hosting the service */ - uint8_t sym_dev_id; - /**< Device instance for this rte_cryptodev */ - const struct rte_cryptodev_capabilities *qat_dev_capabilities; - /* QAT device symmetric crypto capabilities */ - const struct rte_memzone *capa_mz; - /* Shared memzone for storing capabilities */ - uint16_t min_enq_burst_threshold; - uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */ -}; - int qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, struct qat_dev_cmd_param *qat_dev_cmd_param); @@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, int qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev); +void +qat_sym_init_op_cookie(void *op_cookie); + #endif #endif /* _QAT_SYM_PMD_H_ */ diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 3f2f6736fc..8ca475ca8b 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo, static int qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo, - struct qat_sym_dev_private *internals) + struct qat_cryptodev_private *internals) { int i = 0; const struct rte_cryptodev_capabilities *capability; @@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo, static int qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo, - struct qat_sym_dev_private *internals) + struct qat_cryptodev_private *internals) { int i = 0; const struct rte_cryptodev_capabilities *capability; @@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev, struct rte_crypto_sym_xform *xform, struct qat_sym_session *session) { - struct qat_sym_dev_private *internals = dev->data->dev_private; + struct qat_cryptodev_private *internals = dev->data->dev_private; struct rte_crypto_cipher_xform *cipher_xform = NULL; enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; @@ -532,7 +532,8 @@ static void qat_sym_session_handle_mixed(const struct rte_cryptodev *dev, struct qat_sym_session *session) { - const struct qat_sym_dev_private *qat_private = dev->data->dev_private; + const struct qat_cryptodev_private *qat_private = + dev->data->dev_private; enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities & QAT_SYM_CAP_MIXED_CRYPTO) ? 
QAT_GEN2 : QAT_GEN3; @@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, struct rte_crypto_sym_xform *xform, void *session_private) { struct qat_sym_session *session = session_private; - struct qat_sym_dev_private *internals = dev->data->dev_private; + struct qat_cryptodev_private *internals = dev->data->dev_private; enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; int ret; int qat_cmd_id; @@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, struct qat_sym_session *session) { struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform); - struct qat_sym_dev_private *internals = dev->data->dev_private; + struct qat_cryptodev_private *internals = dev->data->dev_private; const uint8_t *key_data = auth_xform->key.data; uint8_t key_length = auth_xform->key.length; enum qat_device_gen qat_dev_gen = @@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev, { struct rte_crypto_aead_xform *aead_xform = &xform->aead; enum rte_crypto_auth_operation crypto_operation; - struct qat_sym_dev_private *internals = + struct qat_cryptodev_private *internals = dev->data->dev_private; enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;

From patchwork Fri Oct 22 17:03:53 2021
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 102691
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: gakhil@marvell.com, Fan Zhang, Arek Kusztal, Kai Ji
Date: Fri, 22 Oct 2021 18:03:53 +0100
Message-Id: <20211022170354.13503-9-roy.fan.zhang@intel.com>
In-Reply-To: <20211022170354.13503-1-roy.fan.zhang@intel.com>
References: <20211014161137.1405168-1-roy.fan.zhang@intel.com> <20211022170354.13503-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and function

This patch adds the symmetric and asymmetric crypto data structures and function prototypes for the different QAT generations.
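As an illustration of the dispatch pattern this series introduces, the following minimal, self-contained sketch shows a per-generation ops table that is indexed by device generation and filled in at registration time. It is not driver source: the types and the names gen_ops, gen1_flags and gen1_caps are simplified stand-ins for the qat_sym_gen_dev_ops[]/qat_asym_gen_dev_ops[] arrays, which the driver populates from RTE_INIT() constructors.

/* Standalone sketch of the per-generation dispatch table (stand-in types). */
#include <stdio.h>
#include <inttypes.h>

enum qat_device_gen { QAT_GEN1, QAT_GEN2, QAT_GEN3, QAT_GEN4, QAT_N_GENS };

/* Mirrors the shape of struct qat_crypto_gen_dev_ops added in qat_crypto.h. */
struct gen_dev_ops {
	uint64_t (*get_feature_flags)(void);
	const char *(*get_capabilities)(void);
};

static struct gen_dev_ops gen_ops[QAT_N_GENS];

static uint64_t gen1_flags(void) { return UINT64_C(0x3); }
static const char *gen1_caps(void) { return "GEN1 capability table"; }

/* The driver performs this step in an RTE_INIT() constructor, one per
 * generation file. */
static void register_gen1(void)
{
	gen_ops[QAT_GEN1].get_feature_flags = gen1_flags;
	gen_ops[QAT_GEN1].get_capabilities = gen1_caps;
}

int main(void)
{
	enum qat_device_gen gen = QAT_GEN1; /* normally taken from the PCI device */

	register_gen1();
	/* An unpopulated slot means the service is absent on that generation,
	 * which is the -EFAULT path in the dev_create() functions. */
	if (gen_ops[gen].get_capabilities == NULL) {
		fprintf(stderr, "generation %d not supported\n", (int)gen);
		return 1;
	}
	printf("flags=0x%" PRIx64 " caps=%s\n",
			gen_ops[gen].get_feature_flags(),
			gen_ops[gen].get_capabilities());
	return 0;
}

The payoff of this layout is that common code never switches on the generation: supporting a new device means adding one file that registers its own ops and capability getters.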
Signed-off-by: Arek Kusztal Signed-off-by: Fan Zhang Signed-off-by: Kai Ji --- drivers/crypto/qat/README | 7 - drivers/crypto/qat/meson.build | 26 - drivers/crypto/qat/qat_asym_capabilities.h | 63 - drivers/crypto/qat/qat_asym_pmd.c | 60 +- drivers/crypto/qat/qat_asym_pmd.h | 25 + drivers/crypto/qat/qat_crypto.h | 16 + drivers/crypto/qat/qat_sym_capabilities.h | 1248 -------------------- drivers/crypto/qat/qat_sym_pmd.c | 186 +-- drivers/crypto/qat/qat_sym_pmd.h | 57 +- 9 files changed, 165 insertions(+), 1523 deletions(-) delete mode 100644 drivers/crypto/qat/README delete mode 100644 drivers/crypto/qat/meson.build delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README deleted file mode 100644 index 444ae605f0..0000000000 --- a/drivers/crypto/qat/README +++ /dev/null @@ -1,7 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2015-2018 Intel Corporation - -Makefile for crypto QAT PMD is in common/qat directory. -The build for the QAT driver is done from there as only one library is built for the -whole QAT pci device and that library includes all the services (crypto, compression) -which are enabled on the device. diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build deleted file mode 100644 index b3b2d17258..0000000000 --- a/drivers/crypto/qat/meson.build +++ /dev/null @@ -1,26 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2017-2018 Intel Corporation - -# this does not build the QAT driver, instead that is done in the compression -# driver which comes later. Here we just add our sources files to the list -build = false -reason = '' # sentinal value to suppress printout -dep = dependency('libcrypto', required: false, method: 'pkg-config') -qat_includes += include_directories('.') -qat_deps += 'cryptodev' -qat_deps += 'net' -qat_deps += 'security' -if dep.found() - # Add our sources files to the list - qat_sources += files( - 'qat_asym.c', - 'qat_asym_pmd.c', - 'qat_sym.c', - 'qat_sym_hw_dp.c', - 'qat_sym_pmd.c', - 'qat_sym_session.c', - ) - qat_ext_deps += dep - qat_cflags += '-DBUILD_QAT_SYM' - qat_cflags += '-DBUILD_QAT_ASYM' -endif diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h deleted file mode 100644 index 523b4da6d3..0000000000 --- a/drivers/crypto/qat/qat_asym_capabilities.h +++ /dev/null @@ -1,63 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2019 Intel Corporation - */ - -#ifndef _QAT_ASYM_CAPABILITIES_H_ -#define _QAT_ASYM_CAPABILITIES_H_ - -#define QAT_BASE_GEN1_ASYM_CAPABILITIES \ - { /* modexp */ \ - .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ - {.asym = { \ - .xform_capa = { \ - .xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX, \ - .op_types = 0, \ - { \ - .modlen = { \ - .min = 1, \ - .max = 512, \ - .increment = 1 \ - }, } \ - } \ - }, \ - } \ - }, \ - { /* modinv */ \ - .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ - {.asym = { \ - .xform_capa = { \ - .xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV, \ - .op_types = 0, \ - { \ - .modlen = { \ - .min = 1, \ - .max = 512, \ - .increment = 1 \ - }, } \ - } \ - }, \ - } \ - }, \ - { /* RSA */ \ - .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ - {.asym = { \ - .xform_capa = { \ - .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, \ - .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \ - (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \ - (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \ - (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), \ - { \ - 
.modlen = { \ - /* min length is based on openssl rsa keygen */ \ - .min = 64, \ - /* value 0 symbolizes no limit on max length */ \ - .max = 512, \ - .increment = 64 \ - }, } \ - } \ - }, \ - } \ - } \ - -#endif /* _QAT_ASYM_CAPABILITIES_H_ */ diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c index 042f39ddcc..284b8096fe 100644 --- a/drivers/crypto/qat/qat_asym_pmd.c +++ b/drivers/crypto/qat/qat_asym_pmd.c @@ -9,15 +9,9 @@ #include "qat_crypto.h" #include "qat_asym.h" #include "qat_asym_pmd.h" -#include "qat_sym_capabilities.h" -#include "qat_asym_capabilities.h" uint8_t qat_asym_driver_id; - -static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = { - QAT_BASE_GEN1_ASYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; +struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS]; void qat_asym_init_op_cookie(void *op_cookie) @@ -101,19 +95,22 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, .socket_id = qat_dev_instance->pci_dev->device.numa_node, .private_data_size = sizeof(struct qat_cryptodev_private) }; + struct qat_capabilities_info capa_info; + const struct rte_cryptodev_capabilities *capabilities; + const struct qat_crypto_gen_dev_ops *gen_dev_ops = + &qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen]; char name[RTE_CRYPTODEV_NAME_MAX_LEN]; char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; struct rte_cryptodev *cryptodev; struct qat_cryptodev_private *internals; + uint64_t capa_size; - if (qat_pci_dev->qat_dev_gen == QAT_GEN4) { - QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx"); - return -EFAULT; - } - if (qat_pci_dev->qat_dev_gen == QAT_GEN3) { - QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx"); + if (gen_dev_ops->cryptodev_ops == NULL) { + QAT_LOG(ERR, "Device %s does not support asymmetric crypto", + name); return -EFAULT; } + snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", qat_pci_dev->name, "asym"); QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name); @@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst; cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst; - cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | - RTE_CRYPTODEV_FF_HW_ACCELERATED | - RTE_CRYPTODEV_FF_ASYM_SESSIONLESS | - RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP | - RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT; + + cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev, internals = cryptodev->data->dev_private; internals->qat_dev = qat_pci_dev; internals->dev_id = cryptodev->data->dev_id; - internals->qat_dev_capabilities = qat_gen1_asym_capabilities; internals->service_type = QAT_SERVICE_ASYMMETRIC; + capa_info = gen_dev_ops->get_capabilities(qat_pci_dev); + capabilities = capa_info.data; + capa_size = capa_info.size; + internals->capa_mz = rte_memzone_lookup(capa_memz_name); if (internals->capa_mz == NULL) { internals->capa_mz = rte_memzone_reserve(capa_memz_name, - sizeof(qat_gen1_asym_capabilities), - rte_socket_id(), 0); - } - if (internals->capa_mz == NULL) { - QAT_LOG(DEBUG, - "Error allocating memzone for capabilities, destroying PMD for %s", - name); - rte_cryptodev_pmd_destroy(cryptodev); - memset(&qat_dev_instance->asym_rte_dev, 0, - sizeof(qat_dev_instance->asym_rte_dev)); - return -EFAULT; + capa_size, rte_socket_id(), 0); + if (internals->capa_mz == NULL) { 
+ QAT_LOG(DEBUG, + "Error allocating memzone for capabilities, " + "destroying PMD for %s", + name); + rte_cryptodev_pmd_destroy(cryptodev); + memset(&qat_dev_instance->asym_rte_dev, 0, + sizeof(qat_dev_instance->asym_rte_dev)); + return -EFAULT; + } } - memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities, - sizeof(qat_gen1_asym_capabilities)); + memcpy(internals->capa_mz->addr, capabilities, capa_size); internals->qat_dev_capabilities = internals->capa_mz->addr; while (1) { diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h index c493796511..fd6b406248 100644 --- a/drivers/crypto/qat/qat_asym_pmd.h +++ b/drivers/crypto/qat/qat_asym_pmd.h @@ -7,14 +7,39 @@ #define _QAT_ASYM_PMD_H_ #include +#include "qat_crypto.h" #include "qat_device.h" /** Intel(R) QAT Asymmetric Crypto PMD driver name */ #define CRYPTODEV_NAME_QAT_ASYM_PMD crypto_qat_asym +/** + * Helper macro to add an asym capability + */ +#define QAT_ASYM_CAP(n, o, l, r, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \ + {.asym = { \ + .xform_capa = { \ + .xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\ + .op_types = o, \ + { \ + .modlen = { \ + .min = l, \ + .max = r, \ + .increment = i \ + }, } \ + } \ + }, \ + } \ + } + extern uint8_t qat_asym_driver_id; +extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[]; + void qat_asym_init_op_cookie(void *op_cookie); diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h index 3803fef19d..0a8afb0b31 100644 --- a/drivers/crypto/qat/qat_crypto.h +++ b/drivers/crypto/qat/qat_crypto.h @@ -44,6 +44,22 @@ struct qat_capabilities_info { uint64_t size; }; +typedef struct qat_capabilities_info (*get_capabilities_info_t) + (struct qat_pci_device *qat_dev); + +typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev); + +typedef void * (*create_security_ctx_t)(void *cryptodev); + +struct qat_crypto_gen_dev_ops { + get_feature_flags_t get_feature_flags; + get_capabilities_info_t get_capabilities; + struct rte_cryptodev_ops *cryptodev_ops; +#ifdef RTE_LIB_SECURITY + create_security_ctx_t create_security_ctx; +#endif +}; + int qat_cryptodev_config(struct rte_cryptodev *dev, struct rte_cryptodev_config *config); diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h deleted file mode 100644 index cfb176ca94..0000000000 --- a/drivers/crypto/qat/qat_sym_capabilities.h +++ /dev/null @@ -1,1248 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2017-2019 Intel Corporation - */ - -#ifndef _QAT_SYM_CAPABILITIES_H_ -#define _QAT_SYM_CAPABILITIES_H_ - -#define QAT_BASE_GEN1_SYM_CAPABILITIES \ - { /* SHA1 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA1, \ - .block_size = 64, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 20, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA224 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA224, \ - .block_size = 64, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 28, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA256 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth 
= { \ - .algo = RTE_CRYPTO_AUTH_SHA256, \ - .block_size = 64, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 32, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA384 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA384, \ - .block_size = 128, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 48, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA512 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA512, \ - .block_size = 128, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA1 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, \ - .block_size = 64, \ - .key_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 20, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA224 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA224_HMAC, \ - .block_size = 64, \ - .key_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 28, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA256 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, \ - .block_size = 64, \ - .key_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 32, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA384 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA384_HMAC, \ - .block_size = 128, \ - .key_size = { \ - .min = 1, \ - .max = 128, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 48, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA512 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA512_HMAC, \ - .block_size = 128, \ - .key_size = { \ - .min = 1, \ - .max = 128, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* MD5 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_MD5_HMAC, \ - .block_size = 64, \ - .key_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 16, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* AES XCBC MAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = 
RTE_CRYPTO_AUTH_AES_XCBC_MAC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 12, \ - .max = 12, \ - .increment = 0 \ - }, \ - .aad_size = { 0 }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* AES CMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_AES_CMAC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 4, \ - .max = 16, \ - .increment = 4 \ - } \ - }, } \ - }, } \ - }, \ - { /* AES CCM */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ - {.aead = { \ - .algo = RTE_CRYPTO_AEAD_AES_CCM, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 4, \ - .max = 16, \ - .increment = 2 \ - }, \ - .aad_size = { \ - .min = 0, \ - .max = 224, \ - .increment = 1 \ - }, \ - .iv_size = { \ - .min = 7, \ - .max = 13, \ - .increment = 1 \ - }, \ - }, } \ - }, } \ - }, \ - { /* AES GCM */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ - {.aead = { \ - .algo = RTE_CRYPTO_AEAD_AES_GCM, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .digest_size = { \ - .min = 8, \ - .max = 16, \ - .increment = 4 \ - }, \ - .aad_size = { \ - .min = 0, \ - .max = 240, \ - .increment = 1 \ - }, \ - .iv_size = { \ - .min = 0, \ - .max = 12, \ - .increment = 12 \ - }, \ - }, } \ - }, } \ - }, \ - { /* AES GMAC (AUTH) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_AES_GMAC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .digest_size = { \ - .min = 8, \ - .max = 16, \ - .increment = 4 \ - }, \ - .iv_size = { \ - .min = 0, \ - .max = 12, \ - .increment = 12 \ - } \ - }, } \ - }, } \ - }, \ - { /* SNOW 3G (UIA2) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 4, \ - .max = 4, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* AES CBC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_CBC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* AES XTS */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_XTS, \ - .block_size = 16, \ - .key_size = { \ - .min = 32, \ - .max = 64, \ - .increment = 32 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* AES DOCSIS BPI */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\ - .block_size = 16, \ - .key_size = { \ - .min = 16, 
\ - .max = 32, \ - .increment = 16 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* SNOW 3G (UEA2) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* AES CTR */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_CTR, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* NULL (AUTH) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_NULL, \ - .block_size = 1, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .iv_size = { 0 } \ - }, }, \ - }, }, \ - }, \ - { /* NULL (CIPHER) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_NULL, \ - .block_size = 1, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - } \ - }, }, \ - }, } \ - }, \ - { /* KASUMI (F8) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_KASUMI_F8, \ - .block_size = 8, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 8, \ - .max = 8, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* KASUMI (F9) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_KASUMI_F9, \ - .block_size = 8, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 4, \ - .max = 4, \ - .increment = 0 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* 3DES CBC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_3DES_CBC, \ - .block_size = 8, \ - .key_size = { \ - .min = 8, \ - .max = 24, \ - .increment = 8 \ - }, \ - .iv_size = { \ - .min = 8, \ - .max = 8, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* 3DES CTR */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_3DES_CTR, \ - .block_size = 8, \ - .key_size = { \ - .min = 16, \ - .max = 24, \ - .increment = 8 \ - }, \ - .iv_size = { \ - .min = 8, \ - .max = 8, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* DES CBC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_DES_CBC, \ - .block_size = 8, \ - .key_size = { \ - .min = 8, \ - .max = 8, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 8, \ - .max = 8, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* DES DOCSISBPI */ \ - .op = 
RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\ - .block_size = 8, \ - .key_size = { \ - .min = 8, \ - .max = 8, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 8, \ - .max = 8, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - } - -#define QAT_EXTRA_GEN2_SYM_CAPABILITIES \ - { /* ZUC (EEA3) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* ZUC (EIA3) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_ZUC_EIA3, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 4, \ - .max = 4, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - } - -#define QAT_EXTRA_GEN3_SYM_CAPABILITIES \ - { /* Chacha20-Poly1305 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ - {.aead = { \ - .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \ - .block_size = 64, \ - .key_size = { \ - .min = 32, \ - .max = 32, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .aad_size = { \ - .min = 0, \ - .max = 240, \ - .increment = 1 \ - }, \ - .iv_size = { \ - .min = 12, \ - .max = 12, \ - .increment = 0 \ - }, \ - }, } \ - }, } \ - } - -#define QAT_BASE_GEN4_SYM_CAPABILITIES \ - { /* AES CBC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_CBC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* SHA1 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, \ - .block_size = 64, \ - .key_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 20, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA224 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA224_HMAC, \ - .block_size = 64, \ - .key_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 28, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA256 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, \ - .block_size = 64, \ - .key_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 32, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA384 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA384_HMAC, \ - 
.block_size = 128, \ - .key_size = { \ - .min = 1, \ - .max = 128, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 48, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA512 HMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA512_HMAC, \ - .block_size = 128, \ - .key_size = { \ - .min = 1, \ - .max = 128, \ - .increment = 1 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* AES XCBC MAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 12, \ - .max = 12, \ - .increment = 0 \ - }, \ - .aad_size = { 0 }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* AES CMAC */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_AES_CMAC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 4, \ - .max = 16, \ - .increment = 4 \ - } \ - }, } \ - }, } \ - }, \ - { /* AES DOCSIS BPI */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 16 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* NULL (AUTH) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_NULL, \ - .block_size = 1, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .iv_size = { 0 } \ - }, }, \ - }, }, \ - }, \ - { /* NULL (CIPHER) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_NULL, \ - .block_size = 1, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .iv_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - } \ - }, }, \ - }, } \ - }, \ - { /* SHA1 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA1, \ - .block_size = 64, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 20, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA224 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA224, \ - .block_size = 64, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 28, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA256 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA256, \ - .block_size = 64, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - 
.increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 32, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA384 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA384, \ - .block_size = 128, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 48, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* SHA512 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_SHA512, \ - .block_size = 128, \ - .key_size = { \ - .min = 0, \ - .max = 0, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 1, \ - .max = 64, \ - .increment = 1 \ - }, \ - .iv_size = { 0 } \ - }, } \ - }, } \ - }, \ - { /* AES CTR */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = RTE_CRYPTO_CIPHER_AES_CTR, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - }, \ - { /* AES GCM */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ - {.aead = { \ - .algo = RTE_CRYPTO_AEAD_AES_GCM, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .digest_size = { \ - .min = 8, \ - .max = 16, \ - .increment = 4 \ - }, \ - .aad_size = { \ - .min = 0, \ - .max = 240, \ - .increment = 1 \ - }, \ - .iv_size = { \ - .min = 0, \ - .max = 12, \ - .increment = 12 \ - }, \ - }, } \ - }, } \ - }, \ - { /* AES CCM */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ - {.aead = { \ - .algo = RTE_CRYPTO_AEAD_AES_CCM, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 4, \ - .max = 16, \ - .increment = 2 \ - }, \ - .aad_size = { \ - .min = 0, \ - .max = 224, \ - .increment = 1 \ - }, \ - .iv_size = { \ - .min = 7, \ - .max = 13, \ - .increment = 1 \ - }, \ - }, } \ - }, } \ - }, \ - { /* Chacha20-Poly1305 */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ - {.aead = { \ - .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \ - .block_size = 64, \ - .key_size = { \ - .min = 32, \ - .max = 32, \ - .increment = 0 \ - }, \ - .digest_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - }, \ - .aad_size = { \ - .min = 0, \ - .max = 240, \ - .increment = 1 \ - }, \ - .iv_size = { \ - .min = 12, \ - .max = 12, \ - .increment = 0 \ - }, \ - }, } \ - }, } \ - }, \ - { /* AES GMAC (AUTH) */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ - {.auth = { \ - .algo = RTE_CRYPTO_AUTH_AES_GMAC, \ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 8 \ - }, \ - .digest_size = { \ - .min = 8, \ - .max = 16, \ - .increment = 4 \ - }, \ - .iv_size = { \ - .min = 0, \ - .max = 12, \ - .increment = 12 \ - } \ - }, } \ - }, } \ - } \ - - - -#ifdef RTE_LIB_SECURITY -#define QAT_SECURITY_SYM_CAPABILITIES \ - { /* AES DOCSIS BPI */ \ - .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ - {.sym = { \ - .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ - {.cipher = { \ - .algo = 
RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\ - .block_size = 16, \ - .key_size = { \ - .min = 16, \ - .max = 32, \ - .increment = 16 \ - }, \ - .iv_size = { \ - .min = 16, \ - .max = 16, \ - .increment = 0 \ - } \ - }, } \ - }, } \ - } - -#define QAT_SECURITY_CAPABILITIES(sym) \ - [0] = { /* DOCSIS Uplink */ \ - .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ - .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ - .docsis = { \ - .direction = RTE_SECURITY_DOCSIS_UPLINK \ - }, \ - .crypto_capabilities = (sym) \ - }, \ - [1] = { /* DOCSIS Downlink */ \ - .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \ - .protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \ - .docsis = { \ - .direction = RTE_SECURITY_DOCSIS_DOWNLINK \ - }, \ - .crypto_capabilities = (sym) \ - } -#endif - -#endif /* _QAT_SYM_CAPABILITIES_H_ */ diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c index dec877cfab..b835245f17 100644 --- a/drivers/crypto/qat/qat_sym_pmd.c +++ b/drivers/crypto/qat/qat_sym_pmd.c @@ -22,85 +22,7 @@ uint8_t qat_sym_driver_id; -static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = { - QAT_BASE_GEN1_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = { - QAT_BASE_GEN1_SYM_CAPABILITIES, - QAT_EXTRA_GEN2_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = { - QAT_BASE_GEN1_SYM_CAPABILITIES, - QAT_EXTRA_GEN2_SYM_CAPABILITIES, - QAT_EXTRA_GEN3_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = { - QAT_BASE_GEN4_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -#ifdef RTE_LIB_SECURITY -static const struct rte_cryptodev_capabilities - qat_security_sym_capabilities[] = { - QAT_SECURITY_SYM_CAPABILITIES, - RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() -}; - -static const struct rte_security_capability qat_security_capabilities[] = { - QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities), - { - .action = RTE_SECURITY_ACTION_TYPE_NONE - } -}; -#endif - -static struct rte_cryptodev_ops crypto_qat_ops = { - - /* Device related operations */ - .dev_configure = qat_cryptodev_config, - .dev_start = qat_cryptodev_start, - .dev_stop = qat_cryptodev_stop, - .dev_close = qat_cryptodev_close, - .dev_infos_get = qat_cryptodev_info_get, - - .stats_get = qat_cryptodev_stats_get, - .stats_reset = qat_cryptodev_stats_reset, - .queue_pair_setup = qat_cryptodev_qp_setup, - .queue_pair_release = qat_cryptodev_qp_release, - - /* Crypto related operations */ - .sym_session_get_size = qat_sym_session_get_private_size, - .sym_session_configure = qat_sym_session_configure, - .sym_session_clear = qat_sym_session_clear, - - /* Raw data-path API related operations */ - .sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size, - .sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx, -}; - -#ifdef RTE_LIB_SECURITY -static const struct rte_security_capability * -qat_security_cap_get(void *device __rte_unused) -{ - return qat_security_capabilities; -} - -static struct rte_security_ops security_qat_ops = { - - .session_create = qat_security_session_create, - .session_update = NULL, - .session_stats_get = NULL, - .session_destroy = qat_security_session_destroy, - .set_pkt_metadata = NULL, - .capabilities_get = qat_security_cap_get -}; -#endif +struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS]; void 
qat_sym_init_op_cookie(void *op_cookie) @@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, int i = 0, ret = 0; struct qat_device_info *qat_dev_instance = &qat_pci_devs[qat_pci_dev->qat_dev_id]; - struct rte_cryptodev_pmd_init_params init_params = { .name = "", .socket_id = qat_dev_instance->pci_dev->device.numa_node, @@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN]; struct rte_cryptodev *cryptodev; struct qat_cryptodev_private *internals; + struct qat_capabilities_info capa_info; const struct rte_cryptodev_capabilities *capabilities; + const struct qat_crypto_gen_dev_ops *gen_dev_ops = + &qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen]; uint64_t capa_size; snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", qat_pci_dev->name, "sym"); QAT_LOG(DEBUG, "Creating QAT SYM device %s", name); + if (gen_dev_ops->cryptodev_ops == NULL) { + QAT_LOG(ERR, "Device %s does not support symmetric crypto", + name); + return -EFAULT; + } + /* * All processes must use same driver id so they can share sessions. * Store driver_id so we can validate that all processes have the same @@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, qat_dev_instance->sym_rte_dev.name = cryptodev->data->name; cryptodev->driver_id = qat_sym_driver_id; - cryptodev->dev_ops = &crypto_qat_ops; + cryptodev->dev_ops = gen_dev_ops->cryptodev_ops; cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst; cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst; - cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | - RTE_CRYPTODEV_FF_HW_ACCELERATED | - RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | - RTE_CRYPTODEV_FF_IN_PLACE_SGL | - RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | - RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | - RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | - RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | - RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED; - - if (qat_pci_dev->qat_dev_gen < QAT_GEN4) - cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP; + cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN, - "QAT_SYM_CAPA_GEN_%d", - qat_pci_dev->qat_dev_gen); - #ifdef RTE_LIB_SECURITY - struct rte_security_ctx *security_instance; - security_instance = rte_malloc("qat_sec", - sizeof(struct rte_security_ctx), - RTE_CACHE_LINE_SIZE); - if (security_instance == NULL) { - QAT_LOG(ERR, "rte_security_ctx memory alloc failed"); - ret = -ENOMEM; - goto error; - } + if (gen_dev_ops->create_security_ctx) { + cryptodev->security_ctx = + gen_dev_ops->create_security_ctx((void *)cryptodev); + if (cryptodev->security_ctx == NULL) { + QAT_LOG(ERR, "rte_security_ctx memory alloc failed"); + ret = -ENOMEM; + goto error; + } + + cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY; + QAT_LOG(INFO, "Device %s rte_security support enabled", name); + } else + QAT_LOG(INFO, "Device %s rte_security support disabled", name); - security_instance->device = (void *)cryptodev; - security_instance->ops = &security_qat_ops; - security_instance->sess_cnt = 0; - cryptodev->security_ctx = security_instance; - cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY; #endif + snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN, + "QAT_SYM_CAPA_GEN_%d", + qat_pci_dev->qat_dev_gen); internals = cryptodev->data->dev_private; internals->qat_dev = qat_pci_dev; internals->service_type = QAT_SERVICE_SYMMETRIC; - internals->dev_id = cryptodev->data->dev_id; - 
switch (qat_pci_dev->qat_dev_gen) { - case QAT_GEN1: - capabilities = qat_gen1_sym_capabilities; - capa_size = sizeof(qat_gen1_sym_capabilities); - break; - case QAT_GEN2: - capabilities = qat_gen2_sym_capabilities; - capa_size = sizeof(qat_gen2_sym_capabilities); - break; - case QAT_GEN3: - capabilities = qat_gen3_sym_capabilities; - capa_size = sizeof(qat_gen3_sym_capabilities); - break; - case QAT_GEN4: - capabilities = qat_gen4_sym_capabilities; - capa_size = sizeof(qat_gen4_sym_capabilities); - break; - default: - QAT_LOG(DEBUG, - "QAT gen %d capabilities unknown", - qat_pci_dev->qat_dev_gen); - ret = -(EINVAL); - goto error; - } + + capa_info = gen_dev_ops->get_capabilities(qat_pci_dev); + capabilities = capa_info.data; + capa_size = capa_info.size; internals->capa_mz = rte_memzone_lookup(capa_memz_name); if (internals->capa_mz == NULL) { internals->capa_mz = rte_memzone_reserve(capa_memz_name, - capa_size, - rte_socket_id(), 0); - } - if (internals->capa_mz == NULL) { - QAT_LOG(DEBUG, - "Error allocating memzone for capabilities, destroying " - "PMD for %s", - name); - ret = -EFAULT; - goto error; + capa_size, rte_socket_id(), 0); + if (internals->capa_mz == NULL) { + QAT_LOG(DEBUG, + "Error allocating capability memzone for %s", + name); + ret = -EFAULT; + goto error; + } } memcpy(internals->capa_mz->addr, capabilities, capa_size); diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h index d49b732ca0..0dc0c6f0d9 100644 --- a/drivers/crypto/qat/qat_sym_pmd.h +++ b/drivers/crypto/qat/qat_sym_pmd.h @@ -13,7 +13,6 @@ #include #endif -#include "qat_sym_capabilities.h" #include "qat_crypto.h" #include "qat_device.h" @@ -24,8 +23,64 @@ #define QAT_SYM_CAP_MIXED_CRYPTO (1 << 0) #define QAT_SYM_CAP_VALID (1 << 31) +/** + * Helper macros to add a sym capability + */ +#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_##n, \ + b, d \ + }, } \ + }, } \ + } + +#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, \ + {.auth = { \ + .algo = RTE_CRYPTO_AUTH_##n, \ + b, k, d, a, i \ + }, } \ + }, } \ + } + +#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, \ + {.aead = { \ + .algo = RTE_CRYPTO_AEAD_##n, \ + b, k, d, a, i \ + }, } \ + }, } \ + } + +#define QAT_SYM_CIPHER_CAP(n, b, k, i) \ + { \ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \ + {.sym = { \ + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \ + {.cipher = { \ + .algo = RTE_CRYPTO_CIPHER_##n, \ + b, k, i \ + }, } \ + }, } \ + } + extern uint8_t qat_sym_driver_id; +extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[]; + int qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, struct qat_dev_cmd_param *qat_dev_cmd_param);

From patchwork Fri Oct 22 17:03:54 2021
X-Patchwork-Submitter: Fan Zhang
X-Patchwork-Id: 102692
X-Patchwork-Delegate: gakhil@marvell.com
From: Fan Zhang
To: dev@dpdk.org
Cc: gakhil@marvell.com, Fan Zhang, Arek Kusztal, Kai Ji
Date: Fri, 22 Oct 2021 18:03:54 +0100
Message-Id: <20211022170354.13503-10-roy.fan.zhang@intel.com>
In-Reply-To: <20211022170354.13503-1-roy.fan.zhang@intel.com>
References: <20211014161137.1405168-1-roy.fan.zhang@intel.com> <20211022170354.13503-1-roy.fan.zhang@intel.com>
Subject: [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific implementation

This patch replaces the mixed QAT symmetric and asymmetric crypto implementation with separate files that carry either shared or generation-specific code for each QAT generation.
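To make the new capability notation concrete, here is a minimal standalone sketch of the compression idea: each helper expands to a designated-initializer fragment, so one algorithm becomes a few lines instead of a twenty-line literal. The CAP_SET()/CAP_RNG()/CAP_RNG_ZERO() helpers used by the per-generation tables are defined in qat_crypto_pmd_gens.h, which is not quoted in this excerpt; the stand-in definitions below are an assumption that mirrors their apparent expansion, and struct auth_cap is invented for the example.

/* Standalone sketch of the capability macro pattern (assumed expansions). */
#include <stdio.h>

struct param_range { int min, max, increment; };

struct auth_cap { /* invented stand-in for rte_cryptodev_capabilities */
	const char *algo;
	int block_size;
	struct param_range key_size;
	struct param_range digest_size;
};

/* Assumed shape: name a field, then supply its value or (min, max, step). */
#define CAP_SET(field, v) .field = (v)
#define CAP_RNG(field, l, r, i) .field = { (l), (r), (i) }
#define CAP_RNG_ZERO(field) .field = { 0, 0, 0 }

/* One entry per algorithm, in the style of the gen2 table added below. */
#define SYM_AUTH_CAP(name, b, k, d) { .algo = #name, b, k, d }

static const struct auth_cap caps[] = {
	SYM_AUTH_CAP(SHA1_HMAC, CAP_SET(block_size, 64),
		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1)),
	SYM_AUTH_CAP(SHA256, CAP_SET(block_size, 64),
		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1)),
};

int main(void)
{
	size_t i;

	for (i = 0; i < sizeof(caps) / sizeof(caps[0]); i++)
		printf("%s: block %d, digest %d..%d step %d\n",
			caps[i].algo, caps[i].block_size,
			caps[i].digest_size.min, caps[i].digest_size.max,
			caps[i].digest_size.increment);
	return 0;
}

Each per-generation table, such as qat_sym_crypto_caps_gen2[] below, ends with RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() and is handed back through the get_capabilities() hook, e.g. qat_sym_crypto_cap_get_gen2().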
Signed-off-by: Arek Kusztal Signed-off-by: Fan Zhang Signed-off-by: Kai Ji Acked-by: Ciara Power --- drivers/common/qat/meson.build | 7 +- drivers/crypto/qat/dev/qat_asym_pmd_gen1.c | 76 +++++ drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++ drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++ drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 124 ++++++++ drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 36 +++ drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 283 +++++++++++++++++++ drivers/crypto/qat/qat_crypto.h | 3 - 8 files changed, 913 insertions(+), 4 deletions(-) create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build index 29fd0168ea..ce9959d103 100644 --- a/drivers/common/qat/meson.build +++ b/drivers/common/qat/meson.build @@ -71,7 +71,12 @@ endif if qat_crypto foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c', - 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c'] + 'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c', + 'dev/qat_sym_pmd_gen1.c', + 'dev/qat_asym_pmd_gen1.c', + 'dev/qat_crypto_pmd_gen2.c', + 'dev/qat_crypto_pmd_gen3.c', + 'dev/qat_crypto_pmd_gen4.c'] sources += files(join_paths(qat_crypto_relpath, f)) endforeach deps += ['security'] diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c new file mode 100644 index 0000000000..9ed1f21d9d --- /dev/null +++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: 
BSD-3-Clause + * Copyright(c) 2017-2021 Intel Corporation + */ + +#include +#include +#include "qat_asym.h" +#include "qat_crypto.h" +#include "qat_crypto_pmd_gens.h" +#include "qat_pke_functionality_arrays.h" + +struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = { + /* Device related operations */ + .dev_configure = qat_cryptodev_config, + .dev_start = qat_cryptodev_start, + .dev_stop = qat_cryptodev_stop, + .dev_close = qat_cryptodev_close, + .dev_infos_get = qat_cryptodev_info_get, + + .stats_get = qat_cryptodev_stats_get, + .stats_reset = qat_cryptodev_stats_reset, + .queue_pair_setup = qat_cryptodev_qp_setup, + .queue_pair_release = qat_cryptodev_qp_release, + + /* Crypto related operations */ + .asym_session_get_size = qat_asym_session_get_private_size, + .asym_session_configure = qat_asym_session_configure, + .asym_session_clear = qat_asym_session_clear +}; + +static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = { + QAT_ASYM_CAP(MODEX, + 0, 1, 512, 1), + QAT_ASYM_CAP(MODINV, + 0, 1, 512, 1), + QAT_ASYM_CAP(RSA, + ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | + (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | + (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | + (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), + 64, 512, 64), + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + + +struct qat_capabilities_info +qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused) +{ + struct qat_capabilities_info capa_info; + capa_info.data = qat_asym_crypto_caps_gen1; + capa_info.size = sizeof(qat_asym_crypto_caps_gen1); + return capa_info; +} + +uint64_t +qat_asym_crypto_feature_flags_get_gen1( + struct qat_pci_device *qat_dev __rte_unused) +{ + uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_HW_ACCELERATED | + RTE_CRYPTODEV_FF_ASYM_SESSIONLESS | + RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP | + RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT; + + return feature_flags; +} + +RTE_INIT(qat_asym_crypto_gen1_init) +{ + qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops = + &qat_asym_crypto_ops_gen1; + qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities = + qat_asym_crypto_cap_get_gen1; + qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags = + qat_asym_crypto_feature_flags_get_gen1; +} diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c new file mode 100644 index 0000000000..b4ec440e05 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -0,0 +1,224 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2017-2021 Intel Corporation + */ + +#include +#include +#include "qat_sym_session.h" +#include "qat_sym.h" +#include "qat_asym.h" +#include "qat_crypto.h" +#include "qat_crypto_pmd_gens.h" + +#define MIXED_CRYPTO_MIN_FW_VER 0x04090000 + +static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = { + QAT_SYM_PLAIN_AUTH_CAP(SHA1, + CAP_SET(block_size, 64), + CAP_RNG(digest_size, 1, 20, 1)), + QAT_SYM_AEAD_CAP(AES_GCM, + CAP_SET(block_size, 16), + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), + QAT_SYM_AEAD_CAP(AES_CCM, + CAP_SET(block_size, 16), + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), + CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), + QAT_SYM_AUTH_CAP(AES_GMAC, + CAP_SET(block_size, 16), + CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), + CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), + QAT_SYM_AUTH_CAP(AES_CMAC, + CAP_SET(block_size, 16), + CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), + 
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..b4ec440e05
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		QAT_LOG(DEBUG, "QAT qp setup failed");
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+
+	/* Device related operations */
+	.dev_configure = qat_cryptodev_config,
+	.dev_start = qat_cryptodev_start,
+	.dev_stop = qat_cryptodev_stop,
+	.dev_close = qat_cryptodev_close,
+	.dev_infos_get = qat_cryptodev_info_get,
+
+	.stats_get = qat_cryptodev_stats_get,
+	.stats_reset = qat_cryptodev_stats_reset,
+	.queue_pair_setup = qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release = qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size = qat_sym_session_get_private_size,
+	.sym_session_configure = qat_sym_session_configure,
+	.sym_session_clear = qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
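qat_sym_crypto_qp_setup_gen2() above wraps the generic queue pair setup so
that, once the queue pair exists, it can query the running firmware
revision and record whether the mixed cipher-plus-hash request format
(QAT_SYM_CAP_MIXED_CRYPTO, checked elsewhere in this series) is usable.
qat_cq_get_fw_version() packs the revision as major.minor.patch in the top
three bytes, with 0 meaning the version could not be read, so an ordinary
integer comparison against MIXED_CRYPTO_MIN_FW_VER suffices. An
illustrative encoding macro, not part of the patch, makes the constant
readable:

	/* Illustrative only: 0x04090000 corresponds to firmware 4.9.0. */
	#define QAT_FW_VER(maj, min, rev) \
		(((uint32_t)(maj) << 24) | ((uint32_t)(min) << 16) | \
		((uint32_t)(rev) << 8))

	/* QAT_FW_VER(4, 9, 0) == 0x04090000 == MIXED_CRYPTO_MIN_FW_VER */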
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..d3336cf4a1
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
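GEN3 reuses qat_sym_crypto_ops_gen1 wholesale and differs only in its
capability table, whose notable addition over GEN2 is the
CHACHA20_POLY1305 AEAD entry; the asym slots are deliberately left NULL so
the common code can see that this generation registers no asymmetric
service. An application can confirm the new AEAD capability through the
standard cryptodev API; the fragment below uses ordinary DPDK calls and is
not part of this patch:

	struct rte_cryptodev_sym_capability_idx idx = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.algo.aead = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
	};
	const struct rte_cryptodev_symmetric_capability *cap =
		rte_cryptodev_sym_capability_get(dev_id, &idx);

	/* key 32, digest 16, aad 0, iv 12: within the advertised ranges */
	if (cap == NULL || rte_cryptodev_sym_capability_check_aead(cap,
			32, 16, 0, 12) != 0)
		return -ENOTSUP;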
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..37a58c026f
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
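Relative to the earlier generations, the GEN4 table above is noticeably
shorter: the wireless entries (SNOW3G, KASUMI, ZUC) and the DES/3DES
entries are absent, and the array is ordered differently, but the
termination convention is the same. Every table ends with
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST(), which expands to an entry whose
op field is RTE_CRYPTO_OP_TYPE_UNDEFINED, so a consumer can walk the array
without a separate length. A sketch, illustrative and not part of the
patch:

	/* Count entries in a sentinel-terminated capability array. */
	static unsigned int
	qat_caps_count(const struct rte_cryptodev_capabilities *caps)
	{
		unsigned int n = 0;

		while (caps[n].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
			n++;
		return n;
	}

Note that qat_capabilities_info.size carries sizeof(array) in bytes,
sentinel included, rather than an element count.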
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* -----------------GENx control path APIs ---------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
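The header above is the whole contract between the common crypto code and a
generation implementation: the exported gen1 building blocks plus the
control-path prototypes a generation may reuse. Supporting a hypothetical
future generation would then come down to one new file that fills its table
slot. Everything below is invented purely for illustration; QAT_GEN5 and
the *_gen5 symbols do not exist in this series:

	RTE_INIT(qat_sym_crypto_gen5_init)
	{
		/* Reuse the generic gen1 ops, swap in gen5-specific caps. */
		qat_sym_gen_dev_ops[QAT_GEN5].cryptodev_ops =
				&qat_sym_crypto_ops_gen1;
		qat_sym_gen_dev_ops[QAT_GEN5].get_capabilities =
				qat_sym_crypto_cap_get_gen5;
		qat_sym_gen_dev_ops[QAT_GEN5].get_feature_flags =
				qat_sym_crypto_feature_flags_get_gen1;
	}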
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..e156f194e2
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_sym_session.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure = qat_cryptodev_config,
+	.dev_start = qat_cryptodev_start,
+	.dev_stop = qat_cryptodev_stop,
+	.dev_close = qat_cryptodev_close,
+	.dev_infos_get = qat_cryptodev_info_get,
+
+	.stats_get = qat_cryptodev_stats_get,
+	.stats_reset = qat_cryptodev_stats_reset,
+	.queue_pair_setup = qat_cryptodev_qp_setup,
+	.queue_pair_release = qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size = qat_sym_session_get_private_size,
+	.sym_session_configure = qat_sym_session_configure,
+	.sym_session_clear = qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+		struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES \
+	{	/* AES DOCSIS BPI */ \
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, \
+		{.sym = { \
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, \
+			{.cipher = { \
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16, \
+				.key_size = { \
+					.min = 16, \
+					.max = 32, \
+					.increment = 16 \
+				}, \
+				.iv_size = { \
+					.min = 16, \
+					.max = 16, \
+					.increment = 0 \
+				} \
+			}, } \
+		}, } \
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym) \
+	[0] = {	/* DOCSIS Uplink */ \
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \
+		.docsis = { \
+			.direction = RTE_SECURITY_DOCSIS_UPLINK \
+		}, \
+		.crypto_capabilities = (sym) \
+	}, \
+	[1] = {	/* DOCSIS Downlink */ \
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL, \
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS, \
+		.docsis = { \
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK \
+		}, \
+		.crypto_capabilities = (sym) \
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+	.session_create = qat_security_session_create,
+	.session_update = NULL,
+	.session_stats_get = NULL,
+	.session_destroy = qat_security_session_destroy,
+	.set_pkt_metadata = NULL,
+	.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
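qat_sym_create_security_gen1() above only allocates and populates the
rte_security_ctx; attaching it to the cryptodev is the caller's
responsibility. A sketch of the consuming side, assuming a gen_dev_ops
pointer resolved from qat_sym_gen_dev_ops as elsewhere in this series (the
fragment is illustrative; the real hook-up lives in the common PMD create
path):

	#ifdef RTE_LIB_SECURITY
		if (gen_dev_ops->create_security_ctx != NULL) {
			cryptodev->security_ctx =
				gen_dev_ops->create_security_ctx((void *)cryptodev);
			if (cryptodev->security_ctx == NULL)
				return -ENOMEM; /* rte_malloc failed in the ctor */
			cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
		}
	#endif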
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
 #define _QAT_CRYPTO_H_
 
 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif
 
 #include "qat_device.h"