From patchwork Wed Jan 20 11:29:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86959 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F3BABA0A05; Wed, 20 Jan 2021 12:29:57 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DEC99140D01; Wed, 20 Jan 2021 12:29:57 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id B72D7140D01 for ; Wed, 20 Jan 2021 12:29:56 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:29:51 +0200 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfX8001381; Wed, 20 Jan 2021 13:29:51 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:25 +0000 Message-Id: <1611142175-409485-2-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 01/11] common/mlx5: add DevX attributes for compress X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the DevX attributes for compress related engines: - compress - decompress - dma Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/common/mlx5/mlx5_devx_cmds.c | 10 ++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 7 +++++++ drivers/common/mlx5/mlx5_prm.h | 18 ++++++++++++++++-- 3 files changed, 33 insertions(+), 2 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index d5859c2..33acd73 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -732,6 +732,16 @@ struct mlx5_devx_obj * attr->log_max_pd = MLX5_GET(cmd_hca_cap, hcattr, log_max_pd); attr->log_max_srq = MLX5_GET(cmd_hca_cap, hcattr, log_max_srq); attr->log_max_srq_sz = MLX5_GET(cmd_hca_cap, hcattr, log_max_srq_sz); + attr->mmo_dma_en = MLX5_GET(cmd_hca_cap, hcattr, dma_mmo); + attr->mmo_compress_en = MLX5_GET(cmd_hca_cap, hcattr, compress); + attr->mmo_decompress_en = MLX5_GET(cmd_hca_cap, hcattr, decompress); + attr->compress_min_block_size = MLX5_GET(cmd_hca_cap, hcattr, + compress_min_block_size); + attr->log_max_mmo_dma = MLX5_GET(cmd_hca_cap, hcattr, log_dma_mmo_size); + attr->log_max_mmo_compress = MLX5_GET(cmd_hca_cap, hcattr, + log_compress_mmo_size); + attr->log_max_mmo_decompress = MLX5_GET(cmd_hca_cap, hcattr, + log_decompress_mmo_size); if (attr->qos.sup) { MLX5_SET(query_hca_cap_in, in, op_mod, MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP | diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index bf83a90..696a981 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -130,6 +130,13 @@ struct mlx5_hca_attr { uint32_t log_max_srq; uint32_t 
log_max_srq_sz; uint32_t rss_ind_tbl_cap; + uint32_t mmo_dma_en:1; + uint32_t mmo_compress_en:1; + uint32_t mmo_decompress_en:1; + uint32_t compress_min_block_size:4; + uint32_t log_max_mmo_dma:5; + uint32_t log_max_mmo_compress:5; + uint32_t log_max_mmo_decompress:5; }; struct mlx5_devx_wq_attr { diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index c9eba22..72c843f 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1127,7 +1127,15 @@ enum { struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_0[0x30]; u8 vhca_id[0x10]; - u8 reserved_at_40[0x40]; + u8 reserved_at_40[0x20]; + u8 reserved_at_60[0x3]; + u8 log_regexp_scatter_gather_size[0x5]; + u8 reserved_at_68[0x3]; + u8 log_dma_mmo_size[5]; + u8 reserved_at_70[0x3]; + u8 log_compress_mmo_size[5]; + u8 reserved_at_78[0x3]; + u8 log_decompress_mmo_size[5]; u8 log_max_srq_sz[0x8]; u8 log_max_qp_sz[0x8]; u8 reserved_at_90[0x9]; @@ -1175,7 +1183,13 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 log_max_ra_res_dc[0x6]; u8 reserved_at_140[0xa]; u8 log_max_ra_req_qp[0x6]; - u8 reserved_at_150[0xa]; + u8 rtr2rts_qp_counters_set_id[1]; + u8 rts2rts_udp_sport[1]; + u8 rts2rts_lag_tx_port_affinity[1]; + u8 dma_mmo[1]; + u8 compress_min_block_size[4]; + u8 compress[1]; + u8 decompress[1]; u8 log_max_ra_res_qp[0x6]; u8 end_pad[0x1]; u8 cc_query_allowed[0x1]; From patchwork Wed Jan 20 11:29:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86960 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3A043A0A05; Wed, 20 Jan 2021 12:30:08 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 258A8140CFB; Wed, 20 Jan 2021 12:30:08 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id CC3AB140CD9 for ; Wed, 20 Jan 2021 12:30:06 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:30:02 +0200 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfX9001381; Wed, 20 Jan 2021 13:30:02 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:26 +0000 Message-Id: <1611142175-409485-3-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 02/11] drivers: introduce mlx5 compress PMD X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add a new compress PMD for Mellanox devices. The MLX5 compress driver library provides support for Mellanox BlueField 2 families of 25/50/100/200 Gb/s adapters. GGAs (Generic Global Accelerators) are offload engines that can be used to do memory to memory tasks on data. 
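For orientation, each GGA task is posted to the HW as an MMO WQE, and the opcode modifier in the WQE selects which engine executes it; a minimal sketch, using only the opcode definitions this series adds to mlx5_prm.h (illustration, not part of this patch):

    /* Sketch: select the GGA engine through the MMO opcode modifier. */
    uint32_t opcode = MLX5_OPCODE_MMO;
    opcode += MLX5_OPC_MOD_MMO_COMP << WQE_CSEG_OPC_MOD_OFFSET; /* compress */
    /* MLX5_OPC_MOD_MMO_DECOMP and MLX5_OPC_MOD_MMO_DMA select the other engines. */
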
These engines are part of the ARM complex of the BlueField 2 chip, and as such they do not use NIC related resources (e.g. RX/TX bandwidth). They do share the same PCI and memory bandwidth. So, using the BlueField 2 device, the compress class operations can be run in parallel to the net, vdpa, and regex class operations. This driver is depending on rdma-core like the other mlx5 PMDs, also it is going to use mlx5 DevX to create HW objects directly by the FW. Add the probing functions, PCI bus connectivity, HW capabilities checks and some basic objects preparations. Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- MAINTAINERS | 4 + drivers/common/mlx5/mlx5_common.h | 1 + drivers/common/mlx5/mlx5_common_pci.c | 7 + drivers/common/mlx5/mlx5_common_pci.h | 36 ++-- drivers/compress/meson.build | 2 +- drivers/compress/mlx5/meson.build | 26 +++ drivers/compress/mlx5/mlx5_compress.c | 298 ++++++++++++++++++++++++++++ drivers/compress/mlx5/mlx5_compress_utils.h | 20 ++ drivers/compress/mlx5/version.map | 3 + 9 files changed, 378 insertions(+), 19 deletions(-) create mode 100644 drivers/compress/mlx5/meson.build create mode 100644 drivers/compress/mlx5/mlx5_compress.c create mode 100644 drivers/compress/mlx5/mlx5_compress_utils.h create mode 100644 drivers/compress/mlx5/version.map diff --git a/MAINTAINERS b/MAINTAINERS index aa973a3..4b1bdf3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1132,6 +1132,10 @@ F: drivers/compress/zlib/ F: doc/guides/compressdevs/zlib.rst F: doc/guides/compressdevs/features/zlib.ini +Mellanox mlx5 +M: Matan Azrad +F: drivers/compress/mlx5/ + RegEx Drivers ------------- diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index e35188d..3855582 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -217,6 +217,7 @@ enum mlx5_class { MLX5_CLASS_NET = RTE_BIT64(0), MLX5_CLASS_VDPA = RTE_BIT64(1), MLX5_CLASS_REGEX = RTE_BIT64(2), + MLX5_CLASS_COMPRESS = RTE_BIT64(3), }; #define MLX5_DBR_SIZE RTE_CACHE_LINE_SIZE diff --git a/drivers/common/mlx5/mlx5_common_pci.c b/drivers/common/mlx5/mlx5_common_pci.c index 5208972..2b65768 100644 --- a/drivers/common/mlx5/mlx5_common_pci.c +++ b/drivers/common/mlx5/mlx5_common_pci.c @@ -28,14 +28,21 @@ static TAILQ_HEAD(mlx5_pci_devices_head, mlx5_pci_device) devices_list = { .name = "vdpa", .driver_class = MLX5_CLASS_VDPA }, { .name = "net", .driver_class = MLX5_CLASS_NET }, { .name = "regex", .driver_class = MLX5_CLASS_REGEX }, + { .name = "compress", .driver_class = MLX5_CLASS_COMPRESS }, }; static const unsigned int mlx5_class_combinations[] = { MLX5_CLASS_NET, MLX5_CLASS_VDPA, MLX5_CLASS_REGEX, + MLX5_CLASS_COMPRESS, MLX5_CLASS_NET | MLX5_CLASS_REGEX, MLX5_CLASS_VDPA | MLX5_CLASS_REGEX, + MLX5_CLASS_NET | MLX5_CLASS_COMPRESS, + MLX5_CLASS_VDPA | MLX5_CLASS_COMPRESS, + MLX5_CLASS_REGEX | MLX5_CLASS_COMPRESS, + MLX5_CLASS_NET | MLX5_CLASS_REGEX | MLX5_CLASS_COMPRESS, + MLX5_CLASS_VDPA | MLX5_CLASS_REGEX | MLX5_CLASS_COMPRESS, /* New class combination should be added here. */ }; diff --git a/drivers/common/mlx5/mlx5_common_pci.h b/drivers/common/mlx5/mlx5_common_pci.h index 41b73e1..de89bb9 100644 --- a/drivers/common/mlx5/mlx5_common_pci.h +++ b/drivers/common/mlx5/mlx5_common_pci.h @@ -9,26 +9,26 @@ * @file * * RTE Mellanox PCI Driver Interface - * Mellanox ConnectX PCI device supports multiple class (net/vdpa/regex) - * devices. 
This layer enables creating such multiple class of devices on a - * single PCI device by allowing to bind multiple class specific device + * Mellanox ConnectX PCI device supports multiple class: net,vdpa,regex and + * compress devices. This layer enables creating such multiple class of devices + * on a single PCI device by allowing to bind multiple class specific device * driver to attach to mlx5_pci driver. * - * ----------- ------------ ------------- - * | mlx5 | | mlx5 | | mlx5 | - * | net pmd | | vdpa pmd | | regex pmd | - * ----------- ------------ ------------- - * \ | / - * \ | / - * \ -------------- / - * \______| mlx5 |_____ / - * | pci common | - * -------------- - * | - * ----------- - * | mlx5 | - * | pci dev | - * ----------- + * ----------- ------------ ------------- ---------------- + * | mlx5 | | mlx5 | | mlx5 | | mlx5 | + * | net pmd | | vdpa pmd | | regex pmd | | compress pmd | + * ----------- ------------ ------------- ---------------- + * \ \ / / + * \ \ / / + * \ \_--------------_/ / + * \_______________| mlx5 |_______________/ + * | pci common | + * -------------- + * | + * ----------- + * | mlx5 | + * | pci dev | + * ----------- * * - mlx5 pci driver binds to mlx5 PCI devices defined by PCI * ID table of all related mlx5 PCI devices. diff --git a/drivers/compress/meson.build b/drivers/compress/meson.build index cd96def..49fa02d 100644 --- a/drivers/compress/meson.build +++ b/drivers/compress/meson.build @@ -5,6 +5,6 @@ if is_windows subdir_done() endif -drivers = ['isal', 'octeontx', 'zlib'] +drivers = ['isal', 'mlx5', 'octeontx', 'zlib'] std_deps = ['compressdev'] # compressdev pulls in all other needed deps diff --git a/drivers/compress/mlx5/meson.build b/drivers/compress/mlx5/meson.build new file mode 100644 index 0000000..2a6dc3f --- /dev/null +++ b/drivers/compress/mlx5/meson.build @@ -0,0 +1,26 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2021 Mellanox Technologies, Ltd + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +fmt_name = 'mlx5_compress' +deps += ['common_mlx5', 'eal', 'compressdev'] +sources = files( + 'mlx5_compress.c', +) +cflags_options = [ + '-std=c11', + '-Wno-strict-prototypes', + '-D_BSD_SOURCE', + '-D_DEFAULT_SOURCE', + '-D_XOPEN_SOURCE=600' +] +foreach option:cflags_options + if cc.has_argument(option) + cflags += option + endif +endforeach diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c new file mode 100644 index 0000000..604044c --- /dev/null +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -0,0 +1,298 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 Mellanox Technologies, Ltd + */ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "mlx5_compress_utils.h" + +#define MLX5_COMPRESS_DRIVER_NAME mlx5_compress +#define MLX5_COMPRESS_LOG_NAME pmd.compress.mlx5 + +struct mlx5_compress_priv { + TAILQ_ENTRY(mlx5_compress_priv) next; + struct ibv_context *ctx; /* Device context. */ + struct rte_pci_device *pci_dev; + struct rte_compressdev *cdev; + void *uar; + uint32_t pdn; /* Protection Domain number. */ + uint8_t min_block_size; + /* Minimum huffman block size supported by the device. 
*/ + struct ibv_pd *pd; +}; + +TAILQ_HEAD(mlx5_compress_privs, mlx5_compress_priv) mlx5_compress_priv_list = + TAILQ_HEAD_INITIALIZER(mlx5_compress_priv_list); +static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; + +int mlx5_compress_logtype; + +static struct rte_compressdev_ops mlx5_compress_ops = { + .dev_configure = NULL, + .dev_start = NULL, + .dev_stop = NULL, + .dev_close = NULL, + .dev_infos_get = NULL, + .stats_get = NULL, + .stats_reset = NULL, + .queue_pair_setup = NULL, + .queue_pair_release = NULL, + .private_xform_create = NULL, + .private_xform_free = NULL, + .stream_create = NULL, + .stream_free = NULL, +}; + +static struct ibv_device * +mlx5_compress_get_ib_device_match(struct rte_pci_addr *addr) +{ + int n; + struct ibv_device **ibv_list = mlx5_glue->get_device_list(&n); + struct ibv_device *ibv_match = NULL; + + if (ibv_list == NULL) { + rte_errno = ENOSYS; + return NULL; + } + while (n-- > 0) { + struct rte_pci_addr paddr; + + DRV_LOG(DEBUG, "Checking device \"%s\"..", ibv_list[n]->name); + if (mlx5_dev_to_pci_addr(ibv_list[n]->ibdev_path, &paddr) != 0) + continue; + if (rte_pci_addr_cmp(addr, &paddr) != 0) + continue; + ibv_match = ibv_list[n]; + break; + } + if (ibv_match == NULL) + rte_errno = ENOENT; + mlx5_glue->free_device_list(ibv_list); + return ibv_match; +} + +static void +mlx5_compress_hw_global_release(struct mlx5_compress_priv *priv) +{ + if (priv->pd != NULL) { + claim_zero(mlx5_glue->dealloc_pd(priv->pd)); + priv->pd = NULL; + } + if (priv->uar != NULL) { + mlx5_glue->devx_free_uar(priv->uar); + priv->uar = NULL; + } +} + +static int +mlx5_compress_pd_create(struct mlx5_compress_priv *priv) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + struct mlx5dv_obj obj; + struct mlx5dv_pd pd_info; + int ret; + + priv->pd = mlx5_glue->alloc_pd(priv->ctx); + if (priv->pd == NULL) { + DRV_LOG(ERR, "Failed to allocate PD."); + return errno ? -errno : -ENOMEM; + } + obj.pd.in = priv->pd; + obj.pd.out = &pd_info; + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret != 0) { + DRV_LOG(ERR, "Fail to get PD object info."); + mlx5_glue->dealloc_pd(priv->pd); + priv->pd = NULL; + return -errno; + } + priv->pdn = pd_info.pdn; + return 0; +#else + (void)priv; + DRV_LOG(ERR, "Cannot get pdn - no DV support."); + return -ENOTSUP; +#endif /* HAVE_IBV_FLOW_DV_SUPPORT */ +} + +static int +mlx5_compress_hw_global_prepare(struct mlx5_compress_priv *priv) +{ + if (mlx5_compress_pd_create(priv) != 0) + return -1; + priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1); + if (priv->uar == NULL || mlx5_os_get_devx_uar_reg_addr(priv->uar) == + NULL) { + rte_errno = errno; + claim_zero(mlx5_glue->dealloc_pd(priv->pd)); + DRV_LOG(ERR, "Failed to allocate UAR."); + return -1; + } + return 0; +} + +/** + * DPDK callback to register a PCI device. + * + * This function spawns compress device out of a given PCI device. + * + * @param[in] pci_drv + * PCI driver structure (mlx5_compress_driver). + * @param[in] pci_dev + * PCI device information. + * + * @return + * 0 on success, 1 to skip this driver, a negative errno value otherwise + * and rte_errno is set. 
+ */ +static int +mlx5_compress_pci_probe(struct rte_pci_driver *pci_drv, + struct rte_pci_device *pci_dev) +{ + struct ibv_device *ibv; + struct rte_compressdev *cdev; + struct ibv_context *ctx; + struct mlx5_compress_priv *priv; + struct mlx5_hca_attr att = { 0 }; + struct rte_compressdev_pmd_init_params init_params = { + .name = "", + .socket_id = pci_dev->device.numa_node, + }; + + RTE_SET_USED(pci_drv); + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + DRV_LOG(ERR, "Non-primary process type is not supported."); + rte_errno = ENOTSUP; + return -rte_errno; + } + ibv = mlx5_compress_get_ib_device_match(&pci_dev->addr); + if (ibv == NULL) { + DRV_LOG(ERR, "No matching IB device for PCI slot " + PCI_PRI_FMT ".", pci_dev->addr.domain, + pci_dev->addr.bus, pci_dev->addr.devid, + pci_dev->addr.function); + return -rte_errno; + } + DRV_LOG(INFO, "PCI information matches for device \"%s\".", ibv->name); + ctx = mlx5_glue->dv_open_device(ibv); + if (ctx == NULL) { + DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name); + rte_errno = ENODEV; + return -rte_errno; + } + if (mlx5_devx_cmd_query_hca_attr(ctx, &att) != 0 || + att.mmo_compress_en == 0 || att.mmo_decompress_en == 0 || + att.mmo_dma_en == 0) { + DRV_LOG(ERR, "Not enough capabilities to support compress " + "operations, maybe old FW/OFED version?"); + claim_zero(mlx5_glue->close_device(ctx)); + rte_errno = ENOTSUP; + return -ENOTSUP; + } + cdev = rte_compressdev_pmd_create(ibv->name, &pci_dev->device, + sizeof(*priv), &init_params); + if (cdev == NULL) { + DRV_LOG(ERR, "Failed to create device \"%s\".", ibv->name); + claim_zero(mlx5_glue->close_device(ctx)); + return -ENODEV; + } + DRV_LOG(INFO, + "Compress device %s was created successfully.", ibv->name); + cdev->dev_ops = &mlx5_compress_ops; + cdev->dequeue_burst = NULL; + cdev->enqueue_burst = NULL; + cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; + priv = cdev->data->dev_private; + priv->ctx = ctx; + priv->pci_dev = pci_dev; + priv->cdev = cdev; + priv->min_block_size = att.compress_min_block_size; + if (mlx5_compress_hw_global_prepare(priv) != 0) { + rte_compressdev_pmd_destroy(priv->cdev); + claim_zero(mlx5_glue->close_device(priv->ctx)); + return -1; + } + pthread_mutex_lock(&priv_list_lock); + TAILQ_INSERT_TAIL(&mlx5_compress_priv_list, priv, next); + pthread_mutex_unlock(&priv_list_lock); + return 0; +} + +/** + * DPDK callback to remove a PCI device. + * + * This function removes all compress devices belong to a given PCI device. + * + * @param[in] pci_dev + * Pointer to the PCI device. + * + * @return + * 0 on success, the function cannot fail. 
+ */ +static int +mlx5_compress_pci_remove(struct rte_pci_device *pdev) +{ + struct mlx5_compress_priv *priv = NULL; + + pthread_mutex_lock(&priv_list_lock); + TAILQ_FOREACH(priv, &mlx5_compress_priv_list, next) + if (rte_pci_addr_cmp(&priv->pci_dev->addr, &pdev->addr) != 0) + break; + if (priv) + TAILQ_REMOVE(&mlx5_compress_priv_list, priv, next); + pthread_mutex_unlock(&priv_list_lock); + if (priv) { + mlx5_compress_hw_global_release(priv); + rte_compressdev_pmd_destroy(priv->cdev); + claim_zero(mlx5_glue->close_device(priv->ctx)); + } + return 0; +} + +static const struct rte_pci_id mlx5_compress_pci_id_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX, + PCI_DEVICE_ID_MELLANOX_CONNECTX6DXBF) + }, + { + .vendor_id = 0 + } +}; + +static struct mlx5_pci_driver mlx5_compress_driver = { + .driver_class = MLX5_CLASS_COMPRESS, + .pci_driver = { + .driver = { + .name = RTE_STR(MLX5_COMPRESS_DRIVER_NAME), + }, + .id_table = mlx5_compress_pci_id_map, + .probe = mlx5_compress_pci_probe, + .remove = mlx5_compress_pci_remove, + .drv_flags = 0, + }, +}; + +RTE_INIT(rte_mlx5_compress_init) +{ + mlx5_common_init(); + if (mlx5_glue != NULL) + mlx5_pci_driver_register(&mlx5_compress_driver); +} + +RTE_LOG_REGISTER(mlx5_compress_logtype, MLX5_COMPRESS_LOG_NAME, NOTICE) +RTE_PMD_EXPORT_NAME(MLX5_COMPRESS_DRIVER_NAME, __COUNTER__); +RTE_PMD_REGISTER_PCI_TABLE(MLX5_COMPRESS_DRIVER_NAME, mlx5_compress_pci_id_map); +RTE_PMD_REGISTER_KMOD_DEP(MLX5_COMPRESS_DRIVER_NAME, "* ib_uverbs & mlx5_core & mlx5_ib"); diff --git a/drivers/compress/mlx5/mlx5_compress_utils.h b/drivers/compress/mlx5/mlx5_compress_utils.h new file mode 100644 index 0000000..f93244f --- /dev/null +++ b/drivers/compress/mlx5/mlx5_compress_utils.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 Mellanox Technologies, Ltd + */ + +#ifndef RTE_PMD_MLX5_COMPRESS_UTILS_H_ +#define RTE_PMD_MLX5_COMPRESS_UTILS_H_ + +#include + + +extern int mlx5_compress_logtype; + +#define MLX5_COMPRESS_LOG_PREFIX "mlx5_compress" +/* Generic printf()-like logging macro with automatic line feed. */ +#define DRV_LOG(level, ...) \ + PMD_DRV_LOG_(level, mlx5_compress_logtype, MLX5_COMPRESS_LOG_PREFIX, \ + __VA_ARGS__ PMD_DRV_LOG_STRIP PMD_DRV_LOG_OPAREN, \ + PMD_DRV_LOG_CPAREN) + +#endif /* RTE_PMD_MLX5_COMPRESS_UTILS_H_ */ diff --git a/drivers/compress/mlx5/version.map b/drivers/compress/mlx5/version.map new file mode 100644 index 0000000..4a76d1d --- /dev/null +++ b/drivers/compress/mlx5/version.map @@ -0,0 +1,3 @@ +DPDK_21 { + local: *; +}; From patchwork Wed Jan 20 11:29:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86961 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8A880A0A05; Wed, 20 Jan 2021 12:30:23 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 787DD140D03; Wed, 20 Jan 2021 12:30:23 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 1FA2C140D07 for ; Wed, 20 Jan 2021 12:30:22 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:30:18 +0200 Received: from pegasus25.mtr.labs.mlnx. 
(pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXA001381; Wed, 20 Jan 2021 13:30:17 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:27 +0000 Message-Id: <1611142175-409485-4-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 03/11] compress/mlx5: support basic control operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add initial support for the next operations: - dev_configure - dev_close - dev_infos_get Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/compress/mlx5/mlx5_compress.c | 43 ++++++++++++++++++++++++++++++++--- 1 file changed, 40 insertions(+), 3 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 604044c..efb77d0 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -21,6 +21,7 @@ #define MLX5_COMPRESS_DRIVER_NAME mlx5_compress #define MLX5_COMPRESS_LOG_NAME pmd.compress.mlx5 +#define MLX5_COMPRESS_MAX_QPS 1024 struct mlx5_compress_priv { TAILQ_ENTRY(mlx5_compress_priv) next; @@ -32,6 +33,7 @@ struct mlx5_compress_priv { uint8_t min_block_size; /* Minimum huffman block size supported by the device. */ struct ibv_pd *pd; + struct rte_compressdev_config dev_config; }; TAILQ_HEAD(mlx5_compress_privs, mlx5_compress_priv) mlx5_compress_priv_list = @@ -40,12 +42,47 @@ struct mlx5_compress_priv { int mlx5_compress_logtype; +const struct rte_compressdev_capabilities mlx5_caps[RTE_COMP_ALGO_LIST_END]; + + +static void +mlx5_compress_dev_info_get(struct rte_compressdev *dev, + struct rte_compressdev_info *info) +{ + RTE_SET_USED(dev); + if (info != NULL) { + info->max_nb_queue_pairs = MLX5_COMPRESS_MAX_QPS; + info->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; + info->capabilities = mlx5_caps; + } +} + +static int +mlx5_compress_dev_configure(struct rte_compressdev *dev, + struct rte_compressdev_config *config) +{ + struct mlx5_compress_priv *priv; + + if (dev == NULL || config == NULL) + return -EINVAL; + priv = dev->data->dev_private; + priv->dev_config = *config; + return 0; +} + +static int +mlx5_compress_dev_close(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); + return 0; +} + static struct rte_compressdev_ops mlx5_compress_ops = { - .dev_configure = NULL, + .dev_configure = mlx5_compress_dev_configure, .dev_start = NULL, .dev_stop = NULL, - .dev_close = NULL, - .dev_infos_get = NULL, + .dev_close = mlx5_compress_dev_close, + .dev_infos_get = mlx5_compress_dev_info_get, .stats_get = NULL, .stats_reset = NULL, .queue_pair_setup = NULL, From patchwork Wed Jan 20 11:29:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86962 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 84CDFA0A05; Wed, 20 Jan 2021 12:34:00 +0100 (CET) 
Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4EE92140CF1; Wed, 20 Jan 2021 12:34:00 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 4D2BA140CF0 for ; Wed, 20 Jan 2021 12:33:58 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:33:55 +0200 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXB001381; Wed, 20 Jan 2021 13:33:55 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:28 +0000 Message-Id: <1611142175-409485-5-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 04/11] common/mlx5: add compress primitives X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the GGA compress WQE related structures and definitions. Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/common/mlx5/mlx5_prm.h | 41 +++++++++++++++++++++++++++++++++++++++-- 1 file changed, 39 insertions(+), 2 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 72c843f..8809b92 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -412,10 +412,23 @@ struct mlx5_cqe_ts { uint8_t op_own; }; +/* GGA */ /* MMO metadata segment */ -#define MLX5_OPCODE_MMO 0x2f -#define MLX5_OPC_MOD_MMO_REGEX 0x4 +#define MLX5_OPCODE_MMO 0x2fu +#define MLX5_OPC_MOD_MMO_REGEX 0x4u +#define MLX5_OPC_MOD_MMO_COMP 0x2u +#define MLX5_OPC_MOD_MMO_DECOMP 0x3u +#define MLX5_OPC_MOD_MMO_DMA 0x1u + +#define WQE_GGA_COMP_WIN_SIZE_OFFSET 12u +#define WQE_GGA_COMP_BLOCK_SIZE_OFFSET 16u +#define WQE_GGA_COMP_DYNAMIC_SIZE_OFFSET 20u +#define MLX5_GGA_COMP_WIN_SIZE_UNITS 1024u +#define MLX5_GGA_COMP_WIN_SIZE_MAX (32u * MLX5_GGA_COMP_WIN_SIZE_UNITS) +#define MLX5_GGA_COMP_LOG_BLOCK_SIZE_MAX 15u +#define MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MAX 15u +#define MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MIN 0u struct mlx5_wqe_metadata_seg { uint32_t mmo_control_31_0; /* mmo_control_63_32 is in ctrl_seg.imm */ @@ -423,6 +436,30 @@ struct mlx5_wqe_metadata_seg { uint64_t addr; }; +struct mlx5_gga_wqe { + uint32_t opcode; + uint32_t sq_ds; + uint32_t flags; + uint32_t gga_ctrl1; /* ws 12-15, bs 16-19, dyns 20-23. 
*/ + uint32_t gga_ctrl2; + uint32_t opaque_lkey; + uint64_t opaque_vaddr; + struct mlx5_wqe_dseg gather; + struct mlx5_wqe_dseg scatter; +} __rte_packed; + +struct mlx5_gga_compress_opaque { + uint32_t syndrom; + uint32_t reserved0; + uint32_t scattered_length; + uint32_t gathered_length; + uint64_t scatter_crc; + uint64_t gather_crc; + uint32_t crc32; + uint32_t adler32; + uint8_t reserved1[216]; +} __rte_packed; + struct mlx5_ifc_regexp_mmo_control_bits { uint8_t reserved_at_31[0x2]; uint8_t le[0x1]; From patchwork Wed Jan 20 11:29:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86963 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 75AB3A0A05; Wed, 20 Jan 2021 12:34:14 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5E165140D01; Wed, 20 Jan 2021 12:34:14 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 928A4140D01 for ; Wed, 20 Jan 2021 12:34:12 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:34:08 +0200 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXC001381; Wed, 20 Jan 2021 13:34:07 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:29 +0000 Message-Id: <1611142175-409485-6-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 05/11] compress/mlx5: support queue pair operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for the next operations: - queue_pair_setup - queue_pair_release Create and initialize DevX SQ and CQ for each compress API queue-pair. 
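From the application side, such a queue pair is created through the generic compressdev API; a minimal usage sketch, assuming dev_id is an already probed mlx5 compress device and eliding error handling (note the PMD rounds max_inflight_ops up to a power of two for the SQ/CQ depth):

    #include <rte_compressdev.h>
    #include <rte_lcore.h>

    struct rte_compressdev_config cfg = {
        .socket_id = rte_socket_id(),
        .nb_queue_pairs = 1,
        .max_nb_priv_xforms = 1,
        .max_nb_streams = 0,
    };

    rte_compressdev_configure(dev_id, &cfg);
    /* Queue pair 0, up to 64 in-flight ops, backed by one DevX SQ/CQ pair. */
    rte_compressdev_queue_pair_setup(dev_id, 0, 64, rte_socket_id());
    rte_compressdev_start(dev_id);
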
Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/compress/mlx5/mlx5_compress.c | 148 +++++++++++++++++++++++++++++++++- 1 file changed, 146 insertions(+), 2 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index efb77d0..a2507c2 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -15,6 +15,8 @@ #include #include #include +#include +#include #include #include "mlx5_compress_utils.h" @@ -36,6 +38,20 @@ struct mlx5_compress_priv { struct rte_compressdev_config dev_config; }; +struct mlx5_compress_qp { + uint16_t qp_id; + uint16_t entries_n; + uint16_t pi; + uint16_t ci; + volatile uint64_t *uar_addr; + int socket_id; + struct mlx5_devx_cq cq; + struct mlx5_devx_sq sq; + struct mlx5_pmd_mr opaque_mr; + struct rte_comp_op **ops; + struct mlx5_compress_priv *priv; +}; + TAILQ_HEAD(mlx5_compress_privs, mlx5_compress_priv) mlx5_compress_priv_list = TAILQ_HEAD_INITIALIZER(mlx5_compress_priv_list); static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; @@ -77,6 +93,134 @@ struct mlx5_compress_priv { return 0; } +static int +mlx5_compress_qp_release(struct rte_compressdev *dev, uint16_t qp_id) +{ + struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id]; + + if (qp->sq.sq != NULL) + mlx5_devx_sq_destroy(&qp->sq); + if (qp->cq.cq != NULL) + mlx5_devx_cq_destroy(&qp->cq); + if (qp->opaque_mr.obj != NULL) { + void *opaq = qp->opaque_mr.addr; + + mlx5_common_verbs_dereg_mr(&qp->opaque_mr); + if (opaq != NULL) + rte_free(opaq); + } + rte_free(qp); + dev->data->queue_pairs[qp_id] = NULL; + return 0; +} + +static void +mlx5_compress_init_sq(struct mlx5_compress_qp *qp) +{ + volatile struct mlx5_gga_wqe *restrict wqe = + (volatile struct mlx5_gga_wqe *)qp->sq.wqes; + volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; + const uint32_t sq_ds = rte_cpu_to_be_32((qp->sq.sq->id << 8) | 4u); + const uint32_t flags = RTE_BE32(MLX5_COMP_ALWAYS << + MLX5_COMP_MODE_OFFSET); + const uint32_t opaq_lkey = rte_cpu_to_be_32(qp->opaque_mr.lkey); + int i; + + /* All the next fields state should stay constant. 
*/ + for (i = 0; i < qp->entries_n; ++i, ++wqe) { + wqe->sq_ds = sq_ds; + wqe->flags = flags; + wqe->opaque_lkey = opaq_lkey; + wqe->opaque_vaddr = rte_cpu_to_be_64 + ((uint64_t)(uintptr_t)&opaq[i]); + } +} + +static int +mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, + uint32_t max_inflight_ops, int socket_id) +{ + struct mlx5_compress_priv *priv = dev->data->dev_private; + struct mlx5_compress_qp *qp; + struct mlx5_devx_cq_attr cq_attr = { + .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar), + }; + struct mlx5_devx_create_sq_attr sq_attr = { + .user_index = qp_id, + .wq_attr = (struct mlx5_devx_wq_attr){ + .pd = priv->pdn, + .uar_page = mlx5_os_get_devx_uar_page_id(priv->uar), + }, + }; + struct mlx5_devx_modify_sq_attr modify_attr = { + .state = MLX5_SQC_STATE_RDY, + }; + uint32_t log_ops_n = rte_log2_u32(max_inflight_ops); + uint32_t alloc_size = sizeof(*qp); + void *opaq_buf; + int ret; + + alloc_size = RTE_ALIGN(alloc_size, RTE_CACHE_LINE_SIZE); + alloc_size += sizeof(struct rte_comp_op *) * (1u << log_ops_n); + qp = rte_zmalloc_socket(__func__, alloc_size, RTE_CACHE_LINE_SIZE, + socket_id); + if (qp == NULL) { + DRV_LOG(ERR, "Failed to allocate qp memory."); + rte_errno = ENOMEM; + return -rte_errno; + } + dev->data->queue_pairs[qp_id] = qp; + opaq_buf = rte_calloc(__func__, 1u << log_ops_n, + sizeof(struct mlx5_gga_compress_opaque), + sizeof(struct mlx5_gga_compress_opaque)); + if (opaq_buf == NULL) { + DRV_LOG(ERR, "Failed to allocate opaque memory."); + rte_errno = ENOMEM; + goto err; + } + qp->entries_n = 1 << log_ops_n; + qp->socket_id = socket_id; + qp->qp_id = qp_id; + qp->priv = priv; + qp->ops = (struct rte_comp_op **)RTE_ALIGN((uintptr_t)(qp + 1), + RTE_CACHE_LINE_SIZE); + qp->uar_addr = mlx5_os_get_devx_uar_reg_addr(priv->uar); + MLX5_ASSERT(qp->uar_addr); + if (mlx5_common_verbs_reg_mr(priv->pd, opaq_buf, qp->entries_n * + sizeof(struct mlx5_gga_compress_opaque), + &qp->opaque_mr) != 0) { + rte_free(opaq_buf); + DRV_LOG(ERR, "Failed to register opaque MR."); + rte_errno = ENOMEM; + goto err; + } + ret = mlx5_devx_cq_create(priv->ctx, &qp->cq, log_ops_n, &cq_attr, + socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create CQ."); + goto err; + } + sq_attr.cqn = qp->cq.cq->id; + ret = mlx5_devx_sq_create(priv->ctx, &qp->sq, log_ops_n, &sq_attr, + socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create SQ."); + goto err; + } + mlx5_compress_init_sq(qp); + ret = mlx5_devx_cmd_modify_sq(qp->sq.sq, &modify_attr); + if (ret != 0) { + DRV_LOG(ERR, "Can't change SQ state to ready."); + goto err; + } + DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u\n", + (uint32_t)qp_id, qp->sq.sq->id, qp->cq.cq->id, qp->entries_n); + return 0; +err: + mlx5_compress_qp_release(dev, qp_id); + return -1; +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, .dev_start = NULL, @@ -85,8 +229,8 @@ struct mlx5_compress_priv { .dev_infos_get = mlx5_compress_dev_info_get, .stats_get = NULL, .stats_reset = NULL, - .queue_pair_setup = NULL, - .queue_pair_release = NULL, + .queue_pair_setup = mlx5_compress_qp_setup, + .queue_pair_release = mlx5_compress_qp_release, .private_xform_create = NULL, .private_xform_free = NULL, .stream_create = NULL, From patchwork Wed Jan 20 11:29:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86964 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: 
patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 81985A0A05; Wed, 20 Jan 2021 12:34:22 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EFB09140D1B; Wed, 20 Jan 2021 12:34:18 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 9EE64140D1B for ; Wed, 20 Jan 2021 12:34:17 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:34:15 +0200 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXD001381; Wed, 20 Jan 2021 13:34:15 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:30 +0000 Message-Id: <1611142175-409485-7-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 06/11] compress/mlx5: add transformation operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for the next operations: - private_xform_create - private_xform_free The driver transformation structure includes preparations for the following GGA WQE fields used by the enqueue function: opcode, compress-specific fields (window size, block size and dynamic size), checksum type and compress type. Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/compress/mlx5/mlx5_compress.c | 121 +++++++++++++++++++++++++++++++++- 1 file changed, 119 insertions(+), 2 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index a2507c2..cc9be72 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -24,6 +25,15 @@ #define MLX5_COMPRESS_DRIVER_NAME mlx5_compress #define MLX5_COMPRESS_LOG_NAME pmd.compress.mlx5 #define MLX5_COMPRESS_MAX_QPS 1024 +#define MLX5_COMP_MAX_WIN_SIZE_CONF 6u + +struct mlx5_compress_xform { + LIST_ENTRY(mlx5_compress_xform) next; + enum rte_comp_xform_type type; + enum rte_comp_checksum_type csum_type; + uint32_t opcode; + uint32_t gga_ctrl1; /* BE. */ +}; struct mlx5_compress_priv { TAILQ_ENTRY(mlx5_compress_priv) next; @@ -36,6 +46,8 @@ struct mlx5_compress_priv { uint8_t min_block_size; /* Minimum huffman block size supported by the device. 
*/ struct ibv_pd *pd; struct rte_compressdev_config dev_config; + LIST_HEAD(xform_list, mlx5_compress_xform) xform_list; + rte_spinlock_t xform_sl; }; struct mlx5_compress_qp { @@ -221,6 +233,111 @@ struct mlx5_compress_qp { return -1; } +static int +mlx5_compress_xform_free(struct rte_compressdev *dev, void *xform) +{ + struct mlx5_compress_priv *priv = dev->data->dev_private; + + rte_spinlock_lock(&priv->xform_sl); + LIST_REMOVE((struct mlx5_compress_xform *)xform, next); + rte_spinlock_unlock(&priv->xform_sl); + rte_free(xform); + return 0; +} + +static int +mlx5_compress_xform_create(struct rte_compressdev *dev, + const struct rte_comp_xform *xform, + void **private_xform) +{ + struct mlx5_compress_priv *priv = dev->data->dev_private; + struct mlx5_compress_xform *xfrm; + uint32_t size; + + if (xform->type == RTE_COMP_COMPRESS && xform->compress.level == + RTE_COMP_LEVEL_NONE) { + DRV_LOG(ERR, "Non-compressed block is not supported."); + return -ENOTSUP; + } + if ((xform->type == RTE_COMP_COMPRESS && xform->compress.hash_algo != + RTE_COMP_HASH_ALGO_NONE) || (xform->type == RTE_COMP_DECOMPRESS && + xform->decompress.hash_algo != RTE_COMP_HASH_ALGO_NONE)) { + DRV_LOG(ERR, "SHA is not supported."); + return -ENOTSUP; + } + xfrm = rte_zmalloc_socket(__func__, sizeof(*xfrm), 0, + priv->dev_config.socket_id); + if (xfrm == NULL) + return -ENOMEM; + xfrm->opcode = MLX5_OPCODE_MMO; + xfrm->type = xform->type; + switch (xform->type) { + case RTE_COMP_COMPRESS: + switch (xform->compress.algo) { + case RTE_COMP_ALGO_NULL: + xfrm->opcode += MLX5_OPC_MOD_MMO_DMA << + WQE_CSEG_OPC_MOD_OFFSET; + break; + case RTE_COMP_ALGO_DEFLATE: + size = 1 << xform->compress.window_size; + size /= MLX5_GGA_COMP_WIN_SIZE_UNITS; + xfrm->gga_ctrl1 += RTE_MIN(rte_log2_u32(size), + MLX5_COMP_MAX_WIN_SIZE_CONF) << + WQE_GGA_COMP_WIN_SIZE_OFFSET; + if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT) + size = MLX5_GGA_COMP_LOG_BLOCK_SIZE_MAX; + else + size = priv->min_block_size - 1 + + xform->compress.level; + xfrm->gga_ctrl1 += RTE_MIN(size, + MLX5_GGA_COMP_LOG_BLOCK_SIZE_MAX) << + WQE_GGA_COMP_BLOCK_SIZE_OFFSET; + xfrm->opcode += MLX5_OPC_MOD_MMO_COMP << + WQE_CSEG_OPC_MOD_OFFSET; + size = xform->compress.deflate.huffman == + RTE_COMP_HUFFMAN_DYNAMIC ? 
+ MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MAX : + MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MIN; + xfrm->gga_ctrl1 += size << + WQE_GGA_COMP_DYNAMIC_SIZE_OFFSET; + break; + default: + goto err; + } + xfrm->csum_type = xform->compress.chksum; + break; + case RTE_COMP_DECOMPRESS: + switch (xform->decompress.algo) { + case RTE_COMP_ALGO_NULL: + xfrm->opcode += MLX5_OPC_MOD_MMO_DMA << + WQE_CSEG_OPC_MOD_OFFSET; + break; + case RTE_COMP_ALGO_DEFLATE: + xfrm->opcode += MLX5_OPC_MOD_MMO_DECOMP << + WQE_CSEG_OPC_MOD_OFFSET; + break; + default: + goto err; + } + xfrm->csum_type = xform->decompress.chksum; + break; + default: + DRV_LOG(ERR, "Algorithm %u is not supported.", xform->type); + goto err; + } + DRV_LOG(DEBUG, "New xform: gga ctrl1 = 0x%08X opcode = 0x%08X csum " + "type = %d.", xfrm->gga_ctrl1, xfrm->opcode, xfrm->csum_type); + xfrm->gga_ctrl1 = rte_cpu_to_be_32(xfrm->gga_ctrl1); + rte_spinlock_lock(&priv->xform_sl); + LIST_INSERT_HEAD(&priv->xform_list, xfrm, next); + rte_spinlock_unlock(&priv->xform_sl); + *private_xform = xfrm; + return 0; +err: + rte_free(xfrm); + return -ENOTSUP; +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, .dev_start = NULL, @@ -231,8 +348,8 @@ struct mlx5_compress_qp { .stats_reset = NULL, .queue_pair_setup = mlx5_compress_qp_setup, .queue_pair_release = mlx5_compress_qp_release, - .private_xform_create = NULL, - .private_xform_free = NULL, + .private_xform_create = mlx5_compress_xform_create, + .private_xform_free = mlx5_compress_xform_free, .stream_create = NULL, .stream_free = NULL, }; From patchwork Wed Jan 20 11:29:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86965 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 74D4EA0A05; Wed, 20 Jan 2021 12:34:34 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 62A57140D02; Wed, 20 Jan 2021 12:34:34 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id D3528140CF7 for ; Wed, 20 Jan 2021 12:34:32 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:34:27 +0200 Received: from pegasus25.mtr.labs.mlnx. 
(pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXE001381; Wed, 20 Jan 2021 13:34:27 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:31 +0000 Message-Id: <1611142175-409485-8-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 07/11] compress/mlx5: add memory region management X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Mellanox user space drivers don't deal with physical addresses; any mbuf virtual address is moved directly to the HW descriptor (WQE). The mapping between a virtual address and its physical address is saved in an MR that the kernel configures for the HW. Each MR has a key that the SW must also place in the WQE. When the SW sees an address that is not mapped, it extends the address range and creates an MR using a system call. Add memory region cache management: - 2-level cache per queue-pair, no locks. - 1 shared cache between all the queues, using a lock. This way, the MR key lookup per data-path address is optimized. Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/compress/mlx5/mlx5_compress.c | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index cc9be72..467b80a 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -48,6 +48,7 @@ struct mlx5_compress_priv { struct rte_compressdev_config dev_config; LIST_HEAD(xform_list, mlx5_compress_xform) xform_list; rte_spinlock_t xform_sl; + struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. 
*/ }; struct mlx5_compress_qp { @@ -56,6 +57,7 @@ struct mlx5_compress_qp { uint16_t pi; uint16_t ci; volatile uint64_t *uar_addr; + struct mlx5_mr_ctrl mr_ctrl; int socket_id; struct mlx5_devx_cq cq; struct mlx5_devx_sq sq; @@ -121,6 +123,7 @@ struct mlx5_compress_qp { if (opaq != NULL) rte_free(opaq); } + mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh); rte_free(qp); dev->data->queue_pairs[qp_id] = NULL; return 0; @@ -190,6 +193,13 @@ struct mlx5_compress_qp { rte_errno = ENOMEM; goto err; } + if (mlx5_mr_btree_init(&qp->mr_ctrl.cache_bh, MLX5_MR_BTREE_CACHE_N, + priv->dev_config.socket_id)) { + DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.", + (uint32_t)qp_id); + rte_errno = ENOMEM; + goto err; + } qp->entries_n = 1 << log_ops_n; qp->socket_id = socket_id; qp->qp_id = qp_id; @@ -523,6 +533,17 @@ struct mlx5_compress_qp { claim_zero(mlx5_glue->close_device(priv->ctx)); return -1; } + if (mlx5_mr_btree_init(&priv->mr_scache.cache, + MLX5_MR_BTREE_CACHE_N * 2, rte_socket_id()) != 0) { + DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); + mlx5_compress_hw_global_release(priv); + rte_compressdev_pmd_destroy(priv->cdev); + claim_zero(mlx5_glue->close_device(priv->ctx)); + rte_errno = ENOMEM; + return -rte_errno; + } + priv->mr_scache.reg_mr_cb = mlx5_common_verbs_reg_mr; + priv->mr_scache.dereg_mr_cb = mlx5_common_verbs_dereg_mr; pthread_mutex_lock(&priv_list_lock); TAILQ_INSERT_TAIL(&mlx5_compress_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); @@ -553,6 +574,7 @@ struct mlx5_compress_qp { TAILQ_REMOVE(&mlx5_compress_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); if (priv) { + mlx5_mr_release_cache(&priv->mr_scache); mlx5_compress_hw_global_release(priv); rte_compressdev_pmd_destroy(priv->cdev); claim_zero(mlx5_glue->close_device(priv->ctx)); From patchwork Wed Jan 20 11:29:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86966 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 27A80A0A05; Wed, 20 Jan 2021 12:34:41 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EBA37140D5D; Wed, 20 Jan 2021 12:34:38 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id C76EA140D5B for ; Wed, 20 Jan 2021 12:34:37 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:34:37 +0200 Received: from pegasus25.mtr.labs.mlnx. 
(pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXF001381; Wed, 20 Jan 2021 13:34:37 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:32 +0000 Message-Id: <1611142175-409485-9-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 08/11] compress/mlx5: add data-path functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add implementation for the next compress data-path functions: - dequeue_burst - enqueue_burst Add the next operation for starting \ stopping data-path: - dev_stop - dev_close Each compress API enqueued operation is translated to a WQE. Once WQE is done, the HW sends CQE to the CQ, when SW see the CQE the operation will be updated and dequeued. Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/compress/mlx5/mlx5_compress.c | 219 +++++++++++++++++++++++++++++++++- 1 file changed, 215 insertions(+), 4 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 467b80a..17aa206 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include "mlx5_compress_utils.h" @@ -348,10 +349,23 @@ struct mlx5_compress_qp { return -ENOTSUP; } +static void +mlx5_compress_dev_stop(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); +} + +static int +mlx5_compress_dev_start(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); + return 0; +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, - .dev_start = NULL, - .dev_stop = NULL, + .dev_start = mlx5_compress_dev_start, + .dev_stop = mlx5_compress_dev_stop, .dev_close = mlx5_compress_dev_close, .dev_infos_get = mlx5_compress_dev_info_get, .stats_get = NULL, @@ -364,6 +378,203 @@ struct mlx5_compress_qp { .stream_free = NULL, }; +static __rte_always_inline uint32_t +mlx5_compress_dseg_set(struct mlx5_compress_qp *qp, + volatile struct mlx5_wqe_dseg *restrict dseg, + struct rte_mbuf *restrict mbuf, + uint32_t offset, uint32_t len) +{ + uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset); + + dseg->bcount = rte_cpu_to_be_32(len); + dseg->lkey = mlx5_mr_addr2mr_bh(qp->priv->pd, 0, &qp->priv->mr_scache, + &qp->mr_ctrl, addr, + !!(mbuf->ol_flags & EXT_ATTACHED_MBUF)); + dseg->pbuf = rte_cpu_to_be_64(addr); + return dseg->lkey; +} + +static uint16_t +mlx5_compress_enqueue_burst(void *queue_pair, struct rte_comp_op **ops, + uint16_t nb_ops) +{ + struct mlx5_compress_qp *qp = queue_pair; + volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *) + qp->sq.wqes, *wqe; + struct mlx5_compress_xform *xform; + struct rte_comp_op *op; + uint16_t mask = qp->entries_n - 1; + uint16_t remain = qp->entries_n - (qp->pi - qp->ci); + uint16_t idx; + bool invalid; + + if (remain < nb_ops) + nb_ops = remain; + else + remain = nb_ops; + if (unlikely(remain == 0)) + return 0; + do { + idx = qp->pi & mask; + wqe = &wqes[idx]; + rte_prefetch0(&wqes[(qp->pi + 1) & mask]); + op = 
*ops++; + xform = op->private_xform; + /* + * Check operation arguments and error cases: + * - Operation type must be state-less. + * - Compress operation flush flag must be FULL or FINAL. + * - Source and destination buffers must be mapped internally. + */ + invalid = op->op_type != RTE_COMP_OP_STATELESS || + (xform->type == RTE_COMP_COMPRESS && + op->flush_flag < RTE_COMP_FLUSH_FULL); + if (unlikely(invalid || + (mlx5_compress_dseg_set(qp, &wqe->gather, + op->m_src, + op->src.offset, + op->src.length) == + UINT32_MAX) || + (mlx5_compress_dseg_set(qp, &wqe->scatter, + op->m_dst, + op->dst.offset, + rte_pktmbuf_pkt_len(op->m_dst) - + op->dst.offset) == + UINT32_MAX))) { + op->status = invalid ? RTE_COMP_OP_STATUS_INVALID_ARGS : + RTE_COMP_OP_STATUS_ERROR; + nb_ops -= remain; + if (unlikely(nb_ops == 0)) + return 0; + break; + } + wqe->gga_ctrl1 = xform->gga_ctrl1; + wqe->opcode = rte_cpu_to_be_32(xform->opcode + (qp->pi << 8)); + qp->ops[idx] = op; + qp->pi++; + } while (--remain); + rte_io_wmb(); + qp->sq.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi); + rte_wmb(); + *qp->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/ + rte_wmb(); + return nb_ops; +} + +static void +mlx5_compress_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe, + volatile uint32_t *opaq) +{ + size_t i; + + DRV_LOG(ERR, "Error cqe:"); + for (i = 0; i < sizeof(struct mlx5_err_cqe) >> 2; i += 4) + DRV_LOG(ERR, "%08X %08X %08X %08X", cqe[i], cqe[i + 1], + cqe[i + 2], cqe[i + 3]); + DRV_LOG(ERR, "\nError wqe:"); + for (i = 0; i < sizeof(struct mlx5_gga_wqe) >> 2; i += 4) + DRV_LOG(ERR, "%08X %08X %08X %08X", wqe[i], wqe[i + 1], + wqe[i + 2], wqe[i + 3]); + DRV_LOG(ERR, "\nError opaq:"); + for (i = 0; i < sizeof(struct mlx5_gga_compress_opaque) >> 2; i += 4) + DRV_LOG(ERR, "%08X %08X %08X %08X", opaq[i], opaq[i + 1], + opaq[i + 2], opaq[i + 3]); +} + +static void +mlx5_compress_cqe_err_handle(struct mlx5_compress_qp *qp, + struct rte_comp_op *op) +{ + const uint32_t idx = qp->ci & (qp->entries_n - 1); + volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *) + &qp->cq.cqes[idx]; + volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *) + qp->sq.wqes; + volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; + + op->status = RTE_COMP_OP_STATUS_ERROR; + op->consumed = 0; + op->produced = 0; + op->output_chksum = 0; + op->debug_status = rte_be_to_cpu_32(opaq[idx].syndrom) | + ((uint64_t)rte_be_to_cpu_32(cqe->syndrome) << 32); + mlx5_compress_dump_err_objs((volatile uint32_t *)cqe, + (volatile uint32_t *)&wqes[idx], + (volatile uint32_t *)&opaq[idx]); +} + +static uint16_t +mlx5_compress_dequeue_burst(void *queue_pair, struct rte_comp_op **ops, + uint16_t nb_ops) +{ + struct mlx5_compress_qp *qp = queue_pair; + volatile struct mlx5_compress_xform *restrict xform; + volatile struct mlx5_cqe *restrict cqe; + volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; + struct rte_comp_op *restrict op; + const unsigned int cq_size = qp->entries_n; + const unsigned int mask = cq_size - 1; + uint32_t idx; + uint32_t next_idx = qp->ci & mask; + const uint16_t max = RTE_MIN((uint16_t)(qp->pi - qp->ci), nb_ops); + uint16_t i = 0; + int ret; + + if (unlikely(max == 0)) + return 0; + do { + idx = next_idx; + next_idx = (qp->ci + 1) & mask; + rte_prefetch0(&qp->cq.cqes[next_idx]); + rte_prefetch0(qp->ops[next_idx]); + op = qp->ops[idx]; + cqe = &qp->cq.cqes[idx]; + ret = check_cqe(cqe, cq_size, qp->ci); + /* + * Be sure owner read is done before any other cookie 
field or + * opaque field. + */ + rte_io_rmb(); + if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) { + if (likely(ret == MLX5_CQE_STATUS_HW_OWN)) + break; + mlx5_compress_cqe_err_handle(qp, op); + } else { + xform = op->private_xform; + op->status = RTE_COMP_OP_STATUS_SUCCESS; + op->consumed = op->src.length; + op->produced = rte_be_to_cpu_32(cqe->byte_cnt); + MLX5_ASSERT(cqe->byte_cnt == + qp->opaque_buf[idx].scattered_length); + switch (xform->csum_type) { + case RTE_COMP_CHECKSUM_CRC32: + op->output_chksum = (uint64_t)rte_be_to_cpu_32 + (opaq[idx].crc32); + break; + case RTE_COMP_CHECKSUM_ADLER32: + op->output_chksum = (uint64_t)rte_be_to_cpu_32 + (opaq[idx].adler32) << 32; + break; + case RTE_COMP_CHECKSUM_CRC32_ADLER32: + op->output_chksum = (uint64_t)rte_be_to_cpu_32 + (opaq[idx].crc32) | + ((uint64_t)rte_be_to_cpu_32 + (opaq[idx].adler32) << 32); + break; + default: + break; + } + } + ops[i++] = op; + qp->ci++; + } while (i < max); + if (likely(i != 0)) { + rte_io_wmb(); + qp->cq.db_rec[0] = rte_cpu_to_be_32(qp->ci); + } + return i; +} + static struct ibv_device * mlx5_compress_get_ib_device_match(struct rte_pci_addr *addr) { @@ -520,8 +731,8 @@ struct mlx5_compress_qp { DRV_LOG(INFO, "Compress device %s was created successfully.", ibv->name); cdev->dev_ops = &mlx5_compress_ops; - cdev->dequeue_burst = NULL; - cdev->enqueue_burst = NULL; + cdev->dequeue_burst = mlx5_compress_dequeue_burst; + cdev->enqueue_burst = mlx5_compress_enqueue_burst; cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; priv = cdev->data->dev_private; priv->ctx = ctx; From patchwork Wed Jan 20 11:29:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86967 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8284CA0A05; Wed, 20 Jan 2021 12:34:51 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 703E6140CF7; Wed, 20 Jan 2021 12:34:49 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 23790140CF1 for ; Wed, 20 Jan 2021 12:34:48 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:34:47 +0200 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXG001381; Wed, 20 Jan 2021 13:34:47 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:33 +0000 Message-Id: <1611142175-409485-10-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 09/11] compress/mlx5: add statistics operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for the following statistics operations: - stats_get - stats_reset These statistics are counted by the SW data-path. 
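For reference, these per-queue counters are exposed to applications through the generic compressdev API; the PMD only aggregates them in its callbacks. A minimal application-side sketch follows (not part of this patch; it assumes dev_id refers to an already configured compress device):

#include <inttypes.h>
#include <stdio.h>
#include <rte_compressdev.h>

/* Print the device-wide counters aggregated from all queue pairs,
 * then clear them via the stats_reset callback.
 */
static void
example_dump_and_reset_stats(uint8_t dev_id)
{
	struct rte_compressdev_stats stats = { 0 };

	if (rte_compressdev_stats_get(dev_id, &stats) != 0)
		return;
	printf("enqueued=%" PRIu64 " dequeued=%" PRIu64
	       " enq_err=%" PRIu64 " deq_err=%" PRIu64 "\n",
	       stats.enqueued_count, stats.dequeued_count,
	       stats.enqueue_err_count, stats.dequeue_err_count);
	rte_compressdev_stats_reset(dev_id);
}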
Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/compress/mlx5/mlx5_compress.c | 36 +++++++++++++++++++++++++++++++++-- 1 file changed, 34 insertions(+), 2 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 17aa206..7a43d9e 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -65,6 +65,7 @@ struct mlx5_compress_qp { struct mlx5_pmd_mr opaque_mr; struct rte_comp_op **ops; struct mlx5_compress_priv *priv; + struct rte_compressdev_stats stats; }; TAILQ_HEAD(mlx5_compress_privs, mlx5_compress_priv) mlx5_compress_priv_list = @@ -362,14 +363,42 @@ struct mlx5_compress_qp { return 0; } +static void +mlx5_compress_stats_get(struct rte_compressdev *dev, + struct rte_compressdev_stats *stats) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id]; + + stats->enqueued_count += qp->stats.enqueued_count; + stats->dequeued_count += qp->stats.dequeued_count; + stats->enqueue_err_count += qp->stats.enqueue_err_count; + stats->dequeue_err_count += qp->stats.dequeue_err_count; + } +} + +static void +mlx5_compress_stats_reset(struct rte_compressdev *dev) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id]; + + memset(&qp->stats, 0, sizeof(qp->stats)); + } +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, .dev_start = mlx5_compress_dev_start, .dev_stop = mlx5_compress_dev_stop, .dev_close = mlx5_compress_dev_close, .dev_infos_get = mlx5_compress_dev_info_get, - .stats_get = NULL, - .stats_reset = NULL, + .stats_get = mlx5_compress_stats_get, + .stats_reset = mlx5_compress_stats_reset, .queue_pair_setup = mlx5_compress_qp_setup, .queue_pair_release = mlx5_compress_qp_release, .private_xform_create = mlx5_compress_xform_create, @@ -453,6 +482,7 @@ struct mlx5_compress_qp { qp->ops[idx] = op; qp->pi++; } while (--remain); + qp->stats.enqueued_count += nb_ops; rte_io_wmb(); qp->sq.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi); rte_wmb(); @@ -501,6 +531,7 @@ struct mlx5_compress_qp { mlx5_compress_dump_err_objs((volatile uint32_t *)cqe, (volatile uint32_t *)&wqes[idx], (volatile uint32_t *)&opaq[idx]); + qp->stats.dequeue_err_count++; } static uint16_t @@ -571,6 +602,7 @@ struct mlx5_compress_qp { if (likely(i != 0)) { rte_io_wmb(); qp->cq.db_rec[0] = rte_cpu_to_be_32(qp->ci); + qp->stats.dequeued_count += i; } return i; } From patchwork Wed Jan 20 11:29:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86968 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7360FA0A05; Wed, 20 Jan 2021 12:35:00 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CC2EE140D01; Wed, 20 Jan 2021 12:34:58 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 090CF140CF8 for ; Wed, 20 Jan 2021 12:34:57 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:34:57 +0200 Received: from 
pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXH001381; Wed, 20 Jan 2021 13:34:57 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:34 +0000 Message-Id: <1611142175-409485-11-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 10/11] compress/mlx5: support 32-bit systems X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In order to support 32-bit systems, the 8B doorbell write should be done by two 4B stores. The order between the stores is important, so a memory barrier should be used between them. The doorbell address is shared between all the queues, so a lock should wrap the two stores. Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/compress/mlx5/mlx5_compress.c | 32 ++++++++++++++++++++++++++++---- 1 file changed, 28 insertions(+), 4 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 7a43d9e..f7ef913 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -50,6 +50,10 @@ struct mlx5_compress_priv { LIST_HEAD(xform_list, mlx5_compress_xform) xform_list; rte_spinlock_t xform_sl; struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */ + volatile uint64_t *uar_addr; +#ifndef RTE_ARCH_64 + rte_spinlock_t uar32_sl; +#endif /* RTE_ARCH_64 */ }; struct mlx5_compress_qp { @@ -57,7 +61,6 @@ struct mlx5_compress_qp { uint16_t entries_n; uint16_t pi; uint16_t ci; - volatile uint64_t *uar_addr; struct mlx5_mr_ctrl mr_ctrl; int socket_id; struct mlx5_devx_cq cq; @@ -208,8 +211,6 @@ struct mlx5_compress_qp { qp->priv = priv; qp->ops = (struct rte_comp_op **)RTE_ALIGN((uintptr_t)(qp + 1), RTE_CACHE_LINE_SIZE); - qp->uar_addr = mlx5_os_get_devx_uar_reg_addr(priv->uar); - MLX5_ASSERT(qp->uar_addr); if (mlx5_common_verbs_reg_mr(priv->pd, opaq_buf, qp->entries_n * sizeof(struct mlx5_gga_compress_opaque), &qp->opaque_mr) != 0) { @@ -423,6 +424,24 @@ struct mlx5_compress_qp { return dseg->lkey; } +/* + * Provide safe 64bit store operation to mlx5 UAR region for both 32bit and + * 64bit architectures. 
+ */ +static __rte_always_inline void +mlx5_compress_uar_write(uint64_t val, struct mlx5_compress_priv *priv) +{ +#ifdef RTE_ARCH_64 + *priv->uar_addr = val; +#else /* !RTE_ARCH_64 */ + rte_spinlock_lock(&priv->uar32_sl); + *(volatile uint32_t *)priv->uar_addr = val; + rte_io_wmb(); + *((volatile uint32_t *)priv->uar_addr + 1) = val >> 32; + rte_spinlock_unlock(&priv->uar32_sl); +#endif +} + static uint16_t mlx5_compress_enqueue_burst(void *queue_pair, struct rte_comp_op **ops, uint16_t nb_ops) @@ -486,7 +505,7 @@ struct mlx5_compress_qp { rte_io_wmb(); qp->sq.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi); rte_wmb(); - *qp->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/ + mlx5_compress_uar_write(*(volatile uint64_t *)wqe, qp->priv); rte_wmb(); return nb_ops; } @@ -692,6 +711,11 @@ struct mlx5_compress_qp { DRV_LOG(ERR, "Failed to allocate UAR."); return -1; } + priv->uar_addr = mlx5_os_get_devx_uar_reg_addr(priv->uar); + MLX5_ASSERT(priv->uar_addr); +#ifndef RTE_ARCH_64 + rte_spinlock_init(&priv->uar32_sl); +#endif /* RTE_ARCH_64 */ return 0; } From patchwork Wed Jan 20 11:29:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86969 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 00F4AA0A05; Wed, 20 Jan 2021 12:35:11 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7910C140CF9; Wed, 20 Jan 2021 12:35:09 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 03E93140CF8 for ; Wed, 20 Jan 2021 12:35:07 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 20 Jan 2021 13:35:05 +0200 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10KBTfXI001381; Wed, 20 Jan 2021 13:35:05 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 20 Jan 2021 11:29:35 +0000 Message-Id: <1611142175-409485-12-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1611142175-409485-1-git-send-email-matan@nvidia.com> References: <1610554690-411627-1-git-send-email-matan@nvidia.com> <1611142175-409485-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v3 11/11] compress/mlx5: add the supported capabilities X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add all the capabilities supported by the device. Add the driver documentation. 
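For context, these capability entries are what applications see through the generic compressdev query API; nothing mlx5-specific is needed on the application side. A minimal sketch follows (not part of this patch; dev_id is assumed to be a probed compress device):

#include <errno.h>
#include <stdio.h>
#include <rte_comp.h>
#include <rte_compressdev.h>

/* Look up the DEFLATE capability entry and report the advertised
 * window-size range and dynamic Huffman support.
 */
static int
example_query_deflate_caps(uint8_t dev_id)
{
	const struct rte_compressdev_capabilities *cap =
		rte_compressdev_capability_get(dev_id, RTE_COMP_ALGO_DEFLATE);

	if (cap == NULL)
		return -ENOTSUP; /* DEFLATE is not advertised. */
	/* window_size holds log2 values, e.g. 10..15 -> 1KB..32KB. */
	printf("window log2 %u..%u, dynamic Huffman: %s\n",
	       (unsigned int)cap->window_size.min,
	       (unsigned int)cap->window_size.max,
	       (cap->comp_feature_flags & RTE_COMP_FF_HUFFMAN_DYNAMIC) ?
	       "yes" : "no");
	return 0;
}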
Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- doc/guides/compressdevs/features/mlx5.ini | 13 +++++ doc/guides/compressdevs/index.rst | 1 + doc/guides/compressdevs/mlx5.rst | 94 +++++++++++++++++++++++++++++++ doc/guides/rel_notes/release_21_02.rst | 6 ++ drivers/compress/mlx5/mlx5_compress.c | 24 +++++++- 5 files changed, 136 insertions(+), 2 deletions(-) create mode 100644 doc/guides/compressdevs/features/mlx5.ini create mode 100644 doc/guides/compressdevs/mlx5.rst diff --git a/doc/guides/compressdevs/features/mlx5.ini b/doc/guides/compressdevs/features/mlx5.ini new file mode 100644 index 0000000..891ce47 --- /dev/null +++ b/doc/guides/compressdevs/features/mlx5.ini @@ -0,0 +1,13 @@ +; +; Refer to default.ini for the full list of available PMD features. +; +; Supported features of 'MLX5' compression driver. +; +[Features] +HW Accelerated = Y +Deflate = Y +Adler32 = Y +Crc32 = Y +Adler32&Crc32 = Y +Fixed = Y +Dynamic = Y diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst index 1f37e26..54a3ef4 100644 --- a/doc/guides/compressdevs/index.rst +++ b/doc/guides/compressdevs/index.rst @@ -11,6 +11,7 @@ Compression Device Drivers overview isal + mlx5 octeontx qat_comp zlib diff --git a/doc/guides/compressdevs/mlx5.rst b/doc/guides/compressdevs/mlx5.rst new file mode 100644 index 0000000..c5433e9 --- /dev/null +++ b/doc/guides/compressdevs/mlx5.rst @@ -0,0 +1,94 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2021 Mellanox Technologies, Ltd + +.. include:: <isonum.txt> + +MLX5 compress driver +==================== + +The MLX5 compress driver library +(**librte_compress_mlx5**) provides support for **Mellanox BlueField 2** +families of 25/50/100/200 Gb/s adapters. + +Design +------ + +This PMD configures the compress, decompress and DMA engines. + +GGAs (Generic Global Accelerators) are offload engines that can be used +to do memory to memory tasks on data. +These engines are part of the ARM complex of the BlueField chip, and as +such they do not use NIC related resources (e.g. RX/TX bandwidth). +They do share the same PCI and memory bandwidth. + +So, using the BlueField device (starting from BlueField 2), the compress +class operations can be supported in parallel to the net, vDPA and +RegEx class operations. + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware specifications that allow handling virtual memory +addresses directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +The PMD uses libibverbs and libmlx5 to access the device firmware +or directly the hardware components. +There are different levels of objects and bypassing abilities +to get the best performance: + +- Verbs is a complete high-level generic API. +- Direct Verbs is a device-specific API. +- DevX allows access to firmware objects. + +Enabling librte_compress_mlx5 causes DPDK applications to be linked against +libibverbs. + +A Mellanox mlx5 PCI device can be probed by a number of different device +classes, for example net / vDPA / RegEx. To select the compress PMD, +``class=compress`` should be specified as a device parameter. The compress +device can be probed and used together with other Mellanox device classes by +adding more class names to the parameter. +For example: ``class=net:compress`` will probe both the net PMD and the compress +PMD. 
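+ +For example, a hypothetical command line that limits a device to the compress +class only (the PCI address is illustrative and the application options after +``--`` are elided):: + + dpdk-test-compress-perf -a 0000:03:00.1,class=compress -- [app options]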
+ +Features -------- + +Compress mlx5 PMD has support for: + +Compression/Decompression algorithm: + +* DEFLATE. + +NULL algorithm for DMA operations. + +Huffman code type: + +* FIXED. +* DYNAMIC. + +Window size support: + +1KB, 2KB, 4KB, 8KB, 16KB and 32KB. + +Shareable transformation. + +Checksum generation: + +* CRC32, Adler32 and combined checksum. + +Limitations +----------- + +* Scatter-Gather, SHA and Stateful are not supported. +* Non-compressed block is not supported in compress (supported in decompress). + +Supported NICs +-------------- + +* Mellanox\ |reg| BlueField 2 SmartNIC + +Prerequisites +------------- + +- Mellanox OFED version: **5.2** + see :doc:`../../nics/mlx5` guide for more Mellanox OFED details. diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst index ae36b6a..3550f4e 100644 --- a/doc/guides/rel_notes/release_21_02.rst +++ b/doc/guides/rel_notes/release_21_02.rst @@ -51,6 +51,12 @@ New Features * Other libs * Apps, Examples, Tools (if significant) +* **Added mlx5 compress PMD.** + + Added a new compress PMD for BlueField 2 adapters. + + See the :doc:`../compressdevs/mlx5` guide for more details. + This section is a comment. Do not overwrite or remove it. Also, make sure to start the actual text at the margin. ======================================================= diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index f7ef913..a274366 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -77,8 +77,28 @@ struct mlx5_compress_qp { int mlx5_compress_logtype; -const struct rte_compressdev_capabilities mlx5_caps[RTE_COMP_ALGO_LIST_END]; - +static const struct rte_compressdev_capabilities mlx5_caps[] = { + { + .algo = RTE_COMP_ALGO_NULL, + .comp_feature_flags = RTE_COMP_FF_ADLER32_CHECKSUM | + RTE_COMP_FF_CRC32_CHECKSUM | + RTE_COMP_FF_CRC32_ADLER32_CHECKSUM | + RTE_COMP_FF_SHAREABLE_PRIV_XFORM, + }, + { + .algo = RTE_COMP_ALGO_DEFLATE, + .comp_feature_flags = RTE_COMP_FF_ADLER32_CHECKSUM | + RTE_COMP_FF_CRC32_CHECKSUM | + RTE_COMP_FF_CRC32_ADLER32_CHECKSUM | + RTE_COMP_FF_SHAREABLE_PRIV_XFORM | + RTE_COMP_FF_HUFFMAN_FIXED | + RTE_COMP_FF_HUFFMAN_DYNAMIC, + .window_size = {.min = 10, .max = 15, .increment = 1}, + }, + { + .algo = RTE_COMP_ALGO_LIST_END, + } +}; static void mlx5_compress_dev_info_get(struct rte_compressdev *dev,