From patchwork Wed Jan 13 16:18:01 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:01 +0000
Message-Id: <1610554690-411627-2-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 01/10] common/mlx5: add DevX attributes for compress

Add the DevX attributes for the compress-related engines: compress, decompress and DMA. Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/common/mlx5/mlx5_devx_cmds.c | 10 ++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 7 +++++++ drivers/common/mlx5/mlx5_prm.h | 18 ++++++++++++++++-- 3 files changed, 33 insertions(+), 2 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 4d01f52..f3ed789 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -725,6 +725,16 @@ struct mlx5_devx_obj * attr->log_max_pd = MLX5_GET(cmd_hca_cap, hcattr, log_max_pd); attr->log_max_srq = MLX5_GET(cmd_hca_cap, hcattr, log_max_srq); attr->log_max_srq_sz = MLX5_GET(cmd_hca_cap, hcattr, log_max_srq_sz); + attr->mmo_dma_en = MLX5_GET(cmd_hca_cap, hcattr, dma_mmo); + attr->mmo_compress_en = MLX5_GET(cmd_hca_cap, hcattr, compress); + attr->mmo_decompress_en = MLX5_GET(cmd_hca_cap, hcattr, decompress); + attr->compress_min_block_size = MLX5_GET(cmd_hca_cap, hcattr, + compress_min_block_size); + attr->log_max_mmo_dma = MLX5_GET(cmd_hca_cap, hcattr, log_dma_mmo_size); + attr->log_max_mmo_compress = MLX5_GET(cmd_hca_cap, hcattr, + log_compress_mmo_size); + attr->log_max_mmo_decompress = MLX5_GET(cmd_hca_cap, hcattr, + log_decompress_mmo_size); if (attr->qos.sup) { MLX5_SET(query_hca_cap_in, in, op_mod, MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP | diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index 8d993df..31ad18a 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -127,6 +127,13 @@ struct mlx5_hca_attr { uint32_t log_max_srq; uint32_t
log_max_srq_sz; uint32_t rss_ind_tbl_cap; + uint32_t mmo_dma_en:1; + uint32_t mmo_compress_en:1; + uint32_t mmo_decompress_en:1; + uint32_t compress_min_block_size:4; + uint32_t log_max_mmo_dma:5; + uint32_t log_max_mmo_compress:5; + uint32_t log_max_mmo_decompress:5; }; struct mlx5_devx_wq_attr { diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9e2d1d0..e489a0a 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1120,7 +1120,15 @@ enum { struct mlx5_ifc_cmd_hca_cap_bits { u8 reserved_at_0[0x30]; u8 vhca_id[0x10]; - u8 reserved_at_40[0x40]; + u8 reserved_at_40[0x20]; + u8 reserved_at_60[0x3]; + u8 log_regexp_scatter_gather_size[0x5]; + u8 reserved_at_68[0x3]; + u8 log_dma_mmo_size[0x5]; + u8 reserved_at_70[0x3]; + u8 log_compress_mmo_size[0x5]; + u8 reserved_at_78[0x3]; + u8 log_decompress_mmo_size[0x5]; u8 log_max_srq_sz[0x8]; u8 log_max_qp_sz[0x8]; u8 reserved_at_90[0x9]; @@ -1168,7 +1176,13 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 log_max_ra_res_dc[0x6]; u8 reserved_at_140[0xa]; u8 log_max_ra_req_qp[0x6]; - u8 reserved_at_150[0xa]; + u8 rtr2rts_qp_counters_set_id[0x1]; + u8 rts2rts_udp_sport[0x1]; + u8 rts2rts_lag_tx_port_affinity[0x1]; + u8 dma_mmo[0x1]; + u8 compress_min_block_size[0x4]; + u8 compress[0x1]; + u8 decompress[0x1]; u8 log_max_ra_res_qp[0x6]; u8 end_pad[0x1]; u8 cc_query_allowed[0x1];

From patchwork Wed Jan 13 16:18:02 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:02 +0000
Message-Id: <1610554690-411627-3-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 02/10] drivers: introduce mlx5 compress PMD

Add a new compress PMD for Mellanox devices. The MLX5 compress driver library provides support for Mellanox BlueField 2 families of 25/50/100/200 Gb/s adapters. Using the BlueField 2 device, the compress class operations can be run in parallel with the net, vdpa and regex class operations.
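To illustrate the multi-class model, here is a minimal application sketch; it assumes the common mlx5 "class" devarg mechanism (shared with the vdpa and regex classes) is used to select which classes to probe, and the PCI address is illustrative:

/*
 * Minimal sketch, assuming the mlx5 "class" devarg selects the driver
 * classes probed on one device; the PCI address is made up.
 * E.g. started as: ./app -w 0000:06:00.0,class=net:compress
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_compressdev.h>

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;
	/* One compressdev is expected per device probed as compress. */
	printf("%u compress device(s) probed.\n", rte_compressdev_count());
	return 0;
}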
This driver depends on rdma-core, like the other mlx5 PMDs, and it uses mlx5 DevX to create HW objects directly through the FW. Add the probing functions, PCI bus connectivity, HW capability checks and some basic object preparations. Signed-off-by: Matan Azrad --- MAINTAINERS | 4 + drivers/common/mlx5/mlx5_common.h | 1 + drivers/common/mlx5/mlx5_common_pci.c | 7 + drivers/common/mlx5/mlx5_common_pci.h | 36 ++-- drivers/compress/meson.build | 2 +- drivers/compress/mlx5/meson.build | 26 +++ drivers/compress/mlx5/mlx5_compress.c | 297 ++++++++++++++++++++++++++++ drivers/compress/mlx5/mlx5_compress_utils.h | 20 ++ drivers/compress/mlx5/version.map | 3 + 9 files changed, 377 insertions(+), 19 deletions(-) create mode 100644 drivers/compress/mlx5/meson.build create mode 100644 drivers/compress/mlx5/mlx5_compress.c create mode 100644 drivers/compress/mlx5/mlx5_compress_utils.h create mode 100644 drivers/compress/mlx5/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 6787b15..f2badd9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1133,6 +1133,10 @@ F: drivers/compress/zlib/ F: doc/guides/compressdevs/zlib.rst F: doc/guides/compressdevs/features/zlib.ini +Mellanox mlx5 +M: Matan Azrad +F: drivers/compress/mlx5/ + RegEx Drivers ------------- diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index e35188d..3855582 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -217,6 +217,7 @@ enum mlx5_class { MLX5_CLASS_NET = RTE_BIT64(0), MLX5_CLASS_VDPA = RTE_BIT64(1), MLX5_CLASS_REGEX = RTE_BIT64(2), + MLX5_CLASS_COMPRESS = RTE_BIT64(3), }; #define MLX5_DBR_SIZE RTE_CACHE_LINE_SIZE diff --git a/drivers/common/mlx5/mlx5_common_pci.c b/drivers/common/mlx5/mlx5_common_pci.c index 5208972..2b65768 100644 --- a/drivers/common/mlx5/mlx5_common_pci.c +++ b/drivers/common/mlx5/mlx5_common_pci.c @@ -28,14 +28,21 @@ static TAILQ_HEAD(mlx5_pci_devices_head, mlx5_pci_device) devices_list = { .name = "vdpa", .driver_class = MLX5_CLASS_VDPA }, { .name = "net", .driver_class = MLX5_CLASS_NET }, { .name = "regex", .driver_class = MLX5_CLASS_REGEX }, + { .name = "compress", .driver_class = MLX5_CLASS_COMPRESS }, }; static const unsigned int mlx5_class_combinations[] = { MLX5_CLASS_NET, MLX5_CLASS_VDPA, MLX5_CLASS_REGEX, + MLX5_CLASS_COMPRESS, MLX5_CLASS_NET | MLX5_CLASS_REGEX, MLX5_CLASS_VDPA | MLX5_CLASS_REGEX, + MLX5_CLASS_NET | MLX5_CLASS_COMPRESS, + MLX5_CLASS_VDPA | MLX5_CLASS_COMPRESS, + MLX5_CLASS_REGEX | MLX5_CLASS_COMPRESS, + MLX5_CLASS_NET | MLX5_CLASS_REGEX | MLX5_CLASS_COMPRESS, + MLX5_CLASS_VDPA | MLX5_CLASS_REGEX | MLX5_CLASS_COMPRESS, /* New class combination should be added here. */ }; diff --git a/drivers/common/mlx5/mlx5_common_pci.h b/drivers/common/mlx5/mlx5_common_pci.h index 41b73e1..de89bb9 100644 --- a/drivers/common/mlx5/mlx5_common_pci.h +++ b/drivers/common/mlx5/mlx5_common_pci.h @@ -9,26 +9,26 @@ * @file * * RTE Mellanox PCI Driver Interface - * Mellanox ConnectX PCI device supports multiple class (net/vdpa/regex) - * devices. This layer enables creating such multiple class of devices on a - * single PCI device by allowing to bind multiple class specific device + * Mellanox ConnectX PCI device supports multiple class: net,vdpa,regex and + * compress devices. This layer enables creating such multiple class of devices + * on a single PCI device by allowing to bind multiple class specific device * driver to attach to mlx5_pci driver.
* - * ----------- ------------ ------------- - * | mlx5 | | mlx5 | | mlx5 | - * | net pmd | | vdpa pmd | | regex pmd | - * ----------- ------------ ------------- - * \ | / - * \ | / - * \ -------------- / - * \______| mlx5 |_____ / - * | pci common | - * -------------- - * | - * ----------- - * | mlx5 | - * | pci dev | - * ----------- + * ----------- ------------ ------------- ---------------- + * | mlx5 | | mlx5 | | mlx5 | | mlx5 | + * | net pmd | | vdpa pmd | | regex pmd | | compress pmd | + * ----------- ------------ ------------- ---------------- + * \ \ / / + * \ \ / / + * \ \_--------------_/ / + * \_______________| mlx5 |_______________/ + * | pci common | + * -------------- + * | + * ----------- + * | mlx5 | + * | pci dev | + * ----------- * * - mlx5 pci driver binds to mlx5 PCI devices defined by PCI * ID table of all related mlx5 PCI devices. diff --git a/drivers/compress/meson.build b/drivers/compress/meson.build index 33f5e33..d8f3ddb 100644 --- a/drivers/compress/meson.build +++ b/drivers/compress/meson.build @@ -5,7 +5,7 @@ if is_windows subdir_done() endif -drivers = ['isal', 'octeontx', 'zlib'] +drivers = ['isal', 'octeontx', 'zlib', 'mlx5'] std_deps = ['compressdev'] # compressdev pulls in all other needed deps config_flag_fmt = 'RTE_LIBRTE_PMD_@0@' diff --git a/drivers/compress/mlx5/meson.build b/drivers/compress/mlx5/meson.build new file mode 100644 index 0000000..2a6dc3f --- /dev/null +++ b/drivers/compress/mlx5/meson.build @@ -0,0 +1,26 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2021 Mellanox Technologies, Ltd + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +fmt_name = 'mlx5_compress' +deps += ['common_mlx5', 'eal', 'compressdev'] +sources = files( + 'mlx5_compress.c', +) +cflags_options = [ + '-std=c11', + '-Wno-strict-prototypes', + '-D_BSD_SOURCE', + '-D_DEFAULT_SOURCE', + '-D_XOPEN_SOURCE=600' +] +foreach option:cflags_options + if cc.has_argument(option) + cflags += option + endif +endforeach diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c new file mode 100644 index 0000000..639dd61 --- /dev/null +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -0,0 +1,297 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 Mellanox Technologies, Ltd + */ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "mlx5_compress_utils.h" + +#define MLX5_COMPRESS_DRIVER_NAME mlx5_compress +#define MLX5_COMPRESS_LOG_NAME pmd.compress.mlx5 + +struct mlx5_compress_priv { + TAILQ_ENTRY(mlx5_compress_priv) next; + struct ibv_context *ctx; /* Device context. */ + struct rte_pci_device *pci_dev; + struct rte_compressdev *cdev; + void *uar; + uint32_t pdn; /* Protection Domain number. */ + uint8_t min_block_size; + /* Minimum huffman block size supported by the device. 
*/ + struct ibv_pd *pd; +}; + +TAILQ_HEAD(mlx5_compress_privs, mlx5_compress_priv) mlx5_compress_priv_list = + TAILQ_HEAD_INITIALIZER(mlx5_compress_priv_list); +static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; + +int mlx5_compress_logtype; + +static struct rte_compressdev_ops mlx5_compress_ops = { + .dev_configure = NULL, + .dev_start = NULL, + .dev_stop = NULL, + .dev_close = NULL, + .dev_infos_get = NULL, + .stats_get = NULL, + .stats_reset = NULL, + .queue_pair_setup = NULL, + .queue_pair_release = NULL, + .private_xform_create = NULL, + .private_xform_free = NULL, + .stream_create = NULL, + .stream_free = NULL, +}; + +static struct ibv_device * +mlx5_compress_get_ib_device_match(struct rte_pci_addr *addr) +{ + int n; + struct ibv_device **ibv_list = mlx5_glue->get_device_list(&n); + struct ibv_device *ibv_match = NULL; + + if (ibv_list == NULL) { + rte_errno = ENOSYS; + return NULL; + } + while (n-- > 0) { + struct rte_pci_addr paddr; + + DRV_LOG(DEBUG, "Checking device \"%s\"..", ibv_list[n]->name); + if (mlx5_dev_to_pci_addr(ibv_list[n]->ibdev_path, &paddr) != 0) + continue; + if (rte_pci_addr_cmp(addr, &paddr) != 0) + continue; + ibv_match = ibv_list[n]; + break; + } + if (ibv_match == NULL) + rte_errno = ENOENT; + mlx5_glue->free_device_list(ibv_list); + return ibv_match; +} + +static void +mlx5_compress_hw_global_release(struct mlx5_compress_priv *priv) +{ + if (priv->pd != NULL) { + claim_zero(mlx5_glue->dealloc_pd(priv->pd)); + priv->pd = NULL; + } + if (priv->uar != NULL) { + mlx5_glue->devx_free_uar(priv->uar); + priv->uar = NULL; + } +} + +static int +mlx5_compress_pd_create(struct mlx5_compress_priv *priv) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + struct mlx5dv_obj obj; + struct mlx5dv_pd pd_info; + int ret; + + priv->pd = mlx5_glue->alloc_pd(priv->ctx); + if (priv->pd == NULL) { + DRV_LOG(ERR, "Failed to allocate PD."); + return errno ? -errno : -ENOMEM; + } + obj.pd.in = priv->pd; + obj.pd.out = &pd_info; + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret != 0) { + DRV_LOG(ERR, "Fail to get PD object info."); + mlx5_glue->dealloc_pd(priv->pd); + priv->pd = NULL; + return -errno; + } + priv->pdn = pd_info.pdn; + return 0; +#else + (void)priv; + DRV_LOG(ERR, "Cannot get pdn - no DV support."); + return -ENOTSUP; +#endif /* HAVE_IBV_FLOW_DV_SUPPORT */ +} + +static int +mlx5_compress_hw_global_prepare(struct mlx5_compress_priv *priv) +{ + if (mlx5_compress_pd_create(priv) != 0) + return -1; + priv->uar = mlx5_devx_alloc_uar(priv->ctx, -1); + if (priv->uar == NULL || mlx5_os_get_devx_uar_reg_addr(priv->uar) == + NULL) { + rte_errno = errno; + claim_zero(mlx5_glue->dealloc_pd(priv->pd)); + DRV_LOG(ERR, "Failed to allocate UAR."); + return -1; + } + return 0; +} + +/** + * DPDK callback to register a PCI device. + * + * This function spawns compress device out of a given PCI device. + * + * @param[in] pci_drv + * PCI driver structure (mlx5_compress_driver). + * @param[in] pci_dev + * PCI device information. + * + * @return + * 0 on success, 1 to skip this driver, a negative errno value otherwise + * and rte_errno is set. 
+ */ +static int +mlx5_compress_pci_probe(struct rte_pci_driver *pci_drv, + struct rte_pci_device *pci_dev) +{ + struct ibv_device *ibv; + struct rte_compressdev *cdev; + struct ibv_context *ctx; + struct mlx5_compress_priv *priv; + struct mlx5_hca_attr att = { 0 }; + struct rte_compressdev_pmd_init_params init_params = { + .name = "", + .socket_id = pci_dev->device.numa_node, + }; + + RTE_SET_USED(pci_drv); + ibv = mlx5_compress_get_ib_device_match(&pci_dev->addr); + if (ibv == NULL) { + DRV_LOG(ERR, "No matching IB device for PCI slot " + PCI_PRI_FMT ".", pci_dev->addr.domain, + pci_dev->addr.bus, pci_dev->addr.devid, + pci_dev->addr.function); + return -rte_errno; + } + DRV_LOG(INFO, "PCI information matches for device \"%s\".", ibv->name); + ctx = mlx5_glue->dv_open_device(ibv); + if (ctx == NULL) { + DRV_LOG(ERR, "Failed to open IB device \"%s\".", ibv->name); + rte_errno = ENODEV; + return -rte_errno; + } + if (mlx5_devx_cmd_query_hca_attr(ctx, &att) != 0 || + att.mmo_compress_en == 0 || att.mmo_decompress_en == 0 || + att.mmo_dma_en == 0) { + DRV_LOG(ERR, "Not enough capabilities to support compress " + "operations, maybe old FW/OFED version?"); + claim_zero(mlx5_glue->close_device(ctx)); + rte_errno = ENOTSUP; + return -ENOTSUP; + } + cdev = rte_compressdev_pmd_create(ibv->name, &pci_dev->device, + sizeof(*priv), &init_params); + if (cdev == NULL) { + DRV_LOG(ERR, "Failed to create device \"%s\".", ibv->name); + claim_zero(mlx5_glue->close_device(ctx)); + return -ENODEV; + } + DRV_LOG(INFO, + "Compress device %s was created successfully.", ibv->name); + cdev->dev_ops = &mlx5_compress_ops; + cdev->dequeue_burst = NULL; + cdev->enqueue_burst = NULL; + cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; + priv = cdev->data->dev_private; + priv->ctx = ctx; + priv->pci_dev = pci_dev; + priv->cdev = cdev; + priv->min_block_size = att.compress_min_block_size; + if (mlx5_compress_hw_global_prepare(priv) != 0) { + rte_compressdev_pmd_destroy(priv->cdev); + claim_zero(mlx5_glue->close_device(priv->ctx)); + return -1; + } + pthread_mutex_lock(&priv_list_lock); + TAILQ_INSERT_TAIL(&mlx5_compress_priv_list, priv, next); + pthread_mutex_unlock(&priv_list_lock); + return 0; +} + +/** + * DPDK callback to remove a PCI device. + * + * This function removes all compress devices belong to a given PCI device. + * + * @param[in] pci_dev + * Pointer to the PCI device. + * + * @return + * 0 on success, the function cannot fail. 
+ */ +static int +mlx5_compress_pci_remove(struct rte_pci_device *pdev) +{ + struct mlx5_compress_priv *priv = NULL; + int found = 0; + + pthread_mutex_lock(&priv_list_lock); + TAILQ_FOREACH(priv, &mlx5_compress_priv_list, next) { + if (rte_pci_addr_cmp(&priv->pci_dev->addr, &pdev->addr) == 0) { + found = 1; + break; + } + } + if (found != 0) + TAILQ_REMOVE(&mlx5_compress_priv_list, priv, next); + pthread_mutex_unlock(&priv_list_lock); + if (found != 0) { + mlx5_compress_hw_global_release(priv); + rte_compressdev_pmd_destroy(priv->cdev); + claim_zero(mlx5_glue->close_device(priv->ctx)); + } + return 0; +} + +static const struct rte_pci_id mlx5_compress_pci_id_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_MELLANOX, + PCI_DEVICE_ID_MELLANOX_CONNECTX6DXBF) + }, + { + .vendor_id = 0 + } +}; + +static struct mlx5_pci_driver mlx5_compress_driver = { + .driver_class = MLX5_CLASS_COMPRESS, + .pci_driver = { + .driver = { + .name = RTE_STR(MLX5_COMPRESS_DRIVER_NAME), + }, + .id_table = mlx5_compress_pci_id_map, + .probe = mlx5_compress_pci_probe, + .remove = mlx5_compress_pci_remove, + .drv_flags = 0, + }, +}; + +RTE_INIT(rte_mlx5_compress_init) +{ + mlx5_common_init(); + if (mlx5_glue != NULL) + mlx5_pci_driver_register(&mlx5_compress_driver); +} + +RTE_LOG_REGISTER(mlx5_compress_logtype, MLX5_COMPRESS_LOG_NAME, NOTICE) +RTE_PMD_EXPORT_NAME(MLX5_COMPRESS_DRIVER_NAME, __COUNTER__); +RTE_PMD_REGISTER_PCI_TABLE(MLX5_COMPRESS_DRIVER_NAME, mlx5_compress_pci_id_map); +RTE_PMD_REGISTER_KMOD_DEP(MLX5_COMPRESS_DRIVER_NAME, "* ib_uverbs & mlx5_core & mlx5_ib"); diff --git a/drivers/compress/mlx5/mlx5_compress_utils.h b/drivers/compress/mlx5/mlx5_compress_utils.h new file mode 100644 index 0000000..f93244f --- /dev/null +++ b/drivers/compress/mlx5/mlx5_compress_utils.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 Mellanox Technologies, Ltd + */ + +#ifndef RTE_PMD_MLX5_COMPRESS_UTILS_H_ +#define RTE_PMD_MLX5_COMPRESS_UTILS_H_ + +#include + + +extern int mlx5_compress_logtype; + +#define MLX5_COMPRESS_LOG_PREFIX "mlx5_compress" +/* Generic printf()-like logging macro with automatic line feed. */ +#define DRV_LOG(level, ...) \ + PMD_DRV_LOG_(level, mlx5_compress_logtype, MLX5_COMPRESS_LOG_PREFIX, \ + __VA_ARGS__ PMD_DRV_LOG_STRIP PMD_DRV_LOG_OPAREN, \ + PMD_DRV_LOG_CPAREN) + +#endif /* RTE_PMD_MLX5_COMPRESS_UTILS_H_ */ diff --git a/drivers/compress/mlx5/version.map b/drivers/compress/mlx5/version.map new file mode 100644 index 0000000..4a76d1d --- /dev/null +++ b/drivers/compress/mlx5/version.map @@ -0,0 +1,3 @@ +DPDK_21 { + local: *; +};

From patchwork Wed Jan 13 16:18:03 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:03 +0000
Message-Id: <1610554690-411627-4-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 03/10] compress/mlx5: support basic control operations

Add initial support for the following operations: dev_configure, dev_close, dev_infos_get. Signed-off-by: Matan Azrad --- drivers/compress/mlx5/mlx5_compress.c | 41 ++++++++++++++++++++++++++++++++--- 1 file changed, 38 insertions(+), 3 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 639dd61..7148798 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -32,20 +32,55 @@ struct mlx5_compress_priv { uint8_t min_block_size; /* Minimum huffman block size supported by the device. */ struct ibv_pd *pd; + struct rte_compressdev_config dev_config; }; +#define MLX5_COMPRESS_MAX_QPS 1024 + TAILQ_HEAD(mlx5_compress_privs, mlx5_compress_priv) mlx5_compress_priv_list = TAILQ_HEAD_INITIALIZER(mlx5_compress_priv_list); static pthread_mutex_t priv_list_lock = PTHREAD_MUTEX_INITIALIZER; int mlx5_compress_logtype; +const struct rte_compressdev_capabilities mlx5_caps[RTE_COMP_ALGO_LIST_END]; + + +static void +mlx5_compress_dev_info_get(struct rte_compressdev *dev, + struct rte_compressdev_info *info) +{ + RTE_SET_USED(dev); + if (info != NULL) { + info->max_nb_queue_pairs = MLX5_COMPRESS_MAX_QPS; + info->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; + info->capabilities = mlx5_caps; + } +} + +static int +mlx5_compress_dev_configure(struct rte_compressdev *dev, + struct rte_compressdev_config *config) +{ + struct mlx5_compress_priv *priv = dev->data->dev_private; + + priv->dev_config = *config; + return 0; +} + +static int +mlx5_compress_dev_close(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); + return 0; +} + static struct rte_compressdev_ops mlx5_compress_ops = { - .dev_configure = NULL, + .dev_configure = mlx5_compress_dev_configure, .dev_start = NULL, .dev_stop = NULL, - .dev_close = NULL, - .dev_infos_get = NULL, + .dev_close = mlx5_compress_dev_close, + .dev_infos_get = mlx5_compress_dev_info_get, .stats_get = NULL, .stats_reset = NULL, .queue_pair_setup = NULL,

From patchwork Wed Jan 13 16:18:04 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:04 +0000
Message-Id: <1610554690-411627-5-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 04/10] common/mlx5: add compress primitives

Add the GGA compress WQE-related structures and definitions. Signed-off-by: Matan Azrad --- drivers/common/mlx5/mlx5_prm.h | 41 +++++++++++++++++++++++++++++++++++++++-- 1 file changed, 39 insertions(+), 2 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index e489a0a..15ef0dc 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -412,10 +412,23 @@ struct mlx5_cqe_ts { uint8_t op_own; }; +/* GGA */ /* MMO metadata segment */ -#define MLX5_OPCODE_MMO 0x2f -#define MLX5_OPC_MOD_MMO_REGEX 0x4 +#define MLX5_OPCODE_MMO 0x2fu +#define MLX5_OPC_MOD_MMO_REGEX 0x4u +#define MLX5_OPC_MOD_MMO_COMP 0x2u +#define MLX5_OPC_MOD_MMO_DECOMP 0x3u +#define MLX5_OPC_MOD_MMO_DMA 0x1u + +#define WQE_GGA_COMP_WIN_SIZE_OFFSET 12u +#define WQE_GGA_COMP_BLOCK_SIZE_OFFSET 16u +#define WQE_GGA_COMP_DYNAMIC_SIZE_OFFSET 20u +#define MLX5_GGA_COMP_WIN_SIZE_UNITS 1024u +#define MLX5_GGA_COMP_WIN_SIZE_MAX (32u * MLX5_GGA_COMP_WIN_SIZE_UNITS) +#define MLX5_GGA_COMP_LOG_BLOCK_SIZE_MAX 15u +#define MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MAX 15u +#define MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MIN 0u struct mlx5_wqe_metadata_seg { uint32_t mmo_control_31_0; /* mmo_control_63_32 is in ctrl_seg.imm */ @@ -423,6 +436,30 @@ struct mlx5_wqe_metadata_seg { uint64_t addr; }; +struct mlx5_gga_wqe { + uint32_t opcode; + uint32_t sq_ds; + uint32_t flags; + uint32_t gga_ctrl1; /* ws 12-15, bs 16-19, dyns 20-23.
*/ + uint32_t gga_ctrl2; + uint32_t opaque_lkey; + uint64_t opaque_vaddr; + struct mlx5_wqe_dseg gather; + struct mlx5_wqe_dseg scatter; +} __rte_packed; + +struct mlx5_gga_compress_opaque { + uint32_t syndrom; + uint32_t reserved0; + uint32_t scattered_length; + uint32_t gathered_length; + uint64_t scatter_crc; + uint64_t gather_crc; + uint32_t crc32; + uint32_t adler32; + uint8_t reserved1[216]; +} __rte_packed; + struct mlx5_ifc_regexp_mmo_control_bits { uint8_t reserved_at_31[0x2]; uint8_t le[0x1];

From patchwork Wed Jan 13 16:18:05 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:05 +0000
Message-Id: <1610554690-411627-6-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 05/10] compress/mlx5: support queue pair operations

Add support for the following operations: queue_pair_setup, queue_pair_release. Create and initialize a DevX SQ and CQ for each compress API queue-pair.
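For context, the application-side compressdev calls that reach these new callbacks look roughly as follows (a sketch only; the device id, single queue pair and 512 in-flight operations are illustrative assumptions, not part of the patch):

/*
 * Sketch: configure the device and one queue pair through the
 * compressdev API. Device id and sizes are assumed values.
 */
#include <rte_lcore.h>
#include <rte_compressdev.h>

static int
setup_one_qp(uint8_t dev_id)
{
	struct rte_compressdev_config cfg = {
		.socket_id = rte_socket_id(),
		.nb_queue_pairs = 1,
		.max_nb_priv_xforms = 1,
		.max_nb_streams = 0,
	};

	if (rte_compressdev_configure(dev_id, &cfg) < 0)
		return -1;
	/* The PMD derives the SQ/CQ ring depth from max_inflight_ops. */
	return rte_compressdev_queue_pair_setup(dev_id, 0, 512,
						rte_socket_id());
}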
Signed-off-by: Matan Azrad --- drivers/compress/mlx5/mlx5_compress.c | 144 +++++++++++++++++++++++++++++++++- 1 file changed, 142 insertions(+), 2 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 7148798..ffd866a 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -15,6 +15,8 @@ #include #include #include +#include +#include #include #include "mlx5_compress_utils.h" @@ -35,6 +37,20 @@ struct mlx5_compress_priv { struct rte_compressdev_config dev_config; }; +struct mlx5_compress_qp { + uint16_t qp_id; + uint16_t entries_n; + uint16_t pi; + uint16_t ci; + volatile uint64_t *uar_addr; + int socket_id; + struct mlx5_devx_cq cq; + struct mlx5_devx_sq sq; + struct mlx5_pmd_mr opaque_mr; + struct rte_comp_op **ops; + struct mlx5_compress_priv *priv; +}; + #define MLX5_COMPRESS_MAX_QPS 1024 TAILQ_HEAD(mlx5_compress_privs, mlx5_compress_priv) mlx5_compress_priv_list = @@ -75,6 +91,130 @@ struct mlx5_compress_priv { return 0; } +static int +mlx5_compress_qp_release(struct rte_compressdev *dev, uint16_t qp_id) +{ + struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id]; + + if (qp->sq.sq != NULL) + mlx5_devx_sq_destroy(&qp->sq); + if (qp->cq.cq != NULL) + mlx5_devx_cq_destroy(&qp->cq); + if (qp->opaque_mr.obj != NULL) { + void *opaq = qp->opaque_mr.addr; + + mlx5_common_verbs_dereg_mr(&qp->opaque_mr); + if (opaq != NULL) + rte_free(opaq); + } + rte_free(qp); + dev->data->queue_pairs[qp_id] = NULL; + return 0; +} + +static void +mlx5_compress_init_sq(struct mlx5_compress_qp *qp) +{ + volatile struct mlx5_gga_wqe *restrict wqe = + (volatile struct mlx5_gga_wqe *)qp->sq.wqes; + volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; + int i; + + /* All the next fields state should stay constant. 
*/ + for (i = 0; i < qp->entries_n; ++i, ++wqe) { + wqe->sq_ds = rte_cpu_to_be_32((qp->sq.sq->id << 8) | 4u); + wqe->flags = RTE_BE32(MLX5_COMP_ALWAYS << + MLX5_COMP_MODE_OFFSET); + wqe->opaque_lkey = rte_cpu_to_be_32(qp->opaque_mr.lkey); + wqe->opaque_vaddr = rte_cpu_to_be_64 + ((uint64_t)(uintptr_t)&opaq[i]); + } +} + +static int +mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id, + uint32_t max_inflight_ops, int socket_id) +{ + struct mlx5_compress_priv *priv = dev->data->dev_private; + struct mlx5_compress_qp *qp; + struct mlx5_devx_cq_attr cq_attr = { + .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar), + }; + struct mlx5_devx_create_sq_attr sq_attr = { + .user_index = qp_id, + .wq_attr = (struct mlx5_devx_wq_attr){ + .pd = priv->pdn, + .uar_page = mlx5_os_get_devx_uar_page_id(priv->uar), + }, + }; + struct mlx5_devx_modify_sq_attr modify_attr = { + .state = MLX5_SQC_STATE_RDY, + }; + uint32_t log_ops_n = rte_log2_u32(max_inflight_ops); + uint32_t alloc_size = sizeof(*qp); + void *opaq_buf; + int ret; + + alloc_size = RTE_ALIGN(alloc_size, RTE_CACHE_LINE_SIZE); + alloc_size += sizeof(struct rte_comp_op *) * (1u << log_ops_n); + qp = rte_zmalloc_socket(__func__, alloc_size, RTE_CACHE_LINE_SIZE, + socket_id); + if (qp == NULL) { + DRV_LOG(ERR, "Failed to allocate qp memory."); + rte_errno = ENOMEM; + return -rte_errno; + } + dev->data->queue_pairs[qp_id] = qp; + opaq_buf = rte_calloc(__func__, 1u << log_ops_n, + sizeof(struct mlx5_gga_compress_opaque), + sizeof(struct mlx5_gga_compress_opaque)); + if (opaq_buf == NULL) { + DRV_LOG(ERR, "Failed to allocate opaque memory."); + rte_errno = ENOMEM; + goto err; + } + qp->entries_n = 1 << log_ops_n; + qp->socket_id = socket_id; + qp->qp_id = qp_id; + qp->priv = priv; + qp->ops = (struct rte_comp_op **)RTE_ALIGN((uintptr_t)(qp + 1), + RTE_CACHE_LINE_SIZE); + qp->uar_addr = mlx5_os_get_devx_uar_reg_addr(priv->uar); + MLX5_ASSERT(qp->uar_addr); + if (mlx5_common_verbs_reg_mr(priv->pd, opaq_buf, qp->entries_n * + sizeof(struct mlx5_gga_compress_opaque), + &qp->opaque_mr) != 0) { + DRV_LOG(ERR, "Failed to register opaque MR."); + rte_errno = ENOMEM; + goto err; + } + ret = mlx5_devx_cq_create(priv->ctx, &qp->cq, log_ops_n, &cq_attr, + socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create CQ."); + goto err; + } + sq_attr.cqn = qp->cq.cq->id; + ret = mlx5_devx_sq_create(priv->ctx, &qp->sq, log_ops_n, &sq_attr, + socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create SQ."); + goto err; + } + mlx5_compress_init_sq(qp); + ret = mlx5_devx_cmd_modify_sq(qp->sq.sq, &modify_attr); + if (ret != 0) { + DRV_LOG(ERR, "Can't change SQ state to ready."); + goto err; + } + DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u\n", + (uint32_t)qp_id, qp->sq.sq->id, qp->cq.cq->id, qp->entries_n); + return 0; +err: + mlx5_compress_qp_release(dev, qp_id); + return -1; +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, .dev_start = NULL, @@ -83,8 +223,8 @@ struct mlx5_compress_priv { .dev_infos_get = mlx5_compress_dev_info_get, .stats_get = NULL, .stats_reset = NULL, - .queue_pair_setup = NULL, - .queue_pair_release = NULL, + .queue_pair_setup = mlx5_compress_qp_setup, + .queue_pair_release = mlx5_compress_qp_release, .private_xform_create = NULL, .private_xform_free = NULL, .stream_create = NULL,

From patchwork Wed Jan 13 16:18:06 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:06 +0000
Message-Id: <1610554690-411627-7-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 06/10] compress/mlx5: add transformation operations

Add support for the following operations: private_xform_create, private_xform_free. The driver transformation structure includes preparations of the GGA WQE fields used by the enqueue function: the opcode, and the compress-specific fields: checksum type and compress type. Signed-off-by: Matan Azrad --- drivers/compress/mlx5/mlx5_compress.c | 122 +++++++++++++++++++++++++++++++++- 1 file changed, 120 insertions(+), 2 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index ffd866a..132837e 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -24,6 +25,14 @@ #define MLX5_COMPRESS_DRIVER_NAME mlx5_compress #define MLX5_COMPRESS_LOG_NAME pmd.compress.mlx5 +struct mlx5_compress_xform { + SLIST_ENTRY(mlx5_compress_xform) next; + enum rte_comp_xform_type type; + enum rte_comp_checksum_type csum_type; + uint32_t opcode; + uint32_t gga_ctrl1; /* BE. */ +}; + struct mlx5_compress_priv { TAILQ_ENTRY(mlx5_compress_priv) next; struct ibv_context *ctx; /* Device context. */ @@ -35,6 +44,8 @@ struct mlx5_compress_priv { /* Minimum huffman block size supported by the device.
*/ struct ibv_pd *pd; struct rte_compressdev_config dev_config; + SLIST_HEAD(xform_list, mlx5_compress_xform) xform_list; + rte_spinlock_t xform_sl; }; struct mlx5_compress_qp { @@ -215,6 +226,113 @@ struct mlx5_compress_qp { return -1; } +static int +mlx5_compress_xform_free(struct rte_compressdev *dev, void *xform) +{ + struct mlx5_compress_priv *priv = dev->data->dev_private; + + rte_spinlock_lock(&priv->xform_sl); + SLIST_REMOVE(&priv->xform_list, xform, mlx5_compress_xform, next); + rte_spinlock_unlock(&priv->xform_sl); + rte_free(xform); + return 0; +} + +#define MLX5_COMP_MAX_WIN_SIZE_CONF 6u + +static int +mlx5_compress_xform_create(struct rte_compressdev *dev, + const struct rte_comp_xform *xform, + void **private_xform) +{ + struct mlx5_compress_priv *priv = dev->data->dev_private; + struct mlx5_compress_xform *xfrm; + uint32_t size; + + if (xform->type == RTE_COMP_COMPRESS && xform->compress.level == + RTE_COMP_LEVEL_NONE) { + DRV_LOG(ERR, "Non-compressed block is not supported."); + return -ENOTSUP; + } + if ((xform->type == RTE_COMP_COMPRESS && xform->compress.hash_algo != + RTE_COMP_HASH_ALGO_NONE) || (xform->type == RTE_COMP_DECOMPRESS && + xform->decompress.hash_algo != RTE_COMP_HASH_ALGO_NONE)) { + DRV_LOG(ERR, "SHA is not supported."); + return -ENOTSUP; + } + xfrm = rte_zmalloc_socket(__func__, sizeof(*xfrm), 0, + priv->dev_config.socket_id); + if (xfrm == NULL) + return -ENOMEM; + xfrm->opcode = MLX5_OPCODE_MMO; + xfrm->type = xform->type; + switch (xform->type) { + case RTE_COMP_COMPRESS: + switch (xform->compress.algo) { + case RTE_COMP_ALGO_NULL: + xfrm->opcode += MLX5_OPC_MOD_MMO_DMA << + WQE_CSEG_OPC_MOD_OFFSET; + break; + case RTE_COMP_ALGO_DEFLATE: + size = 1 << xform->compress.window_size; + size /= MLX5_GGA_COMP_WIN_SIZE_UNITS; + xfrm->gga_ctrl1 += RTE_MIN(rte_log2_u32(size), + MLX5_COMP_MAX_WIN_SIZE_CONF) << + WQE_GGA_COMP_WIN_SIZE_OFFSET; + if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT) + size = MLX5_GGA_COMP_LOG_BLOCK_SIZE_MAX; + else + size = priv->min_block_size - 1 + + xform->compress.level; + xfrm->gga_ctrl1 += RTE_MIN(size, + MLX5_GGA_COMP_LOG_BLOCK_SIZE_MAX) << + WQE_GGA_COMP_BLOCK_SIZE_OFFSET; + xfrm->opcode += MLX5_OPC_MOD_MMO_COMP << + WQE_CSEG_OPC_MOD_OFFSET; + size = xform->compress.deflate.huffman == + RTE_COMP_HUFFMAN_DYNAMIC ? 
+ MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MAX : + MLX5_GGA_COMP_LOG_DYNAMIC_SIZE_MIN; + xfrm->gga_ctrl1 += size << + WQE_GGA_COMP_DYNAMIC_SIZE_OFFSET; + break; + default: + goto err; + } + xfrm->csum_type = xform->compress.chksum; + break; + case RTE_COMP_DECOMPRESS: + switch (xform->decompress.algo) { + case RTE_COMP_ALGO_NULL: + xfrm->opcode += MLX5_OPC_MOD_MMO_DMA << + WQE_CSEG_OPC_MOD_OFFSET; + break; + case RTE_COMP_ALGO_DEFLATE: + xfrm->opcode += MLX5_OPC_MOD_MMO_DECOMP << + WQE_CSEG_OPC_MOD_OFFSET; + break; + default: + goto err; + } + xfrm->csum_type = xform->decompress.chksum; + break; + default: + DRV_LOG(ERR, "Algorithm %u is not supported.", xform->type); + goto err; + } + DRV_LOG(DEBUG, "New xform: gga ctrl1 = 0x%08X opcode = 0x%08X csum " + "type = %d.", xfrm->gga_ctrl1, xfrm->opcode, xfrm->csum_type); + xfrm->gga_ctrl1 = rte_cpu_to_be_32(xfrm->gga_ctrl1); + rte_spinlock_lock(&priv->xform_sl); + SLIST_INSERT_HEAD(&priv->xform_list, xfrm, next); + rte_spinlock_unlock(&priv->xform_sl); + *private_xform = xfrm; + return 0; +err: + rte_free(xfrm); + return -ENOTSUP; +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, .dev_start = NULL, @@ -225,8 +343,8 @@ struct mlx5_compress_qp { .stats_reset = NULL, .queue_pair_setup = mlx5_compress_qp_setup, .queue_pair_release = mlx5_compress_qp_release, - .private_xform_create = NULL, - .private_xform_free = NULL, + .private_xform_create = mlx5_compress_xform_create, + .private_xform_free = mlx5_compress_xform_free, .stream_create = NULL, .stream_free = NULL, };

From patchwork Wed Jan 13 16:18:07 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:07 +0000
Message-Id: <1610554690-411627-8-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 07/10] compress/mlx5: add memory region management

Mellanox user-space drivers don't deal with physical addresses; every mbuf virtual address is written directly to the HW descriptor (WQE). The mapping between a virtual address and its physical address is saved in an MR configured by the kernel to the HW. Each MR has a key that the SW must also write to the WQE. When the SW sees an address that is not mapped, it extends the address range and creates an MR using a system call. Add memory region cache management: a 2-level cache per queue-pair with no locks, and 1 shared cache between all the queues, protected by a lock. This way, the MR key search per data-path address is optimized. Signed-off-by: Matan Azrad --- drivers/compress/mlx5/mlx5_compress.c | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 132837e..ab24a84 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -46,6 +46,7 @@ struct mlx5_compress_priv { struct rte_compressdev_config dev_config; SLIST_HEAD(xform_list, mlx5_compress_xform) xform_list; rte_spinlock_t xform_sl; + struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache.
*/ }; struct mlx5_compress_qp { @@ -54,6 +55,7 @@ uint16_t pi; uint16_t ci; volatile uint64_t *uar_addr; + struct mlx5_mr_ctrl mr_ctrl; int socket_id; struct mlx5_devx_cq cq; struct mlx5_devx_sq sq; @@ -118,6 +120,7 @@ struct mlx5_compress_qp { if (opaq != NULL) rte_free(opaq); } + mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh); rte_free(qp); dev->data->queue_pairs[qp_id] = NULL; return 0; @@ -184,6 +187,13 @@ struct mlx5_compress_qp { rte_errno = ENOMEM; goto err; } + if (mlx5_mr_btree_init(&qp->mr_ctrl.cache_bh, MLX5_MR_BTREE_CACHE_N, + priv->dev_config.socket_id)) { + DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.", + (uint32_t)qp_id); + rte_errno = ENOMEM; + goto err; + } qp->entries_n = 1 << log_ops_n; qp->socket_id = socket_id; qp->qp_id = qp_id; @@ -513,6 +523,17 @@ struct mlx5_compress_qp { claim_zero(mlx5_glue->close_device(priv->ctx)); return -1; } + if (mlx5_mr_btree_init(&priv->mr_scache.cache, + MLX5_MR_BTREE_CACHE_N * 2, rte_socket_id()) != 0) { + DRV_LOG(ERR, "Failed to allocate shared cache MR memory."); + mlx5_compress_hw_global_release(priv); + rte_compressdev_pmd_destroy(priv->cdev); + claim_zero(mlx5_glue->close_device(priv->ctx)); + rte_errno = ENOMEM; + return -rte_errno; + } + priv->mr_scache.reg_mr_cb = mlx5_common_verbs_reg_mr; + priv->mr_scache.dereg_mr_cb = mlx5_common_verbs_dereg_mr; pthread_mutex_lock(&priv_list_lock); TAILQ_INSERT_TAIL(&mlx5_compress_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); @@ -547,6 +568,7 @@ struct mlx5_compress_qp { TAILQ_REMOVE(&mlx5_compress_priv_list, priv, next); pthread_mutex_unlock(&priv_list_lock); if (found != 0) { + mlx5_mr_release_cache(&priv->mr_scache); mlx5_compress_hw_global_release(priv); rte_compressdev_pmd_destroy(priv->cdev); claim_zero(mlx5_glue->close_device(priv->ctx));

From patchwork Wed Jan 13 16:18:08 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:08 +0000
Message-Id: <1610554690-411627-9-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 08/10] compress/mlx5: add data-path functions

Add an implementation for the following compress data-path functions: dequeue_burst, enqueue_burst. Add the following operations for starting/stopping the data-path: dev_start, dev_stop. Each compress API enqueued operation is translated to a WQE. Once the WQE is done, the HW sends a CQE to the CQ; when the SW sees the CQE, the operation is updated and dequeued. Signed-off-by: Matan Azrad --- drivers/compress/mlx5/mlx5_compress.c | 207 +++++++++++++++++++++++++++++++++- 1 file changed, 203 insertions(+), 4 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index ab24a84..719def2 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include "mlx5_compress_utils.h" @@ -343,10 +344,23 @@ struct mlx5_compress_qp { return -ENOTSUP; } +static void +mlx5_compress_dev_stop(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); +} + +static int +mlx5_compress_dev_start(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); + return 0; +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, - .dev_start = NULL, - .dev_stop = NULL, + .dev_start = mlx5_compress_dev_start, + .dev_stop = mlx5_compress_dev_stop, .dev_close = mlx5_compress_dev_close, .dev_infos_get = mlx5_compress_dev_info_get, .stats_get = NULL, @@ -359,6 +373,191 @@ struct mlx5_compress_qp { .stream_free = NULL, }; +static __rte_always_inline uint32_t +mlx5_compress_dseg_set(struct mlx5_compress_qp *qp, + volatile struct mlx5_wqe_dseg *restrict dseg, + struct rte_mbuf *restrict mbuf, + uint32_t offset, uint32_t len) + +{ + uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset); + + dseg->bcount = rte_cpu_to_be_32(len); + dseg->lkey = mlx5_mr_addr2mr_bh(qp->priv->pd, 0, &qp->priv->mr_scache, + &qp->mr_ctrl, addr, + !!(mbuf->ol_flags & EXT_ATTACHED_MBUF)); + dseg->pbuf = rte_cpu_to_be_64(addr); + return dseg->lkey; +} + +static uint16_t +mlx5_compress_enqueue_burst(void *queue_pair, struct rte_comp_op **ops, + uint16_t nb_ops) +{ + struct mlx5_compress_qp *qp = queue_pair; + volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *) + qp->sq.wqes, *wqe; + struct mlx5_compress_xform *xform; + struct rte_comp_op *op; + uint16_t mask = qp->entries_n - 1; + uint16_t remain = qp->entries_n - (qp->pi - qp->ci); + + if (remain < nb_ops) { + ops[remain]->status = RTE_COMP_OP_STATUS_NOT_PROCESSED; + nb_ops = remain; + } else { + remain = nb_ops; + } + if (unlikely(remain == 0)) + return 0; + do { + wqe = &wqes[qp->pi & mask]; + rte_prefetch0(&wqes[(qp->pi + 1) & mask]); + op = *ops; + xform =
op->private_xform; + if (unlikely(op->op_type != RTE_COMP_OP_STATELESS || + (xform->type == RTE_COMP_COMPRESS && op->flush_flag < + RTE_COMP_FLUSH_FULL) || (mlx5_compress_dseg_set(qp, + &wqe->gather, op->m_src, op->src.offset, + op->src.length) == UINT32_MAX) || + (mlx5_compress_dseg_set(qp, &wqe->scatter, op->m_dst, + op->dst.offset, + rte_pktmbuf_pkt_len(op->m_dst) - + op->dst.offset) == UINT32_MAX))) { + op->status = RTE_COMP_OP_STATUS_INVALID_ARGS; + nb_ops -= remain; + if (unlikely(nb_ops == 0)) + return 0; + break; + } + wqe->gga_ctrl1 = xform->gga_ctrl1; + wqe->opcode = rte_cpu_to_be_32(xform->opcode + (qp->pi << 8)); + qp->ops[qp->pi & mask] = op; + ++ops; + qp->pi++; + } while (--remain); + rte_io_wmb(); + qp->sq.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi); + rte_wmb(); + *qp->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/ + rte_wmb(); + return nb_ops; +} + +static void +mlx5_compress_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe, + volatile uint32_t *opaq) +{ + size_t i; + + DRV_LOG(ERR, "Error cqe:"); + for (i = 0; i < sizeof(struct mlx5_err_cqe) >> 2; i += 4) + DRV_LOG(ERR, "%08X %08X %08X %08X", cqe[i], cqe[i + 1], + cqe[i + 2], cqe[i + 3]); + DRV_LOG(ERR, "\nError wqe:"); + for (i = 0; i < sizeof(struct mlx5_gga_wqe) >> 2; i += 4) + DRV_LOG(ERR, "%08X %08X %08X %08X", wqe[i], wqe[i + 1], + wqe[i + 2], wqe[i + 3]); + DRV_LOG(ERR, "\nError opaq:"); + for (i = 0; i < sizeof(struct mlx5_gga_compress_opaque) >> 2; i += 4) + DRV_LOG(ERR, "%08X %08X %08X %08X", opaq[i], opaq[i + 1], + opaq[i + 2], opaq[i + 3]); +} + +static void +mlx5_compress_cqe_err_handle(struct mlx5_compress_qp *qp, + struct rte_comp_op *op) +{ + const uint32_t idx = qp->ci & (qp->entries_n - 1); + volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *) + &qp->cq.cqes[idx]; + volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *) + qp->sq.wqes; + volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; + + op->status = RTE_COMP_OP_STATUS_ERROR; + op->consumed = 0; + op->produced = 0; + op->output_chksum = 0; + op->debug_status = rte_be_to_cpu_32(opaq[idx].syndrom) | + ((uint64_t)rte_be_to_cpu_32(cqe->syndrome) << 32); + mlx5_compress_dump_err_objs((volatile uint32_t *)cqe, + (volatile uint32_t *)&wqes[idx], + (volatile uint32_t *)&opaq[idx]); +} + +static uint16_t +mlx5_compress_dequeue_burst(void *queue_pair, struct rte_comp_op **ops, + uint16_t nb_ops) +{ + struct mlx5_compress_qp *qp = queue_pair; + volatile struct mlx5_compress_xform *restrict xform; + volatile struct mlx5_cqe *restrict cqe; + volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr; + struct rte_comp_op *restrict op; + const unsigned int cq_size = qp->entries_n; + const unsigned int mask = cq_size - 1; + uint32_t idx; + uint32_t next_idx = qp->ci & mask; + const uint16_t max = RTE_MIN((uint16_t)(qp->pi - qp->ci), nb_ops); + uint16_t i = 0; + int ret; + + if (unlikely(max == 0)) + return 0; + do { + idx = next_idx; + next_idx = (qp->ci + 1) & mask; + rte_prefetch0(&qp->cq.cqes[next_idx]); + rte_prefetch0(qp->ops[next_idx]); + op = qp->ops[idx]; + cqe = &qp->cq.cqes[idx]; + ret = check_cqe(cqe, cq_size, qp->ci); + /* + * Be sure owner read is done before any other cookie field or + * opaque field. 
+ */ + rte_io_rmb(); + if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) { + if (likely(ret == MLX5_CQE_STATUS_HW_OWN)) + break; + mlx5_compress_cqe_err_handle(qp, op); + } else { + xform = op->private_xform; + op->status = RTE_COMP_OP_STATUS_SUCCESS; + op->consumed = op->src.length; + op->produced = rte_be_to_cpu_32(cqe->byte_cnt); + MLX5_ASSERT(cqe->byte_cnt == + qp->opaque_buf[idx].scattered_length); + switch (xform->csum_type) { + case RTE_COMP_CHECKSUM_CRC32: + op->output_chksum = (uint64_t)rte_be_to_cpu_32 + (opaq[idx].crc32); + break; + case RTE_COMP_CHECKSUM_ADLER32: + op->output_chksum = (uint64_t)rte_be_to_cpu_32 + (opaq[idx].adler32) << 32; + break; + case RTE_COMP_CHECKSUM_CRC32_ADLER32: + op->output_chksum = (uint64_t)rte_be_to_cpu_32 + (opaq[idx].crc32) | + ((uint64_t)rte_be_to_cpu_32 + (opaq[idx].adler32) << 32); + break; + default: + break; + } + } + ops[i++] = op; + qp->ci++; + } while (i < max); + if (likely(i != 0)) { + rte_io_wmb(); + qp->cq.db_rec[0] = rte_cpu_to_be_32(qp->ci); + } + return i; +} + static struct ibv_device * mlx5_compress_get_ib_device_match(struct rte_pci_addr *addr) { @@ -510,8 +709,8 @@ struct mlx5_compress_qp { DRV_LOG(INFO, "Compress device %s was created successfully.", ibv->name); cdev->dev_ops = &mlx5_compress_ops; - cdev->dequeue_burst = NULL; - cdev->enqueue_burst = NULL; + cdev->dequeue_burst = mlx5_compress_dequeue_burst; + cdev->enqueue_burst = mlx5_compress_enqueue_burst; cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; priv = cdev->data->dev_private; priv->ctx = ctx;

From patchwork Wed Jan 13 16:18:09 2021
From: Matan Azrad
To: dev@dpdk.org
Cc: Thomas Monjalon, Ashish Gupta, Fiona Trahe, akhil.goyal@nxp.com
Date: Wed, 13 Jan 2021 16:18:09 +0000
Message-Id: <1610554690-411627-10-git-send-email-matan@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 09/10] compress/mlx5: add statistics operations

Add support for the following statistics operations: stats_get, stats_reset. These statistics are counted by the SW data-path.
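An application reads and clears these counters through the standard compressdev API; a small sketch (the device id is an illustrative assumption):

/*
 * Sketch: dump and reset the per-device statistics that the new
 * callbacks aggregate from every queue pair. Device id is assumed.
 */
#include <stdio.h>
#include <inttypes.h>
#include <rte_compressdev.h>

static void
dump_and_clear_stats(uint8_t dev_id)
{
	struct rte_compressdev_stats stats;

	if (rte_compressdev_stats_get(dev_id, &stats) == 0)
		printf("enq=%" PRIu64 " deq=%" PRIu64 " enq_err=%" PRIu64
		       " deq_err=%" PRIu64 "\n",
		       stats.enqueued_count, stats.dequeued_count,
		       stats.enqueue_err_count, stats.dequeue_err_count);
	rte_compressdev_stats_reset(dev_id);
}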
Signed-off-by: Matan Azrad --- drivers/compress/mlx5/mlx5_compress.c | 36 +++++++++++++++++++++++++++++++++-- 1 file changed, 34 insertions(+), 2 deletions(-) diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index 719def2..d768453 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -63,6 +63,7 @@ struct mlx5_compress_qp { struct mlx5_pmd_mr opaque_mr; struct rte_comp_op **ops; struct mlx5_compress_priv *priv; + struct rte_compressdev_stats stats; }; #define MLX5_COMPRESS_MAX_QPS 1024 @@ -357,14 +358,42 @@ struct mlx5_compress_qp { return 0; } +static void +mlx5_compress_stats_get(struct rte_compressdev *dev, + struct rte_compressdev_stats *stats) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id]; + + stats->enqueued_count += qp->stats.enqueued_count; + stats->dequeued_count += qp->stats.dequeued_count; + stats->enqueue_err_count += qp->stats.enqueue_err_count; + stats->dequeue_err_count += qp->stats.dequeue_err_count; + } +} + +static void +mlx5_compress_stats_reset(struct rte_compressdev *dev) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id]; + + memset(&qp->stats, 0, sizeof(qp->stats)); + } +} + static struct rte_compressdev_ops mlx5_compress_ops = { .dev_configure = mlx5_compress_dev_configure, .dev_start = mlx5_compress_dev_start, .dev_stop = mlx5_compress_dev_stop, .dev_close = mlx5_compress_dev_close, .dev_infos_get = mlx5_compress_dev_info_get, - .stats_get = NULL, - .stats_reset = NULL, + .stats_get = mlx5_compress_stats_get, + .stats_reset = mlx5_compress_stats_reset, .queue_pair_setup = mlx5_compress_qp_setup, .queue_pair_release = mlx5_compress_qp_release, .private_xform_create = mlx5_compress_xform_create, @@ -436,6 +465,7 @@ struct mlx5_compress_qp { ++ops; qp->pi++; } while (--remain); + qp->stats.enqueued_count += nb_ops; rte_io_wmb(); qp->sq.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi); rte_wmb(); @@ -484,6 +514,7 @@ struct mlx5_compress_qp { mlx5_compress_dump_err_objs((volatile uint32_t *)cqe, (volatile uint32_t *)&wqes[idx], (volatile uint32_t *)&opaq[idx]); + qp->stats.dequeue_err_count++; } static uint16_t @@ -554,6 +585,7 @@ struct mlx5_compress_qp { if (likely(i != 0)) { rte_io_wmb(); qp->cq.db_rec[0] = rte_cpu_to_be_32(qp->ci); + qp->stats.dequeued_count += i; } return i; } From patchwork Wed Jan 13 16:18:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 86483 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D00C8A04B5; Wed, 13 Jan 2021 17:20:11 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 42248140E2A; Wed, 13 Jan 2021 17:18:46 +0100 (CET) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by mails.dpdk.org (Postfix) with ESMTP id 13724140E12 for ; Wed, 13 Jan 2021 17:18:43 +0100 (CET) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@nvidia.com) with SMTP; 13 Jan 2021 18:18:38 +0200 Received: from pegasus25.mtr.labs.mlnx. 
(pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 10DGII2m001884; Wed, 13 Jan 2021 18:18:38 +0200 From: Matan Azrad To: dev@dpdk.org Cc: Thomas Monjalon , Ashish Gupta , Fiona Trahe , akhil.goyal@nxp.com Date: Wed, 13 Jan 2021 16:18:10 +0000 Message-Id: <1610554690-411627-11-git-send-email-matan@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1610554690-411627-1-git-send-email-matan@nvidia.com> References: <1610373560-253158-1-git-send-email-matan@nvidia.com> <1610554690-411627-1-git-send-email-matan@nvidia.com> Subject: [dpdk-dev] [PATCH v2 10/10] compress/mlx5: add the supported capabilities X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add all the capabilities supported by the device. Add the driver documentation. Signed-off-by: Matan Azrad --- doc/guides/compressdevs/features/mlx5.ini | 13 +++++ doc/guides/compressdevs/index.rst | 1 + doc/guides/compressdevs/mlx5.rst | 84 +++++++++++++++++++++++++++++++ doc/guides/rel_notes/release_21_02.rst | 6 +++ drivers/compress/mlx5/mlx5_compress.c | 24 ++++++++- 5 files changed, 126 insertions(+), 2 deletions(-) create mode 100644 doc/guides/compressdevs/features/mlx5.ini create mode 100644 doc/guides/compressdevs/mlx5.rst diff --git a/doc/guides/compressdevs/features/mlx5.ini b/doc/guides/compressdevs/features/mlx5.ini new file mode 100644 index 0000000..891ce47 --- /dev/null +++ b/doc/guides/compressdevs/features/mlx5.ini @@ -0,0 +1,13 @@ +; +; Refer to default.ini for the full list of available PMD features. +; +; Supported features of 'MLX5' compression driver. +; +[Features] +HW Accelerated = Y +Deflate = Y +Adler32 = Y +Crc32 = Y +Adler32&Crc32 = Y +Fixed = Y +Dynamic = Y diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst index 1f37e26..8f9f3a5 100644 --- a/doc/guides/compressdevs/index.rst +++ b/doc/guides/compressdevs/index.rst @@ -14,3 +14,4 @@ Compression Device Drivers octeontx qat_comp zlib + mlx5 diff --git a/doc/guides/compressdevs/mlx5.rst b/doc/guides/compressdevs/mlx5.rst new file mode 100644 index 0000000..4ee26b0 --- /dev/null +++ b/doc/guides/compressdevs/mlx5.rst @@ -0,0 +1,84 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2021 Mellanox Technologies, Ltd + +.. include:: + +MLX5 compress driver +==================== + +The MLX5 compress driver library +(**librte_compress_mlx5**) provides support for **Mellanox BlueField 2** +families of 25/50/100/200 Gb/s adapters. + +Design +------ + +This PMD configures the compress, decompress and DMA engines. + +For security reasons and robustness, this driver only deals with virtual +memory addresses. The way resource allocations are handled by the kernel, +combined with hardware specifications that allow handling virtual memory +addresses directly, ensures that DPDK applications cannot access random +physical memory (or memory that does not belong to the current process). + +The PMD uses libibverbs and libmlx5 to access the device firmware +or directly the hardware components. +There are different levels of objects and bypassing abilities +to get the best performance: + +- Verbs is a complete high-level generic API. +- Direct Verbs is a device-specific API. +- DevX allows access to firmware objects. + +Enabling librte_compress_mlx5 causes DPDK applications to be linked against +libibverbs.
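For illustration, here is a minimal sketch (not part of this patch) of the kind of stateless operation an application would hand to this PMD through the generic compressdev API. The helper name, the xform values, and the already-prepared mbufs and op are assumptions for the example; error handling is abbreviated:

#include <rte_comp.h>
#include <rte_compressdev.h>
#include <rte_mbuf.h>

/* Sketch: create a shareable DEFLATE xform and enqueue one stateless
 * compress operation. window_size is log2, so 15 means a 32KB window. */
static int
enqueue_one_deflate(uint8_t dev_id, uint16_t qp_id, struct rte_comp_op *op,
		    struct rte_mbuf *src, struct rte_mbuf *dst)
{
	struct rte_comp_xform xform = {
		.type = RTE_COMP_COMPRESS,
		.compress = {
			.algo = RTE_COMP_ALGO_DEFLATE,
			.deflate.huffman = RTE_COMP_HUFFMAN_DYNAMIC,
			.level = RTE_COMP_LEVEL_PMD_DEFAULT,
			.chksum = RTE_COMP_CHECKSUM_CRC32,
			.window_size = 15,
			.hash_algo = RTE_COMP_HASH_ALGO_NONE,
		},
	};
	void *priv_xform = NULL;

	if (rte_compressdev_private_xform_create(dev_id, &xform,
						 &priv_xform) < 0)
		return -1;
	op->op_type = RTE_COMP_OP_STATELESS;	/* stateful is not supported */
	op->flush_flag = RTE_COMP_FLUSH_FINAL;	/* compress needs >= FLUSH_FULL */
	op->private_xform = priv_xform;
	op->m_src = src;
	op->m_dst = dst;
	op->src.offset = 0;
	op->src.length = rte_pktmbuf_pkt_len(src);
	op->dst.offset = 0;
	return rte_compressdev_enqueue_burst(dev_id, qp_id, &op, 1) == 1 ?
	       0 : -1;
}

After a successful dequeue, op->output_chksum carries the CRC32 in its lower 32 bits (and, for the combined checksum type, Adler32 in the upper 32 bits), matching the data-path patch earlier in this series.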
+ +A Mellanox mlx5 PCI device can be probed by a number of different PCI device drivers, +for example net / vDPA / compress. To select the compress PMD, ``class=compress`` +should be specified as a device parameter. The compress device can be probed and +used together with other Mellanox classes by adding more options to the class parameter. +For example: ``class=net:compress`` will probe both the net PMD and the compress +PMD. + +Features +-------- + +The compress mlx5 PMD supports: + +Compression/Decompression algorithm: + +* DEFLATE. + +NULL algorithm for DMA operations. + +Huffman code type: + +* FIXED. +* DYNAMIC. + +Window size support: + +1KB, 2KB, 4KB, 8KB, 16KB and 32KB. + +Shareable transformation. + +Checksum generation: + +* CRC32, Adler32 and combined checksum. + +Limitations +----------- + +* Scatter-Gather, SHA and Stateful are not supported. +* Non-compressed block is not supported in compress (supported in decompress). + +Supported NICs +-------------- + +* Mellanox\ |reg| BlueField 2 SmartNIC + +Prerequisites +------------- + +- Mellanox OFED version: **5.2**, + see the :doc:`../../nics/mlx5` guide for more Mellanox OFED details. \ No newline at end of file diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst index 706cbf8..f672d7f 100644 --- a/doc/guides/rel_notes/release_21_02.rst +++ b/doc/guides/rel_notes/release_21_02.rst @@ -51,6 +51,12 @@ New Features * Other libs * Apps, Examples, Tools (if significant) +* **Added mlx5 compress PMD.** + + Added a new compress PMD driver for BlueField 2 adapters. + + See the :doc:`../compressdevs/mlx5` guide for more details. + This section is a comment. Do not overwrite or remove it. Also, make sure to start the actual text at the margin. ======================================================= diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c index d768453..7384351 100644 --- a/drivers/compress/mlx5/mlx5_compress.c +++ b/drivers/compress/mlx5/mlx5_compress.c @@ -74,8 +74,28 @@ struct mlx5_compress_qp { int mlx5_compress_logtype; -const struct rte_compressdev_capabilities mlx5_caps[RTE_COMP_ALGO_LIST_END]; - +static const struct rte_compressdev_capabilities mlx5_caps[] = { + { + .algo = RTE_COMP_ALGO_NULL, + .comp_feature_flags = RTE_COMP_FF_ADLER32_CHECKSUM | + RTE_COMP_FF_CRC32_CHECKSUM | + RTE_COMP_FF_CRC32_ADLER32_CHECKSUM | + RTE_COMP_FF_SHAREABLE_PRIV_XFORM, + }, + { + .algo = RTE_COMP_ALGO_DEFLATE, + .comp_feature_flags = RTE_COMP_FF_ADLER32_CHECKSUM | + RTE_COMP_FF_CRC32_CHECKSUM | + RTE_COMP_FF_CRC32_ADLER32_CHECKSUM | + RTE_COMP_FF_SHAREABLE_PRIV_XFORM | + RTE_COMP_FF_HUFFMAN_FIXED | + RTE_COMP_FF_HUFFMAN_DYNAMIC, + .window_size = {.min = 10, .max = 15, .increment = 1}, + }, + { + .algo = RTE_COMP_ALGO_LIST_END, + } }; static void mlx5_compress_dev_info_get(struct rte_compressdev *dev,