From patchwork Fri Oct 27 14:55:27 2023
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 133506
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
To: Thomas Monjalon, Nagadheeraj Rottela, Srikanth Jampala
CC:
Subject: [PATCH 1/7] crypto/nitrox: move nitrox common code to common folder
Date: Fri, 27 Oct 2023 20:25:27 +0530
Message-ID: <20231027145534.16803-2-rnagadheeraj@marvell.com>
In-Reply-To: <20231027145534.16803-1-rnagadheeraj@marvell.com>
References: <20231027145534.16803-1-rnagadheeraj@marvell.com>
List-Id: DPDK patches and discussions

Move the Nitrox common code to the common folder. The common code will be
shared by both the crypto and compress Nitrox PMDs.
Signed-off-by: Nagadheeraj Rottela
---
 MAINTAINERS                                   |  1 +
 drivers/common/nitrox/meson.build             | 35 +++++++++++++++++++
 drivers/{crypto => common}/nitrox/nitrox_csr.h    |  0
 drivers/{crypto => common}/nitrox/nitrox_device.c | 14 ++++++++
 drivers/{crypto => common}/nitrox/nitrox_device.h |  0
 drivers/{crypto => common}/nitrox/nitrox_hal.c    |  0
 drivers/{crypto => common}/nitrox/nitrox_hal.h    |  0
 drivers/{crypto => common}/nitrox/nitrox_logs.c   |  0
 drivers/{crypto => common}/nitrox/nitrox_logs.h   |  0
 drivers/{crypto => common}/nitrox/nitrox_qp.c     |  0
 drivers/{crypto => common}/nitrox/nitrox_qp.h     |  0
 drivers/crypto/meson.build                    |  1 -
 drivers/crypto/nitrox/meson.build             | 18 ----------
 drivers/meson.build                           |  1 +
 14 files changed, 51 insertions(+), 19 deletions(-)
 create mode 100644 drivers/common/nitrox/meson.build
 rename drivers/{crypto => common}/nitrox/nitrox_csr.h (100%)
 rename drivers/{crypto => common}/nitrox/nitrox_device.c (92%)
 rename drivers/{crypto => common}/nitrox/nitrox_device.h (100%)
 rename drivers/{crypto => common}/nitrox/nitrox_hal.c (100%)
 rename drivers/{crypto => common}/nitrox/nitrox_hal.h (100%)
 rename drivers/{crypto => common}/nitrox/nitrox_logs.c (100%)
 rename drivers/{crypto => common}/nitrox/nitrox_logs.h (100%)
 rename drivers/{crypto => common}/nitrox/nitrox_qp.c (100%)
 rename drivers/{crypto => common}/nitrox/nitrox_qp.h (100%)
 delete mode 100644 drivers/crypto/nitrox/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 4083658697..7e8272c0e0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1145,6 +1145,7 @@ Marvell Nitrox
 M: Nagadheeraj Rottela
 M: Srikanth Jampala
 F: drivers/crypto/nitrox/
+F: drivers/common/nitrox/
 F: doc/guides/cryptodevs/nitrox.rst
 F: doc/guides/cryptodevs/features/nitrox.ini

diff --git a/drivers/common/nitrox/meson.build b/drivers/common/nitrox/meson.build
new file mode 100644
index 0000000000..eb989a04e5
--- /dev/null
+++ b/drivers/common/nitrox/meson.build
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+endif
+
+nitrox_crypto = true
+nitrox_crypto_path = 'crypto/nitrox'
+nitrox_crypto_relpath = '../../' + nitrox_crypto_path
+
+if disable_drivers.contains(nitrox_crypto_path)
+    nitrox_crypto = false
+endif
+
+deps += ['bus_pci', 'cryptodev']
+sources = files(
+        'nitrox_device.c',
+        'nitrox_hal.c',
+        'nitrox_logs.c',
+        'nitrox_qp.c',
+)
+includes += include_directories(
+        nitrox_crypto_relpath,
+)
+
+if nitrox_crypto
+    foreach f: ['nitrox_sym.c',
+                'nitrox_sym_capabilities.c',
+                'nitrox_sym_reqmgr.c',
+               ]
+        sources += files(join_paths(nitrox_crypto_relpath, f))
+    endforeach
+endif

diff --git a/drivers/crypto/nitrox/nitrox_csr.h b/drivers/common/nitrox/nitrox_csr.h
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_csr.h
rename to drivers/common/nitrox/nitrox_csr.h

diff --git a/drivers/crypto/nitrox/nitrox_device.c b/drivers/common/nitrox/nitrox_device.c
similarity index 92%
rename from drivers/crypto/nitrox/nitrox_device.c
rename to drivers/common/nitrox/nitrox_device.c
index 5b319dd681..b2f638ec8a 100644
--- a/drivers/crypto/nitrox/nitrox_device.c
+++ b/drivers/common/nitrox/nitrox_device.c
@@ -120,5 +120,19 @@ static struct rte_pci_driver nitrox_pmd = {
 	.remove = nitrox_pci_remove,
 };

+__rte_weak int
+nitrox_sym_pmd_create(struct nitrox_device *ndev)
+{
+	RTE_SET_USED(ndev);
+	return 0;
+}
+
+__rte_weak int
+nitrox_sym_pmd_destroy(struct nitrox_device *ndev)
+{
+	RTE_SET_USED(ndev);
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(nitrox, nitrox_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(nitrox, pci_id_nitrox_map);

diff --git a/drivers/crypto/nitrox/nitrox_device.h b/drivers/common/nitrox/nitrox_device.h
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_device.h
rename to drivers/common/nitrox/nitrox_device.h

diff --git a/drivers/crypto/nitrox/nitrox_hal.c b/drivers/common/nitrox/nitrox_hal.c
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_hal.c
rename to drivers/common/nitrox/nitrox_hal.c

diff --git a/drivers/crypto/nitrox/nitrox_hal.h b/drivers/common/nitrox/nitrox_hal.h
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_hal.h
rename to drivers/common/nitrox/nitrox_hal.h

diff --git a/drivers/crypto/nitrox/nitrox_logs.c b/drivers/common/nitrox/nitrox_logs.c
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_logs.c
rename to drivers/common/nitrox/nitrox_logs.c

diff --git a/drivers/crypto/nitrox/nitrox_logs.h b/drivers/common/nitrox/nitrox_logs.h
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_logs.h
rename to drivers/common/nitrox/nitrox_logs.h

diff --git a/drivers/crypto/nitrox/nitrox_qp.c b/drivers/common/nitrox/nitrox_qp.c
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_qp.c
rename to drivers/common/nitrox/nitrox_qp.c

diff --git a/drivers/crypto/nitrox/nitrox_qp.h b/drivers/common/nitrox/nitrox_qp.h
similarity index 100%
rename from drivers/crypto/nitrox/nitrox_qp.h
rename to drivers/common/nitrox/nitrox_qp.h

diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index ee5377deff..3167b1ab85 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -13,7 +13,6 @@ drivers = [
         'ipsec_mb',
         'mlx5',
         'mvsam',
-        'nitrox',
         'null',
         'octeontx',
         'openssl',

diff --git a/drivers/crypto/nitrox/meson.build b/drivers/crypto/nitrox/meson.build
deleted file mode 100644
index 2cc47c4626..0000000000
--- a/drivers/crypto/nitrox/meson.build
+++ /dev/null
@@ -1,18 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-
-if not is_linux
-    build = false
-    reason = 'only supported on Linux'
-endif
-
-deps += ['bus_pci']
-sources = files(
-        'nitrox_device.c',
-        'nitrox_hal.c',
-        'nitrox_logs.c',
-        'nitrox_sym.c',
-        'nitrox_sym_capabilities.c',
-        'nitrox_sym_reqmgr.c',
-        'nitrox_qp.c',
-)

diff --git a/drivers/meson.build b/drivers/meson.build
index 8c775bbe62..49deec5224 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -13,6 +13,7 @@ subdirs = [
         'bus',
         'common/cnxk',    # depends on bus.
         'common/mlx5',    # depends on bus.
+        'common/nitrox',  # depends on bus.
         'common/qat',     # depends on bus.
         'common/sfc_efx', # depends on bus.
         'mempool',        # depends on common and bus.

From patchwork Fri Oct 27 14:55:28 2023
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 133507
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
To: Thomas Monjalon, Nagadheeraj Rottela, Srikanth Jampala, Fan Zhang, Ashish Gupta
CC:
Subject: [PATCH 2/7] compress/nitrox: add nitrox compressdev driver
Date: Fri, 27 Oct 2023 20:25:28 +0530
Message-ID: <20231027145534.16803-3-rnagadheeraj@marvell.com>
In-Reply-To: <20231027145534.16803-1-rnagadheeraj@marvell.com>
References: <20231027145534.16803-1-rnagadheeraj@marvell.com>
List-Id: DPDK patches and discussions

Introduce the nitrox compressdev driver, which implements the following
operations:
- dev_configure
- dev_close
- dev_infos_get
- private_xform_create
- private_xform_free
Signed-off-by: Nagadheeraj Rottela
---
 MAINTAINERS                                  |   7 +
 doc/guides/compressdevs/features/nitrox.ini  |  13 +
 doc/guides/compressdevs/index.rst            |   1 +
 doc/guides/compressdevs/nitrox.rst           |  50 +++
 drivers/common/nitrox/meson.build            |  19 +-
 drivers/common/nitrox/nitrox_device.c        |  36 +-
 drivers/common/nitrox/nitrox_device.h        |   3 +
 drivers/compress/nitrox/nitrox_comp.c        | 405 +++++++++++++++++++
 drivers/compress/nitrox/nitrox_comp.h        |  13 +
 drivers/compress/nitrox/nitrox_comp_reqmgr.c |   3 +
 10 files changed, 543 insertions(+), 7 deletions(-)
 create mode 100644 doc/guides/compressdevs/features/nitrox.ini
 create mode 100644 doc/guides/compressdevs/nitrox.rst
 create mode 100644 drivers/compress/nitrox/nitrox_comp.c
 create mode 100644 drivers/compress/nitrox/nitrox_comp.h
 create mode 100644 drivers/compress/nitrox/nitrox_comp_reqmgr.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 7e8272c0e0..be566d6c6c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1217,6 +1217,13 @@ F: drivers/compress/isal/
 F: doc/guides/compressdevs/isal.rst
 F: doc/guides/compressdevs/features/isal.ini

+Marvell Nitrox
+M: Nagadheeraj Rottela
+F: drivers/compress/nitrox/
+F: drivers/common/nitrox/
+F: doc/guides/compressdevs/nitrox.rst
+F: doc/guides/compressdevs/features/nitrox.ini
+
 NVIDIA mlx5
 M: Matan Azrad
 F: drivers/compress/mlx5/

diff --git a/doc/guides/compressdevs/features/nitrox.ini b/doc/guides/compressdevs/features/nitrox.ini
new file mode 100644
index 0000000000..f045e891c4
--- /dev/null
+++ b/doc/guides/compressdevs/features/nitrox.ini
@@ -0,0 +1,13 @@
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; Supported features of 'nitrox' compression driver.
+;
+[Features]
+HW Accelerated     = Y
+Deflate            = Y
+Fixed              = Y
+Dynamic            = Y
+OOP SGL In SGL Out = Y
+OOP SGL In LB Out  = Y
+OOP LB In SGL Out  = Y

diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst
index 54a3ef4273..849f211688 100644
--- a/doc/guides/compressdevs/index.rst
+++ b/doc/guides/compressdevs/index.rst
@@ -12,6 +12,7 @@ Compression Device Drivers
     overview
     isal
     mlx5
+    nitrox
     octeontx
     qat_comp
     zlib

diff --git a/doc/guides/compressdevs/nitrox.rst b/doc/guides/compressdevs/nitrox.rst
new file mode 100644
index 0000000000..a1989b400d
--- /dev/null
+++ b/doc/guides/compressdevs/nitrox.rst
@@ -0,0 +1,50 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(C) 2023 Marvell International Ltd.
+
+Marvell NITROX Compression Poll Mode Driver
+===========================================
+
+The Nitrox compression poll mode driver provides support for offloading
+compression and decompression operations to the NITROX V processor.
+Detailed information about the NITROX V processor can be obtained here:
+
+* https://www.marvell.com/security-solutions/nitrox-security-processors/nitrox-v/
+
+Features
+--------
+
+The NITROX V compression PMD has support for:
+
+Compression/Decompression algorithm:
+
+* DEFLATE
+
+Huffman code type:
+
+* FIXED
+* DYNAMIC
+
+Window size support:
+
+* Min - 2 bytes
+* Max - 32KB
+
+Checksum generation:
+
+* CRC32, Adler
+
+Limitations
+-----------
+
+* Compressdev level 0, no compression, is not supported.
+
+Initialization
+--------------
+
+The Nitrox compression PMD depends on the Nitrox kernel PF driver being
+installed on the platform. The Nitrox PF driver is required to create VF
+devices which will be used by the PMD. Each VF device can enable one
+compressdev PMD.
+
+The Nitrox kernel PF driver is available as part of the CNN55XX-Driver SDK.
+The SDK and its installation instructions can be obtained from:
+`Marvell Customer Portal `_.
diff --git a/drivers/common/nitrox/meson.build b/drivers/common/nitrox/meson.build
index eb989a04e5..9334b077ad 100644
--- a/drivers/common/nitrox/meson.build
+++ b/drivers/common/nitrox/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
+# Copyright(C) 2019-2023 Marvell International Ltd.

 if not is_linux
     build = false
@@ -9,12 +9,18 @@ endif
 nitrox_crypto = true
 nitrox_crypto_path = 'crypto/nitrox'
 nitrox_crypto_relpath = '../../' + nitrox_crypto_path
+nitrox_compress = true
+nitrox_compress_path = 'compress/nitrox'
+nitrox_compress_relpath = '../../' + nitrox_compress_path

 if disable_drivers.contains(nitrox_crypto_path)
     nitrox_crypto = false
 endif
+if disable_drivers.contains(nitrox_compress_path)
+    nitrox_compress = false
+endif

-deps += ['bus_pci', 'cryptodev']
+deps += ['bus_pci', 'cryptodev', 'compressdev']
 sources = files(
         'nitrox_device.c',
         'nitrox_hal.c',
@@ -23,6 +29,7 @@ sources = files(
 )
 includes += include_directories(
         nitrox_crypto_relpath,
+        nitrox_compress_relpath,
 )

 if nitrox_crypto
@@ -33,3 +40,11 @@ if nitrox_crypto
         sources += files(join_paths(nitrox_crypto_relpath, f))
     endforeach
 endif
+
+if nitrox_compress
+    foreach f: ['nitrox_comp.c',
+                'nitrox_comp_reqmgr.c',
+               ]
+        sources += files(join_paths(nitrox_compress_relpath, f))
+    endforeach
+endif

diff --git a/drivers/common/nitrox/nitrox_device.c b/drivers/common/nitrox/nitrox_device.c
index b2f638ec8a..6ac25d5ccc 100644
--- a/drivers/common/nitrox/nitrox_device.c
+++ b/drivers/common/nitrox/nitrox_device.c
@@ -7,6 +7,7 @@
 #include "nitrox_device.h"
 #include "nitrox_hal.h"
 #include "nitrox_sym.h"
+#include "nitrox_comp.h"

 #define PCI_VENDOR_ID_CAVIUM 0x177d
 #define NITROX_V_PCI_VF_DEV_ID 0x13
@@ -67,7 +68,7 @@ nitrox_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		 struct rte_pci_device *pdev)
 {
 	struct nitrox_device *ndev;
-	int err;
+	int err = -1;

 	/* Nitrox CSR space */
 	if (!pdev->mem_resource[0].addr)
@@ -79,12 +80,19 @@ nitrox_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	ndev_init(ndev, pdev);
 	err = nitrox_sym_pmd_create(ndev);
-	if (err) {
-		ndev_release(ndev);
-		return err;
-	}
+	if (err)
+		goto err_exit;
+
+	err = nitrox_comp_pmd_create(ndev);
+	if (err)
+		goto err_exit;

 	return 0;
+err_exit:
+	nitrox_comp_pmd_destroy(ndev);
+	nitrox_sym_pmd_destroy(ndev);
+	ndev_release(ndev);
+	return err;
 }

 static int
@@ -101,6 +109,10 @@ nitrox_pci_remove(struct rte_pci_device *pdev)
 	if (err)
 		return err;

+	err = nitrox_comp_pmd_destroy(ndev);
+	if (err)
+		return err;
+
 	ndev_release(ndev);
 	return 0;
 }
@@ -134,5 +146,19 @@ nitrox_sym_pmd_destroy(struct nitrox_device *ndev)
 	return 0;
 }

+__rte_weak int
+nitrox_comp_pmd_create(struct nitrox_device *ndev)
+{
+	RTE_SET_USED(ndev);
+	return 0;
+}
+
+__rte_weak int
+nitrox_comp_pmd_destroy(struct nitrox_device *ndev)
+{
+	RTE_SET_USED(ndev);
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(nitrox, nitrox_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(nitrox, pci_id_nitrox_map);

diff --git a/drivers/common/nitrox/nitrox_device.h b/drivers/common/nitrox/nitrox_device.h
index 1ff7c59b63..df6b358e14 100644
--- a/drivers/common/nitrox/nitrox_device.h
+++ b/drivers/common/nitrox/nitrox_device.h
@@ -9,13 +9,16 @@
 #include

 struct nitrox_sym_device;
+struct nitrox_comp_device;

 struct nitrox_device {
 	TAILQ_ENTRY(nitrox_device) next;
 	struct rte_pci_device *pdev;
 	uint8_t *bar_addr;
 	struct nitrox_sym_device *sym_dev;
+	struct nitrox_comp_device *comp_dev;
 	struct rte_device rte_sym_dev;
+	struct rte_device rte_comp_dev;
 	uint16_t nr_queues;
 };

diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c
new file mode 100644
index 0000000000..d9bd04db06
--- /dev/null
+++ b/drivers/compress/nitrox/nitrox_comp.c
@@ -0,0 +1,405 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell International Ltd.
+ */
+
+#include
+#include
+#include
+
+#include "nitrox_comp.h"
+#include "nitrox_device.h"
+#include "nitrox_logs.h"
+
+#define COMPRESSDEV_NAME_NITROX_PMD compress_nitrox
+#define NITROX_DECOMP_CTX_SIZE 2048
+#define NITROX_CONSTANTS_MAX_SEARCH_DEPTH 31744
+#define NITROX_COMP_LEVEL_LOWEST_START 1
+#define NITROX_COMP_LEVEL_LOWEST_END 2
+#define NITROX_COMP_LEVEL_LOWER_START 3
+#define NITROX_COMP_LEVEL_LOWER_END 4
+#define NITROX_COMP_LEVEL_MEDIUM_START 5
+#define NITROX_COMP_LEVEL_MEDIUM_END 6
+#define NITROX_COMP_LEVEL_BEST_START 7
+#define NITROX_COMP_LEVEL_BEST_END 9
+
+struct nitrox_comp_device {
+	struct rte_compressdev *cdev;
+	struct nitrox_device *ndev;
+	struct rte_mempool *xform_pool;
+};
+
+enum nitrox_comp_op {
+	NITROX_COMP_OP_DECOMPRESS,
+	NITROX_COMP_OP_COMPRESS,
+};
+
+enum nitrox_comp_algo {
+	NITROX_COMP_ALGO_DEFLATE_DEFAULT,
+	NITROX_COMP_ALGO_DEFLATE_DYNHUFF,
+	NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF,
+	NITROX_COMP_ALGO_LZS,
+};
+
+enum nitrox_comp_level {
+	NITROX_COMP_LEVEL_BEST,
+	NITROX_COMP_LEVEL_MEDIUM,
+	NITROX_COMP_LEVEL_LOWER,
+	NITROX_COMP_LEVEL_LOWEST,
+};
+
+enum nitrox_chksum_type {
+	NITROX_CHKSUM_TYPE_CRC32,
+	NITROX_CHKSUM_TYPE_ADLER32,
+	NITROX_CHKSUM_TYPE_NONE,
+};
+
+struct nitrox_comp_xform {
+	enum nitrox_comp_op op;
+	enum nitrox_comp_algo algo;
+	enum nitrox_comp_level level;
+	enum nitrox_chksum_type chksum_type;
+};
+
+struct nitrox_comp_stream {
+	struct nitrox_comp_xform xform;
+	int window_size;
+	char context[NITROX_DECOMP_CTX_SIZE] __rte_aligned(8);
+	char history_window[NITROX_CONSTANTS_MAX_SEARCH_DEPTH] __rte_aligned(8);
+};
+
+static const char nitrox_comp_drv_name[] = RTE_STR(COMPRESSDEV_NAME_NITROX_PMD);
+static const struct rte_driver nitrox_rte_comp_drv = {
+	.name = nitrox_comp_drv_name,
+	.alias = nitrox_comp_drv_name
+};
+
+static const struct rte_compressdev_capabilities
+				nitrox_comp_pmd_capabilities[] = {
+	{	.algo = RTE_COMP_ALGO_DEFLATE,
+		.comp_feature_flags = RTE_COMP_FF_HUFFMAN_FIXED |
+				      RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				      RTE_COMP_FF_CRC32_CHECKSUM |
+				      RTE_COMP_FF_ADLER32_CHECKSUM |
+				      RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				      RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				      RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				      RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+		.window_size = {
+			.min = 1,
+			.max = 15,
+			.increment = 1
+		},
+	},
+	RTE_COMP_END_OF_CAPABILITIES_LIST()
+};
+
+static int nitrox_comp_dev_configure(struct rte_compressdev *dev,
+				     struct rte_compressdev_config *config)
+{
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+	struct nitrox_device *ndev = comp_dev->ndev;
+
+	if (config->nb_queue_pairs > ndev->nr_queues) {
+		NITROX_LOG(ERR, "Invalid queue pairs, max supported %d\n",
+			   ndev->nr_queues);
+		return -EINVAL;
+	}
+
+	if (config->max_nb_priv_xforms) {
+		char xform_name[RTE_MEMPOOL_NAMESIZE];
+
+		snprintf(xform_name, sizeof(xform_name), "%s_xform",
+			 dev->data->name);
+		comp_dev->xform_pool = rte_mempool_create(xform_name,
+				config->max_nb_priv_xforms,
+				sizeof(struct nitrox_comp_xform),
+				0, 0, NULL, NULL, NULL, NULL,
+				config->socket_id, 0);
+		if (comp_dev->xform_pool == NULL) {
+			NITROX_LOG(ERR, "Failed to create xform pool, err %d\n",
+				   rte_errno);
+			return -rte_errno;
+		}
+	}
+
+	return 0;
+}
+
+static int nitrox_comp_dev_start(struct rte_compressdev *dev)
+{
+	RTE_SET_USED(dev);
+	return 0;
+}
+
+static void nitrox_comp_dev_stop(struct rte_compressdev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
+static int nitrox_comp_dev_close(struct rte_compressdev *dev)
+{
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+
+	rte_mempool_free(comp_dev->xform_pool);
+	comp_dev->xform_pool = NULL;
+	return 0;
+}
+
+static void nitrox_comp_stats_get(struct rte_compressdev *dev,
+				  struct rte_compressdev_stats *stats)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(stats);
+}
+
+static void nitrox_comp_stats_reset(struct rte_compressdev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
+static void nitrox_comp_dev_info_get(struct rte_compressdev *dev,
+				     struct rte_compressdev_info *info)
+{
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+	struct nitrox_device *ndev = comp_dev->ndev;
+
+	if (!info)
+		return;
+
+	info->max_nb_queue_pairs = ndev->nr_queues;
+	info->feature_flags = dev->feature_flags;
+	info->capabilities = nitrox_comp_pmd_capabilities;
+}
+
+static int nitrox_comp_queue_pair_setup(struct rte_compressdev *dev,
+					uint16_t qp_id,
+					uint32_t max_inflight_ops, int socket_id)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(qp_id);
+	RTE_SET_USED(max_inflight_ops);
+	RTE_SET_USED(socket_id);
+	return -1;
+}
+
+static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev,
+					  uint16_t qp_id)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(qp_id);
+	return 0;
+}
+
+static int nitrox_comp_private_xform_create(struct rte_compressdev *dev,
+					    const struct rte_comp_xform *xform,
+					    void **private_xform)
+{
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+	struct nitrox_comp_xform *nitrox_xform;
+	enum rte_comp_checksum_type chksum_type;
+	int ret;
+
+	if (unlikely(comp_dev->xform_pool == NULL)) {
+		NITROX_LOG(ERR, "private xform pool not yet created\n");
+		return -EINVAL;
+	}
+
+	if (rte_mempool_get(comp_dev->xform_pool, private_xform)) {
+		NITROX_LOG(ERR, "Failed to get from private xform pool\n");
+		return -ENOMEM;
+	}
+
+	nitrox_xform = (struct nitrox_comp_xform *)*private_xform;
+	if (xform->type == RTE_COMP_COMPRESS) {
+		enum rte_comp_huffman algo;
+		int level;
+
+		nitrox_xform->op = NITROX_COMP_OP_COMPRESS;
+		if (xform->compress.algo != RTE_COMP_ALGO_DEFLATE) {
+			NITROX_LOG(ERR, "Only deflate is supported\n");
+			ret = -ENOTSUP;
+			goto err_exit;
+		}
+
+		algo = xform->compress.deflate.huffman;
+		if (algo == RTE_COMP_HUFFMAN_DEFAULT)
+			nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT;
+		else if (algo == RTE_COMP_HUFFMAN_FIXED)
+			nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF;
+		else if (algo == RTE_COMP_HUFFMAN_DYNAMIC)
+			nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_DYNHUFF;
+		else {
+			NITROX_LOG(ERR, "Invalid deflate algorithm %d\n", algo);
+			ret = -EINVAL;
+			goto err_exit;
+		}
+
+		level = xform->compress.level;
+		if (level >= NITROX_COMP_LEVEL_LOWEST_START &&
+		    level <= NITROX_COMP_LEVEL_LOWEST_END) {
+			nitrox_xform->level = NITROX_COMP_LEVEL_LOWEST;
+		} else if (level >= NITROX_COMP_LEVEL_LOWER_START &&
+			   level <= NITROX_COMP_LEVEL_LOWER_END) {
+			nitrox_xform->level = NITROX_COMP_LEVEL_LOWER;
+		} else if (level >= NITROX_COMP_LEVEL_MEDIUM_START &&
+			   level <= NITROX_COMP_LEVEL_MEDIUM_END) {
+			nitrox_xform->level = NITROX_COMP_LEVEL_MEDIUM;
+		} else if (level >= NITROX_COMP_LEVEL_BEST_START &&
+			   level <= NITROX_COMP_LEVEL_BEST_END) {
+			nitrox_xform->level = NITROX_COMP_LEVEL_BEST;
+		} else {
+			NITROX_LOG(ERR, "Unsupported compression level %d\n",
+				   xform->compress.level);
+			ret = -ENOTSUP;
+			goto err_exit;
+		}
+
+		chksum_type = xform->compress.chksum;
+	} else if (xform->type == RTE_COMP_DECOMPRESS) {
+		nitrox_xform->op = NITROX_COMP_OP_DECOMPRESS;
+		if (xform->decompress.algo != RTE_COMP_ALGO_DEFLATE) {
+			NITROX_LOG(ERR, "Only deflate is supported\n");
+			ret = -ENOTSUP;
+			goto err_exit;
+		}
+
+		nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT;
+		nitrox_xform->level = NITROX_COMP_LEVEL_BEST;
+		chksum_type = xform->decompress.chksum;
+	} else {
+		ret = -EINVAL;
+		goto err_exit;
+	}
+
+	if (chksum_type == RTE_COMP_CHECKSUM_NONE)
+		nitrox_xform->chksum_type = NITROX_CHKSUM_TYPE_NONE;
+	else if (chksum_type == RTE_COMP_CHECKSUM_CRC32)
+		nitrox_xform->chksum_type = NITROX_CHKSUM_TYPE_CRC32;
+	else if (chksum_type == RTE_COMP_CHECKSUM_ADLER32)
+		nitrox_xform->chksum_type = NITROX_CHKSUM_TYPE_ADLER32;
+	else {
+		NITROX_LOG(ERR, "Unsupported checksum type %d\n",
+			   chksum_type);
+		ret = -ENOTSUP;
+		goto err_exit;
+	}
+
+	return 0;
+err_exit:
+	memset(nitrox_xform, 0, sizeof(*nitrox_xform));
+	rte_mempool_put(comp_dev->xform_pool, nitrox_xform);
+	return ret;
+}
+
+static int nitrox_comp_private_xform_free(struct rte_compressdev *dev,
+					  void *private_xform)
+{
+	struct nitrox_comp_xform *nitrox_xform = private_xform;
+	struct rte_mempool *mp;
+
+	RTE_SET_USED(dev);
+	if (nitrox_xform == NULL)
+		return -EINVAL;
+
+	memset(nitrox_xform, 0, sizeof(*nitrox_xform));
+	mp = rte_mempool_from_obj(nitrox_xform);
+	rte_mempool_put(mp, nitrox_xform);
+	return 0;
+}
+
+static uint16_t nitrox_comp_dev_enq_burst(void *qp,
+					  struct rte_comp_op **ops,
+					  uint16_t nb_ops)
+{
+	RTE_SET_USED(qp);
+	RTE_SET_USED(ops);
+	RTE_SET_USED(nb_ops);
+	return 0;
+}
+
+static uint16_t nitrox_comp_dev_deq_burst(void *qp,
+					  struct rte_comp_op **ops,
+					  uint16_t nb_ops)
+{
+	RTE_SET_USED(qp);
+	RTE_SET_USED(ops);
+	RTE_SET_USED(nb_ops);
+	return 0;
+}
+
+static struct rte_compressdev_ops nitrox_compressdev_ops = {
+	.dev_configure = nitrox_comp_dev_configure,
+	.dev_start = nitrox_comp_dev_start,
+	.dev_stop = nitrox_comp_dev_stop,
+	.dev_close = nitrox_comp_dev_close,
+
+	.stats_get = nitrox_comp_stats_get,
+	.stats_reset = nitrox_comp_stats_reset,
+
+	.dev_infos_get = nitrox_comp_dev_info_get,
+
+	.queue_pair_setup = nitrox_comp_queue_pair_setup,
+	.queue_pair_release = nitrox_comp_queue_pair_release,
+
+	.private_xform_create = nitrox_comp_private_xform_create,
+	.private_xform_free = nitrox_comp_private_xform_free,
+	.stream_create = NULL,
+	.stream_free = NULL
+};
+
+int
+nitrox_comp_pmd_create(struct nitrox_device *ndev)
+{
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+	struct rte_compressdev_pmd_init_params init_params = {
+			.name = "",
+			.socket_id = ndev->pdev->device.numa_node,
+	};
+	struct rte_compressdev *cdev;
+
+	rte_pci_device_name(&ndev->pdev->addr, name, sizeof(name));
+	snprintf(name + strlen(name),
+		 RTE_COMPRESSDEV_NAME_MAX_LEN - strlen(name),
+		 "_n5comp");
+	ndev->rte_comp_dev.driver = &nitrox_rte_comp_drv;
+	ndev->rte_comp_dev.numa_node = ndev->pdev->device.numa_node;
+	ndev->rte_comp_dev.devargs = NULL;
+	cdev = rte_compressdev_pmd_create(name,
+					  &ndev->rte_comp_dev,
+					  sizeof(struct nitrox_comp_device),
+					  &init_params);
+	if (!cdev) {
+		NITROX_LOG(ERR, "Compressdev '%s' creation failed\n", name);
+		return -ENODEV;
+	}
+
+	cdev->dev_ops = &nitrox_compressdev_ops;
+	cdev->enqueue_burst = nitrox_comp_dev_enq_burst;
+	cdev->dequeue_burst = nitrox_comp_dev_deq_burst;
+	cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+
+	ndev->comp_dev = cdev->data->dev_private;
+	ndev->comp_dev->cdev = cdev;
+	ndev->comp_dev->ndev = ndev;
+	ndev->comp_dev->xform_pool = NULL;
+	NITROX_LOG(DEBUG, "Created compressdev '%s', dev_id %d\n",
+		   cdev->data->name, cdev->data->dev_id);
+	return 0;
+}
+
+int
+nitrox_comp_pmd_destroy(struct nitrox_device *ndev)
+{
+	int err;
+
+	if (ndev->comp_dev == NULL)
+		return 0;
+
+	err = rte_compressdev_pmd_destroy(ndev->comp_dev->cdev);
+	if (err)
+		return err;
+
+	ndev->comp_dev = NULL;
+	return 0;
+}

diff --git a/drivers/compress/nitrox/nitrox_comp.h b/drivers/compress/nitrox/nitrox_comp.h
new file mode 100644
index 0000000000..536d314ca9
--- /dev/null
+++ b/drivers/compress/nitrox/nitrox_comp.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell International Ltd.
+ */
+
+#ifndef _NITROX_COMP_H_
+#define _NITROX_COMP_H_
+
+struct nitrox_device;
+
+int nitrox_comp_pmd_create(struct nitrox_device *ndev);
+int nitrox_comp_pmd_destroy(struct nitrox_device *ndev);
+
+#endif /* _NITROX_COMP_H_ */

diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.c b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
new file mode 100644
index 0000000000..5ff64fabce
--- /dev/null
+++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell International Ltd.
+ */

From patchwork Fri Oct 27 14:55:29 2023
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 133508
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
To: Nagadheeraj Rottela, Srikanth Jampala
Subject: [PATCH 3/7] common/nitrox: add compress hardware queue management
Date: Fri, 27 Oct 2023 20:25:29 +0530
Message-ID: <20231027145534.16803-4-rnagadheeraj@marvell.com>
In-Reply-To: <20231027145534.16803-1-rnagadheeraj@marvell.com>
List-Id: DPDK patches and discussions

Add compress device ring initialization and cleanup code.
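For reference, the ZQM virtual-function registers this patch adds to nitrox_csr.h are laid out at a fixed 0x40000-byte stride per ring, so every per-ring CSR address is derived from the ring index. A minimal standalone sketch of that offset arithmetic (the `zqmq_reg()` helper is illustrative, not part of the driver; the base offsets and stride match the macros below):

```c
#include <assert.h>
#include <stdint.h>

/* Each ZQM virtual-function ring owns a 0x40000-byte register window. */
#define ZQMQ_RING_STRIDE 0x40000UL

/* Illustrative helper: register base within ring 0, plus per-ring stride. */
static inline uint64_t zqmq_reg(uint64_t base, uint16_t ring)
{
	return base + ((uint64_t)ring * ZQMQ_RING_STRIDE);
}

/* Same values the patch encodes directly in its ZQMQ_*X() macros. */
#define ZQMQ_DRBLX(_i)	zqmq_reg(0x30000UL, (_i))	/* doorbell */
#define ZQMQ_QSZX(_i)	zqmq_reg(0x30008UL, (_i))	/* queue size */
#define ZQMQ_ENX(_i)	zqmq_reg(0x30048UL, (_i))	/* queue enable */
```

The same stride is used by the existing AQM registers (base 0x20008), which is why a single VF window covers all per-ring CSRs.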
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 drivers/common/nitrox/nitrox_csr.h |  12 +++
 drivers/common/nitrox/nitrox_hal.c | 116 +++++++++++++++++++++++++++++
 drivers/common/nitrox/nitrox_hal.h | 115 ++++++++++++++++++++++++++++
 drivers/common/nitrox/nitrox_qp.c  |  53 +++++++++++--
 drivers/common/nitrox/nitrox_qp.h  |  35 ++++++++-
 5 files changed, 322 insertions(+), 9 deletions(-)

diff --git a/drivers/common/nitrox/nitrox_csr.h b/drivers/common/nitrox/nitrox_csr.h
index de7a3c6713..1ee538f53c 100644
--- a/drivers/common/nitrox/nitrox_csr.h
+++ b/drivers/common/nitrox/nitrox_csr.h
@@ -25,6 +25,18 @@
 /* AQM Virtual Function Registers */
 #define AQMQ_QSZX(_i)		(0x20008UL + ((_i) * 0x40000UL))
 
+/* ZQM virtual function registers */
+#define ZQMQ_DRBLX(_i)		(0x30000UL + ((_i) * 0x40000UL))
+#define ZQMQ_QSZX(_i)		(0x30008UL + ((_i) * 0x40000UL))
+#define ZQMQ_BADRX(_i)		(0x30010UL + ((_i) * 0x40000UL))
+#define ZQMQ_NXT_CMDX(_i)	(0x30018UL + ((_i) * 0x40000UL))
+#define ZQMQ_CMD_CNTX(_i)	(0x30020UL + ((_i) * 0x40000UL))
+#define ZQMQ_CMP_THRX(_i)	(0x30028UL + ((_i) * 0x40000UL))
+#define ZQMQ_CMP_CNTX(_i)	(0x30030UL + ((_i) * 0x40000UL))
+#define ZQMQ_TIM_LDX(_i)	(0x30038UL + ((_i) * 0x40000UL))
+#define ZQMQ_ENX(_i)		(0x30048UL + ((_i) * 0x40000UL))
+#define ZQMQ_ACTIVITY_STATX(_i)	(0x30050UL + ((_i) * 0x40000UL))
+
 static inline uint64_t
 nitrox_read_csr(uint8_t *bar_addr, uint64_t offset)
 {
diff --git a/drivers/common/nitrox/nitrox_hal.c b/drivers/common/nitrox/nitrox_hal.c
index 433f3adb20..4da9490fff 100644
--- a/drivers/common/nitrox/nitrox_hal.c
+++ b/drivers/common/nitrox/nitrox_hal.c
@@ -9,6 +9,7 @@
 
 #include "nitrox_hal.h"
 #include "nitrox_csr.h"
+#include "nitrox_logs.h"
 
 #define MAX_VF_QUEUES	8
 #define MAX_PF_QUEUES	64
@@ -164,6 +165,121 @@ setup_nps_pkt_solicit_output_port(uint8_t *bar_addr, uint16_t port)
 	}
 }
 
+int
+zqmq_input_ring_disable(uint8_t *bar_addr, uint16_t ring)
+{
+	union zqmq_activity_stat zqmq_activity_stat;
+	union zqmq_en zqmq_en;
+	union zqmq_cmp_cnt zqmq_cmp_cnt;
+	uint64_t reg_addr;
+	int max_retries = 5;
+
+	/* clear queue enable */
+	reg_addr = ZQMQ_ENX(ring);
+	zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	zqmq_en.s.queue_enable = 0;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_en.u64);
+	rte_delay_us_block(100);
+
+	/* wait for queue active to clear */
+	reg_addr = ZQMQ_ACTIVITY_STATX(ring);
+	zqmq_activity_stat.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	while (zqmq_activity_stat.s.queue_active && max_retries--) {
+		rte_delay_ms(10);
+		zqmq_activity_stat.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	}
+
+	if (zqmq_activity_stat.s.queue_active) {
+		NITROX_LOG(ERR, "Failed to disable zqmq ring %d\n", ring);
+		return -EBUSY;
+	}
+
+	/* clear commands completed count */
+	reg_addr = ZQMQ_CMP_CNTX(ring);
+	zqmq_cmp_cnt.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_cmp_cnt.u64);
+	rte_delay_us_block(CSR_DELAY);
+	return 0;
+}
+
+int
+setup_zqmq_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
+		      phys_addr_t raddr)
+{
+	union zqmq_drbl zqmq_drbl;
+	union zqmq_qsz zqmq_qsz;
+	union zqmq_en zqmq_en;
+	union zqmq_cmp_thr zqmq_cmp_thr;
+	union zqmq_tim_ld zqmq_tim_ld;
+	uint64_t reg_addr = 0;
+	int max_retries = 5;
+	int err = 0;
+
+	err = zqmq_input_ring_disable(bar_addr, ring);
+	if (err)
+		return err;
+
+	/* clear doorbell count */
+	reg_addr = ZQMQ_DRBLX(ring);
+	zqmq_drbl.u64 = 0;
+	zqmq_drbl.s.dbell_count = 0xFFFFFFFF;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_drbl.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	reg_addr = ZQMQ_NXT_CMDX(ring);
+	nitrox_write_csr(bar_addr, reg_addr, 0);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write queue length */
+	reg_addr = ZQMQ_QSZX(ring);
+	zqmq_qsz.u64 = 0;
+	zqmq_qsz.s.host_queue_size = rsize;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_qsz.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write queue base address */
+	reg_addr = ZQMQ_BADRX(ring);
+	nitrox_write_csr(bar_addr, reg_addr, raddr);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write commands completed threshold */
+	reg_addr = ZQMQ_CMP_THRX(ring);
+	zqmq_cmp_thr.u64 = 0;
+	zqmq_cmp_thr.s.commands_completed_threshold = 0;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_cmp_thr.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write timer load value */
+	reg_addr = ZQMQ_TIM_LDX(ring);
+	zqmq_tim_ld.u64 = 0;
+	zqmq_tim_ld.s.timer_load_value = 0;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_tim_ld.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* enable queue */
+	reg_addr = ZQMQ_ENX(ring);
+	zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	zqmq_en.s.queue_enable = 1;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_en.u64);
+	rte_delay_us_block(100);
+
+	/* wait for queue enable to be reflected */
+	zqmq_en.u64 = 0;
+	zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	while (!zqmq_en.s.queue_enable && max_retries--) {
+		rte_delay_ms(10);
+		zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	}
+
+	if (!zqmq_en.s.queue_enable) {
+		NITROX_LOG(ERR, "Failed to enable zqmq ring %d\n", ring);
+		err = -EFAULT;
+	} else {
+		err = 0;
+	}
+
+	return err;
+}
+
 int
 vf_get_vf_config_mode(uint8_t *bar_addr)
 {
diff --git a/drivers/common/nitrox/nitrox_hal.h b/drivers/common/nitrox/nitrox_hal.h
index dcfbd11d85..dee7a4d9e7 100644
--- a/drivers/common/nitrox/nitrox_hal.h
+++ b/drivers/common/nitrox/nitrox_hal.h
@@ -146,6 +146,101 @@ union aqmq_qsz {
 	} s;
 };
 
+union zqmq_activity_stat {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 63;
+		uint64_t queue_active : 1;
+#else
+		uint64_t queue_active : 1;
+		uint64_t raz : 63;
+#endif
+	} s;
+};
+
+union zqmq_en {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 63;
+		uint64_t queue_enable : 1;
+#else
+		uint64_t queue_enable : 1;
+		uint64_t raz : 63;
+#endif
+	} s;
+};
+
+union zqmq_cmp_cnt {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 30;
+		uint64_t resend : 1;
+		uint64_t completion_status : 1;
+		uint64_t commands_completed_count : 32;
+#else
+		uint64_t commands_completed_count : 32;
+		uint64_t completion_status : 1;
+		uint64_t resend : 1;
+		uint64_t raz : 30;
+#endif
+	} s;
+};
+
+union zqmq_drbl {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t dbell_count : 32;
+#else
+		uint64_t dbell_count : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
+union zqmq_qsz {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t host_queue_size : 32;
+#else
+		uint64_t host_queue_size : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
+union zqmq_cmp_thr {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t commands_completed_threshold : 32;
+#else
+		uint64_t commands_completed_threshold : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
+union zqmq_tim_ld {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t timer_load_value : 32;
+#else
+		uint64_t timer_load_value : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
 enum nitrox_vf_mode {
 	NITROX_MODE_PF = 0x0,
 	NITROX_MODE_VF16 = 0x1,
@@ -154,6 +249,23 @@ enum nitrox_vf_mode {
 	NITROX_MODE_VF128 = 0x4,
 };
 
+static inline int
+inc_zqmq_next_cmd(uint8_t *bar_addr, uint16_t ring)
+{
+	uint64_t reg_addr = 0;
+	uint64_t val;
+
+	reg_addr = ZQMQ_NXT_CMDX(ring);
+	val = nitrox_read_csr(bar_addr, reg_addr);
+	val++;
+	nitrox_write_csr(bar_addr, reg_addr, val);
+	rte_delay_us_block(CSR_DELAY);
+	if (nitrox_read_csr(bar_addr, reg_addr) != val)
+		return -EIO;
+
+	return 0;
+}
+
 int vf_get_vf_config_mode(uint8_t *bar_addr);
 int vf_config_mode_to_nr_queues(enum nitrox_vf_mode vf_mode);
 void setup_nps_pkt_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
@@ -161,5 +273,8 @@ void setup_nps_pkt_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
 void setup_nps_pkt_solicit_output_port(uint8_t *bar_addr, uint16_t port);
 void nps_pkt_input_ring_disable(uint8_t *bar_addr, uint16_t ring);
 void nps_pkt_solicited_port_disable(uint8_t *bar_addr, uint16_t port);
+int setup_zqmq_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
+			  phys_addr_t raddr);
+int zqmq_input_ring_disable(uint8_t *bar_addr, uint16_t ring);
 
 #endif /* _NITROX_HAL_H_ */
diff --git a/drivers/common/nitrox/nitrox_qp.c b/drivers/common/nitrox/nitrox_qp.c
index 5e85ccbd51..6ec0781f1a 100644
--- a/drivers/common/nitrox/nitrox_qp.c
+++ b/drivers/common/nitrox/nitrox_qp.c
@@ -2,7 +2,7 @@
  * Copyright(C) 2019 Marvell International Ltd.
  */
 
-#include
+#include
 #include
 
 #include "nitrox_qp.h"
@@ -20,6 +20,7 @@ nitrox_setup_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr,
 	const struct rte_memzone *mz;
 	size_t cmdq_size = qp->count * instr_size;
 	uint64_t offset;
+	int err = 0;
 
 	snprintf(mz_name, sizeof(mz_name), "%s_cmdq_%d", dev_name, qp->qno);
 	mz = rte_memzone_reserve_aligned(mz_name, cmdq_size, socket_id,
@@ -32,14 +33,34 @@ nitrox_setup_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr,
 		return -ENOMEM;
 	}
 
+	switch (qp->type) {
+	case NITROX_QUEUE_SE:
+		offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(qp->qno);
+		qp->cmdq.dbell_csr_addr = NITROX_CSR_ADDR(bar_addr, offset);
+		setup_nps_pkt_input_ring(bar_addr, qp->qno, qp->count,
+					 mz->iova);
+		setup_nps_pkt_solicit_output_port(bar_addr, qp->qno);
+		break;
+	case NITROX_QUEUE_ZIP:
+		offset = ZQMQ_DRBLX(qp->qno);
+		qp->cmdq.dbell_csr_addr = NITROX_CSR_ADDR(bar_addr, offset);
+		err = setup_zqmq_input_ring(bar_addr, qp->qno, qp->count,
+					    mz->iova);
+		break;
+	default:
+		NITROX_LOG(ERR, "Invalid queue type %d\n", qp->type);
+		err = -EINVAL;
+		break;
+	}
+
+	if (err) {
+		rte_memzone_free(mz);
+		return err;
+	}
+
 	qp->cmdq.mz = mz;
-	offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(qp->qno);
-	qp->cmdq.dbell_csr_addr = NITROX_CSR_ADDR(bar_addr, offset);
 	qp->cmdq.ring = mz->addr;
 	qp->cmdq.instr_size = instr_size;
-	setup_nps_pkt_input_ring(bar_addr, qp->qno, qp->count, mz->iova);
-	setup_nps_pkt_solicit_output_port(bar_addr, qp->qno);
-
 	return 0;
 }
 
@@ -62,8 +83,23 @@ nitrox_setup_ridq(struct nitrox_qp *qp, int socket_id)
 static int
 nitrox_release_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr)
 {
-	nps_pkt_solicited_port_disable(bar_addr, qp->qno);
-	nps_pkt_input_ring_disable(bar_addr, qp->qno);
+	int err = 0;
+
+	switch (qp->type) {
+	case NITROX_QUEUE_SE:
+		nps_pkt_solicited_port_disable(bar_addr, qp->qno);
+		nps_pkt_input_ring_disable(bar_addr, qp->qno);
+		break;
+	case NITROX_QUEUE_ZIP:
+		err = zqmq_input_ring_disable(bar_addr, qp->qno);
+		break;
+	default:
+		err = -EINVAL;
+	}
+
+	if (err)
+		return err;
+
 	return rte_memzone_free(qp->cmdq.mz);
 }
 
@@ -83,6 +119,7 @@ nitrox_qp_setup(struct nitrox_qp *qp, uint8_t *bar_addr, const char *dev_name,
 		return -EINVAL;
 	}
 
+	qp->bar_addr = bar_addr;
 	qp->count = count;
 	qp->head = qp->tail = 0;
 	rte_atomic16_init(&qp->pending_count);
diff --git a/drivers/common/nitrox/nitrox_qp.h b/drivers/common/nitrox/nitrox_qp.h
index d42d53f92b..177bcd7705 100644
--- a/drivers/common/nitrox/nitrox_qp.h
+++ b/drivers/common/nitrox/nitrox_qp.h
@@ -8,9 +8,16 @@
 #include
 #include
 
+#include "nitrox_hal.h"
+
 struct nitrox_softreq;
 
+enum nitrox_queue_type {
+	NITROX_QUEUE_SE,
+	NITROX_QUEUE_AE,
+	NITROX_QUEUE_ZIP,
+};
+
 struct command_queue {
 	const struct rte_memzone *mz;
 	uint8_t *dbell_csr_addr;
@@ -22,14 +29,23 @@ struct rid {
 	struct nitrox_softreq *sr;
 };
 
+struct nitrox_qp_stats {
+	uint64_t enqueued_count;
+	uint64_t dequeued_count;
+	uint64_t enqueue_err_count;
+	uint64_t dequeue_err_count;
+};
+
 struct nitrox_qp {
+	enum nitrox_queue_type type;
+	uint8_t *bar_addr;
 	struct command_queue cmdq;
 	struct rid *ridq;
 	uint32_t count;
 	uint32_t head;
 	uint32_t tail;
 	struct rte_mempool *sr_mp;
-	struct rte_cryptodev_stats stats;
+	struct nitrox_qp_stats stats;
 	uint16_t qno;
 	rte_atomic16_t pending_count;
 };
@@ -89,6 +105,23 @@ nitrox_qp_enqueue(struct nitrox_qp *qp, void *instr, struct nitrox_softreq *sr)
 	rte_atomic16_inc(&qp->pending_count);
 }
 
+static inline int
+nitrox_qp_enqueue_sr(struct nitrox_qp *qp, struct nitrox_softreq *sr)
+{
+	uint32_t head = qp->head % qp->count;
+	int err;
+
+	err = inc_zqmq_next_cmd(qp->bar_addr, qp->qno);
+	if (unlikely(err))
+		return err;
+
+	qp->head++;
+	qp->ridq[head].sr = sr;
+	rte_smp_wmb();
+	rte_atomic16_inc(&qp->pending_count);
+	return 0;
+}
+
 static inline void
 nitrox_qp_dequeue(struct nitrox_qp *qp)
 {

From patchwork Fri Oct 27 14:55:30 2023
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 133509
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
To: Nagadheeraj Rottela, Srikanth Jampala
Subject: [PATCH 4/7] crypto/nitrox: set queue type during queue pair setup
Date: Fri, 27 Oct 2023 20:25:30 +0530
Message-ID: <20231027145534.16803-5-rnagadheeraj@marvell.com>
In-Reply-To: <20231027145534.16803-1-rnagadheeraj@marvell.com>

Set queue type as SE to initialize symmetric hardware queue.
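Setting `qp->type` matters because the common queue-pair code dispatches on it: SE queues program the NPS packet-I/O rings while ZIP queues program the ZQM rings. A minimal sketch of that dispatch, mirroring the switch added to `nitrox_setup_cmdq()` in patch 3 (the `setup_se_ring()`/`setup_zip_ring()` stubs are hypothetical stand-ins for the real CSR programming):

```c
#include <assert.h>
#include <errno.h>

/* Mirrors enum nitrox_queue_type from nitrox_qp.h. */
enum nitrox_queue_type {
	NITROX_QUEUE_SE,	/* symmetric crypto ring (NPS packet I/O) */
	NITROX_QUEUE_AE,	/* asymmetric ring */
	NITROX_QUEUE_ZIP,	/* compression ring (ZQM) */
};

/* Hypothetical stand-ins for the real ring-setup helpers. */
static int setup_se_ring(void)  { return 0; }
static int setup_zip_ring(void) { return 0; }

/* Sketch of the per-type dispatch performed by nitrox_setup_cmdq(). */
static int setup_cmdq(enum nitrox_queue_type type)
{
	switch (type) {
	case NITROX_QUEUE_SE:
		return setup_se_ring();
	case NITROX_QUEUE_ZIP:
		return setup_zip_ring();
	default:
		return -EINVAL;	/* AE rings are not handled by this path */
	}
}
```

With the type recorded at queue-pair setup, the same common code serves both the crypto and compress PMDs without either knowing the other's ring layout.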
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 drivers/crypto/nitrox/nitrox_sym.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index 1244317438..03652d3ade 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -198,6 +198,7 @@ nitrox_sym_dev_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
 		return -ENOMEM;
 	}
 
+	qp->type = NITROX_QUEUE_SE;
 	qp->qno = qp_id;
 	err = nitrox_qp_setup(qp, ndev->bar_addr, cdev->data->name,
 			      qp_conf->nb_descriptors, NPS_PKT_IN_INSTR_SIZE,

From patchwork Fri Oct 27 14:55:31 2023
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 133510
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
To: Nagadheeraj Rottela, Fan Zhang, Ashish Gupta
Subject: [PATCH 5/7] compress/nitrox: add software queue management
Date: Fri, 27 Oct 2023 20:25:31 +0530
Message-ID: <20231027145534.16803-6-rnagadheeraj@marvell.com>
In-Reply-To: <20231027145534.16803-1-rnagadheeraj@marvell.com>

Add software queue management code corresponding to queue pair setup
and release functions.
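The queue pair managed here is a fixed-size ring: head advances on enqueue, tail on dequeue, and a pending counter decides emptiness (the `nitrox_qp_is_empty()` check below refuses to release a non-empty queue). A small self-contained sketch of that accounting, with illustrative names modeled on `struct nitrox_qp` (count/head/tail plus a pending counter), not the driver's actual helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the nitrox_qp ring bookkeeping; names are illustrative. */
struct toy_qp {
	uint32_t count;		/* ring size in slots */
	uint32_t head;		/* next slot to fill (enqueue side) */
	uint32_t tail;		/* next slot to drain (dequeue side) */
	uint32_t pending;	/* requests currently in flight */
};

static uint32_t toy_qp_used(const struct toy_qp *qp) { return qp->pending; }
static uint32_t toy_qp_free(const struct toy_qp *qp) { return qp->count - qp->pending; }
static int toy_qp_is_empty(const struct toy_qp *qp)  { return qp->pending == 0; }

static int toy_qp_enqueue(struct toy_qp *qp)
{
	if (toy_qp_free(qp) == 0)
		return -1;	/* ring full */
	qp->head = (qp->head + 1) % qp->count;
	qp->pending++;
	return 0;
}

static int toy_qp_dequeue(struct toy_qp *qp)
{
	if (toy_qp_is_empty(qp))
		return -1;	/* nothing in flight */
	qp->tail = (qp->tail + 1) % qp->count;
	qp->pending--;
	return 0;
}
```

The real driver additionally maps each slot to a software request (`ridq[head].sr`) and uses an atomic pending count, but the free/used arithmetic is the same.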
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 drivers/compress/nitrox/nitrox_comp.c | 116 +++++++++++++++++++++++---
 1 file changed, 105 insertions(+), 11 deletions(-)

diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c
index d9bd04db06..2d6e6dcc17 100644
--- a/drivers/compress/nitrox/nitrox_comp.c
+++ b/drivers/compress/nitrox/nitrox_comp.c
@@ -5,10 +5,12 @@
 #include
 #include
 #include
+#include
 
 #include "nitrox_comp.h"
 #include "nitrox_device.h"
 #include "nitrox_logs.h"
+#include "nitrox_qp.h"
 
 #define COMPRESSDEV_NAME_NITROX_PMD compress_nitrox
 #define NITROX_DECOMP_CTX_SIZE 2048
@@ -21,6 +23,7 @@
 #define NITROX_COMP_LEVEL_MEDIUM_END 6
 #define NITROX_COMP_LEVEL_BEST_START 7
 #define NITROX_COMP_LEVEL_BEST_END 9
+#define ZIP_INSTR_SIZE 64
 
 struct nitrox_comp_device {
 	struct rte_compressdev *cdev;
@@ -73,6 +76,9 @@ static const struct rte_driver nitrox_rte_comp_drv = {
 	.alias = nitrox_comp_drv_name
 };
 
+static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev,
+					  uint16_t qp_id);
+
 static const struct rte_compressdev_capabilities
 			nitrox_comp_pmd_capabilities[] = {
 	{ .algo = RTE_COMP_ALGO_DEFLATE,
@@ -138,8 +144,15 @@ static void nitrox_comp_dev_stop(struct rte_compressdev *dev)
 
 static int nitrox_comp_dev_close(struct rte_compressdev *dev)
 {
+	int i, ret;
 	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
 
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = nitrox_comp_queue_pair_release(dev, i);
+		if (ret)
+			return ret;
+	}
+
 	rte_mempool_free(comp_dev->xform_pool);
 	comp_dev->xform_pool = NULL;
 	return 0;
@@ -148,13 +161,33 @@ static int nitrox_comp_dev_close(struct rte_compressdev *dev)
 static void nitrox_comp_stats_get(struct rte_compressdev *dev,
 				  struct rte_compressdev_stats *stats)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(stats);
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct nitrox_qp *qp = dev->data->queue_pairs[qp_id];
+
+		if (!qp)
+			continue;
+
+		stats->enqueued_count += qp->stats.enqueued_count;
+		stats->dequeued_count += qp->stats.dequeued_count;
+		stats->enqueue_err_count += qp->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->stats.dequeue_err_count;
+	}
 }
 
 static void nitrox_comp_stats_reset(struct rte_compressdev *dev)
 {
-	RTE_SET_USED(dev);
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct nitrox_qp *qp = dev->data->queue_pairs[qp_id];
+
+		if (!qp)
+			continue;
+
+		memset(&qp->stats, 0, sizeof(qp->stats));
+	}
 }
 
 static void nitrox_comp_dev_info_get(struct rte_compressdev *dev,
@@ -175,19 +208,80 @@ static int nitrox_comp_queue_pair_setup(struct rte_compressdev *dev,
 					uint16_t qp_id,
 					uint32_t max_inflight_ops, int socket_id)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(qp_id);
-	RTE_SET_USED(max_inflight_ops);
-	RTE_SET_USED(socket_id);
-	return -1;
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+	struct nitrox_device *ndev = comp_dev->ndev;
+	struct nitrox_qp *qp = NULL;
+	int err;
+
+	NITROX_LOG(DEBUG, "queue %d\n", qp_id);
+	if (qp_id >= ndev->nr_queues) {
+		NITROX_LOG(ERR, "queue %u invalid, max queues supported %d\n",
+			   qp_id, ndev->nr_queues);
+		return -EINVAL;
+	}
+
+	if (dev->data->queue_pairs[qp_id]) {
+		err = nitrox_comp_queue_pair_release(dev, qp_id);
+		if (err)
+			return err;
+	}
+
+	qp = rte_zmalloc_socket("nitrox PMD qp", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (!qp) {
+		NITROX_LOG(ERR, "Failed to allocate nitrox qp\n");
+		return -ENOMEM;
+	}
+
+	qp->type = NITROX_QUEUE_ZIP;
+	qp->qno = qp_id;
+	err = nitrox_qp_setup(qp, ndev->bar_addr, dev->data->name,
+			      max_inflight_ops, ZIP_INSTR_SIZE,
+			      socket_id);
+	if (unlikely(err))
+		goto qp_setup_err;
+
+	dev->data->queue_pairs[qp_id] = qp;
+	NITROX_LOG(DEBUG, "queue %d setup done\n", qp_id);
+	return 0;
+
+qp_setup_err:
+	rte_free(qp);
+	return err;
 }
 
 static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev,
 					  uint16_t qp_id)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(qp_id);
-	return 0;
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+	struct nitrox_device *ndev = comp_dev->ndev;
+	struct nitrox_qp *qp;
+	int err;
+
+	NITROX_LOG(DEBUG, "queue %d\n", qp_id);
+	if (qp_id >= ndev->nr_queues) {
+		NITROX_LOG(ERR, "queue %u invalid, max queues supported %d\n",
+			   qp_id, ndev->nr_queues);
+		return -EINVAL;
+	}
+
+	qp = dev->data->queue_pairs[qp_id];
+	if (!qp) {
+		NITROX_LOG(DEBUG, "queue %u already freed\n", qp_id);
+		return 0;
+	}
+
+	if (!nitrox_qp_is_empty(qp)) {
+		NITROX_LOG(ERR, "queue %d not empty\n", qp_id);
+		return -EAGAIN;
+	}
+
+	dev->data->queue_pairs[qp_id] = NULL;
+	err = nitrox_qp_release(qp, ndev->bar_addr);
+	rte_free(qp);
+	NITROX_LOG(DEBUG, "queue %d release done\n", qp_id);
+	return err;
 }
 
 static int nitrox_comp_private_xform_create(struct rte_compressdev *dev,

From patchwork Fri Oct 27 14:55:32 2023
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 133511
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
To: Nagadheeraj Rottela, Fan Zhang, Ashish Gupta
Subject: [PATCH 6/7] compress/nitrox: add stateless request support
Date: Fri, 27 Oct 2023 20:25:32 +0530
Message-ID: <20231027145534.16803-7-rnagadheeraj@marvell.com>
In-Reply-To: <20231027145534.16803-1-rnagadheeraj@marvell.com>

Implement enqueue and dequeue burst operations for stateless request
support.

Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 drivers/compress/nitrox/nitrox_comp.c        | 139 ++--
 drivers/compress/nitrox/nitrox_comp_reqmgr.c | 789 +++++++++++++++++++
 drivers/compress/nitrox/nitrox_comp_reqmgr.h |  61 ++
 3 files changed, 937 insertions(+), 52 deletions(-)
 create mode 100644 drivers/compress/nitrox/nitrox_comp_reqmgr.h

diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c
index 2d6e6dcc17..cda0633929 100644
--- a/drivers/compress/nitrox/nitrox_comp.c
+++ b/drivers/compress/nitrox/nitrox_comp.c
@@ -10,11 +10,10 @@
 #include "nitrox_comp.h"
 #include "nitrox_device.h"
 #include "nitrox_logs.h"
+#include "nitrox_comp_reqmgr.h"
 #include "nitrox_qp.h"
 
 #define COMPRESSDEV_NAME_NITROX_PMD compress_nitrox
-#define NITROX_DECOMP_CTX_SIZE 2048
-#define NITROX_CONSTANTS_MAX_SEARCH_DEPTH 31744
 #define NITROX_COMP_LEVEL_LOWEST_START 1
 #define NITROX_COMP_LEVEL_LOWEST_END 2
 #define NITROX_COMP_LEVEL_LOWER_START 3
@@ -31,45 +30,6 @@ struct nitrox_comp_device {
 	struct rte_mempool *xform_pool;
 };
 
-enum nitrox_comp_op {
-	NITROX_COMP_OP_DECOMPRESS,
-	NITROX_COMP_OP_COMPRESS,
-};
-
-enum nitrox_comp_algo {
-	NITROX_COMP_ALGO_DEFLATE_DEFAULT,
-	NITROX_COMP_ALGO_DEFLATE_DYNHUFF,
-	NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF,
-	NITROX_COMP_ALGO_LZS,
-};
-
-enum nitrox_comp_level {
-	NITROX_COMP_LEVEL_BEST,
-	NITROX_COMP_LEVEL_MEDIUM,
-	NITROX_COMP_LEVEL_LOWER,
-	NITROX_COMP_LEVEL_LOWEST,
-};
-
-enum nitrox_chksum_type {
-	NITROX_CHKSUM_TYPE_CRC32,
-	NITROX_CHKSUM_TYPE_ADLER32,
-	NITROX_CHKSUM_TYPE_NONE,
-};
-
-struct nitrox_comp_xform {
-	enum nitrox_comp_op op;
-	enum nitrox_comp_algo algo;
-	enum nitrox_comp_level level;
-	enum nitrox_chksum_type chksum_type;
-};
-
-struct nitrox_comp_stream {
-	struct nitrox_comp_xform xform;
-	int window_size;
-	char context[NITROX_DECOMP_CTX_SIZE] __rte_aligned(8);
-	char history_window[NITROX_CONSTANTS_MAX_SEARCH_DEPTH] __rte_aligned(8);
-};
-
static const char nitrox_comp_drv_name[] = RTE_STR(COMPRESSDEV_NAME_NITROX_PMD); static const struct rte_driver nitrox_rte_comp_drv = { .name = nitrox_comp_drv_name, @@ -242,10 +202,17 @@ static int nitrox_comp_queue_pair_setup(struct rte_compressdev *dev, if (unlikely(err)) goto qp_setup_err; + qp->sr_mp = nitrox_comp_req_pool_create(dev, qp->count, qp_id, + socket_id); + if (unlikely(!qp->sr_mp)) + goto req_pool_err; + dev->data->queue_pairs[qp_id] = qp; NITROX_LOG(DEBUG, "queue %d setup done\n", qp_id); return 0; +req_pool_err: + nitrox_qp_release(qp, ndev->bar_addr); qp_setup_err: rte_free(qp); return err; @@ -279,6 +246,7 @@ static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev, dev->data->queue_pairs[qp_id] = NULL; err = nitrox_qp_release(qp, ndev->bar_addr); + nitrox_comp_req_pool_free(qp->sr_mp); rte_free(qp); NITROX_LOG(DEBUG, "queue %d release done\n", qp_id); return err; @@ -329,8 +297,10 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev, } level = xform->compress.level; - if (level >= NITROX_COMP_LEVEL_LOWEST_START && - level <= NITROX_COMP_LEVEL_LOWEST_END) { + if (level == RTE_COMP_LEVEL_PMD_DEFAULT) { + nitrox_xform->level = NITROX_COMP_LEVEL_MEDIUM; + } else if (level >= NITROX_COMP_LEVEL_LOWEST_START && + level <= NITROX_COMP_LEVEL_LOWEST_END) { nitrox_xform->level = NITROX_COMP_LEVEL_LOWEST; } else if (level >= NITROX_COMP_LEVEL_LOWER_START && level <= NITROX_COMP_LEVEL_LOWER_END) { @@ -401,24 +371,89 @@ static int nitrox_comp_private_xform_free(struct rte_compressdev *dev, return 0; } -static uint16_t nitrox_comp_dev_enq_burst(void *qp, +static int nitrox_enq_single_op(struct nitrox_qp *qp, struct rte_comp_op *op) +{ + struct nitrox_softreq *sr; + int err; + + if (unlikely(rte_mempool_get(qp->sr_mp, (void **)&sr))) + return -ENOMEM; + + err = nitrox_process_comp_req(op, sr); + if (unlikely(err)) { + rte_mempool_put(qp->sr_mp, sr); + return err; + } + + nitrox_qp_enqueue(qp, nitrox_comp_instr_addr(sr), sr); 
+	return 0;
+}
+
+static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair,
 					  struct rte_comp_op **ops,
 					  uint16_t nb_ops)
 {
-	RTE_SET_USED(qp);
-	RTE_SET_USED(ops);
-	RTE_SET_USED(nb_ops);
+	struct nitrox_qp *qp = queue_pair;
+	uint16_t free_slots = 0;
+	uint16_t cnt = 0;
+	bool err = false;
+
+	free_slots = nitrox_qp_free_count(qp);
+	if (nb_ops > free_slots)
+		nb_ops = free_slots;
+
+	for (cnt = 0; cnt < nb_ops; cnt++) {
+		if (unlikely(nitrox_enq_single_op(qp, ops[cnt]))) {
+			err = true;
+			break;
+		}
+	}
+
+	nitrox_ring_dbell(qp, cnt);
+	qp->stats.enqueued_count += cnt;
+	if (unlikely(err))
+		qp->stats.enqueue_err_count++;
+
+	return cnt;
+}
+
+static int nitrox_deq_single_op(struct nitrox_qp *qp,
+				struct rte_comp_op **op_ptr)
+{
+	struct nitrox_softreq *sr;
+	int err;
+
+	sr = nitrox_qp_get_softreq(qp);
+	err = nitrox_check_comp_req(sr, op_ptr);
+	if (err == -EAGAIN)
+		return err;
+
+	nitrox_qp_dequeue(qp);
+	rte_mempool_put(qp->sr_mp, sr);
+	if (err == 0)
+		qp->stats.dequeued_count++;
+	else
+		qp->stats.dequeue_err_count++;
+	return 0;
 }
 
-static uint16_t nitrox_comp_dev_deq_burst(void *qp,
+static uint16_t nitrox_comp_dev_deq_burst(void *queue_pair,
 					  struct rte_comp_op **ops,
 					  uint16_t nb_ops)
 {
-	RTE_SET_USED(qp);
-	RTE_SET_USED(ops);
-	RTE_SET_USED(nb_ops);
-	return 0;
+	struct nitrox_qp *qp = queue_pair;
+	uint16_t filled_slots = nitrox_qp_used_count(qp);
+	int cnt = 0;
+
+	if (nb_ops > filled_slots)
+		nb_ops = filled_slots;
+
+	for (cnt = 0; cnt < nb_ops; cnt++)
+		if (nitrox_deq_single_op(qp, &ops[cnt]))
+			break;
+
+	return cnt;
 }
 
 static struct rte_compressdev_ops nitrox_compressdev_ops = {
diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.c b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
index 5ff64fabce..5090af0ee1 100644
--- a/drivers/compress/nitrox/nitrox_comp_reqmgr.c
+++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
@@ -1,3 +1,792 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2023 Marvell International Ltd.
*/ + +#include +#include +#include + +#include "nitrox_comp_reqmgr.h" +#include "nitrox_logs.h" +#include "rte_comp.h" + +#define NITROX_ZIP_SGL_COUNT 16 +#define NITROX_ZIP_MAX_ZPTRS 2048 +#define NITROX_ZIP_MAX_DATASIZE ((1 << 24) - 1) +#define NITROX_ZIP_MAX_ONFSIZE 1024 +#define CMD_TIMEOUT 2 + +union nitrox_zip_instr_word0 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 8; + uint64_t tol : 24; + uint64_t raz1 : 5; + uint64_t exn : 3; + uint64_t raz2 : 1; + uint64_t exbits : 7; + uint64_t raz3 : 3; + uint64_t ca : 1; + uint64_t sf : 1; + uint64_t ss : 2; + uint64_t cc : 2; + uint64_t ef : 1; + uint64_t bf : 1; + uint64_t co : 1; + uint64_t raz4 : 1; + uint64_t ds : 1; + uint64_t dg : 1; + uint64_t hg : 1; +#else + uint64_t hg : 1; + uint64_t dg : 1; + uint64_t ds : 1; + uint64_t raz4 : 1; + uint64_t co : 1; + uint64_t bf : 1; + uint64_t ef : 1; + uint64_t cc : 2; + uint64_t ss : 2; + uint64_t sf : 1; + uint64_t ca : 1; + uint64_t raz3 : 3; + uint64_t exbits : 7; + uint64_t raz2 : 1; + uint64_t exn : 3; + uint64_t raz1 : 5; + uint64_t tol : 24; + uint64_t raz0 : 8; +#endif + + }; +}; + +union nitrox_zip_instr_word1 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t hl : 16; + uint64_t raz0 : 16; + uint64_t adlercrc32 : 32; +#else + uint64_t adlercrc32 : 32; + uint64_t raz0 : 16; + uint64_t hl : 16; +#endif + }; +}; + +union nitrox_zip_instr_word2 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 20; + uint64_t cptr : 44; +#else + uint64_t cptr : 44; + uint64_t raz0 : 20; +#endif + }; +}; + +union nitrox_zip_instr_word3 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 4; + uint64_t hlen : 16; + uint64_t hptr : 44; +#else + uint64_t hptr : 44; + uint64_t hlen : 16; + uint64_t raz0 : 4; +#endif + }; +}; + +union nitrox_zip_instr_word4 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 4; + uint64_t 
ilen : 16; + uint64_t iptr : 44; +#else + uint64_t iptr : 44; + uint64_t ilen : 16; + uint64_t raz0 : 4; +#endif + }; +}; + +union nitrox_zip_instr_word5 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 4; + uint64_t olen : 16; + uint64_t optr : 44; +#else + uint64_t optr : 44; + uint64_t olen : 16; + uint64_t raz0 : 4; +#endif + }; +}; + +union nitrox_zip_instr_word6 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 20; + uint64_t rptr : 44; +#else + uint64_t rptr : 44; + uint64_t raz0 : 20; +#endif + }; +}; + +union nitrox_zip_instr_word7 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t grp : 3; + uint64_t raz0 : 41; + uint64_t addr_msb: 20; +#else + uint64_t addr_msb: 20; + uint64_t raz0 : 41; + uint64_t grp : 3; +#endif + }; +}; + +struct nitrox_zip_instr { + union nitrox_zip_instr_word0 w0; + union nitrox_zip_instr_word1 w1; + union nitrox_zip_instr_word2 w2; + union nitrox_zip_instr_word3 w3; + union nitrox_zip_instr_word4 w4; + union nitrox_zip_instr_word5 w5; + union nitrox_zip_instr_word6 w6; + union nitrox_zip_instr_word7 w7; +}; + +union nitrox_zip_result_word0 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t crc32 : 32; + uint64_t adler32: 32; +#else + uint64_t adler32: 32; + uint64_t crc32 : 32; +#endif + }; +}; + +union nitrox_zip_result_word1 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t tbyteswritten : 32; + uint64_t tbytesread : 32; +#else + uint64_t tbytesread : 32; + uint64_t tbyteswritten : 32; +#endif + }; +}; + +union nitrox_zip_result_word2 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t tbits : 32; + uint64_t raz0 : 5; + uint64_t exn : 3; + uint64_t raz1 : 1; + uint64_t exbits : 7; + uint64_t raz2 : 7; + uint64_t ef : 1; + uint64_t compcode: 8; +#else + uint64_t compcode: 8; + uint64_t ef : 1; + uint64_t raz2 : 7; + uint64_t exbits : 7; + uint64_t raz1 : 
1; + uint64_t exn : 3; + uint64_t raz0 : 5; + uint64_t tbits : 32; +#endif + }; +}; + +struct nitrox_zip_result { + union nitrox_zip_result_word0 w0; + union nitrox_zip_result_word1 w1; + union nitrox_zip_result_word2 w2; +}; + +union nitrox_zip_zptr { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 3; + uint64_t le : 1; + uint64_t length : 16; + uint64_t addr : 44; +#else + uint64_t addr : 44; + uint64_t length : 16; + uint64_t le : 1; + uint64_t raz0 : 3; +#endif + } s; +}; + +struct nitrox_zip_iova_addr { + union { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t addr_msb: 20; + uint64_t addr : 44; +#else + uint64_t addr : 44; + uint64_t addr_msb: 20; +#endif + } zda; + + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t addr_msb: 20; + uint64_t addr : 41; + uint64_t align_8bytes: 3; +#else + uint64_t align_8bytes: 3; + uint64_t addr : 41; + uint64_t addr_msb: 20; +#endif + } z8a; + }; +}; + +enum nitrox_zip_comp_code { + NITROX_CC_NOTDONE = 0, + NITROX_CC_SUCCESS = 1, + NITROX_CC_DTRUNC = 2, + NITROX_CC_STOP = 3, + NITROX_CC_ITRUNK = 4, + NITROX_CC_RBLOCK = 5, + NITROX_CC_NLEN = 6, + NITROX_CC_BADCODE = 7, + NITROX_CC_BADCODE2 = 8, + NITROX_CC_ZERO_LEN = 9, + NITROX_CC_PARITY = 10, + NITROX_CC_FATAL = 11, + NITROX_CC_TIMEOUT = 12, + NITROX_CC_NPCI_ERR = 13, +}; + +struct nitrox_sgtable { + union nitrox_zip_zptr *sgl; + uint64_t addr_msb; + uint32_t total_bytes; + uint16_t nb_sgls; + uint16_t filled_sgls; +}; + +struct nitrox_softreq { + struct nitrox_zip_instr instr; + struct nitrox_zip_result zip_res __rte_aligned(8); + uint8_t decomp_threshold[NITROX_ZIP_MAX_ONFSIZE]; + struct rte_comp_op *op; + struct nitrox_sgtable src; + struct nitrox_sgtable dst; + struct nitrox_comp_xform xform; + uint64_t timeout; +}; + +static int create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl, + struct rte_mbuf *mbuf, uint32_t off, + uint32_t datalen, uint8_t extra_segs, + int socket_id) +{ + struct rte_mbuf 
*m; + union nitrox_zip_zptr *sgl; + struct nitrox_zip_iova_addr zip_addr; + uint16_t nb_segs; + uint16_t i; + uint32_t mlen; + + if (unlikely(datalen > NITROX_ZIP_MAX_DATASIZE)) { + NITROX_LOG(ERR, "Unsupported datalen %d, max supported %d\n", + datalen, NITROX_ZIP_MAX_DATASIZE); + return -ENOTSUP; + } + + nb_segs = mbuf->nb_segs + extra_segs; + for (m = mbuf; m && off > rte_pktmbuf_data_len(m); m = m->next) { + off -= rte_pktmbuf_data_len(m); + nb_segs--; + } + + if (unlikely(nb_segs > NITROX_ZIP_MAX_ZPTRS)) { + NITROX_LOG(ERR, "Mbuf has more segments %d than supported\n", + nb_segs); + return -ENOTSUP; + } + + if (unlikely(nb_segs > sgtbl->nb_sgls)) { + union nitrox_zip_zptr *sgl; + + NITROX_LOG(INFO, "Mbuf has more segs %d than allocated %d\n", + nb_segs, sgtbl->nb_sgls); + sgl = rte_realloc_socket(sgtbl->sgl, + sizeof(*sgtbl->sgl) * nb_segs, + 8, socket_id); + if (unlikely(!sgl)) { + NITROX_LOG(ERR, "Failed to expand sglist memory\n"); + return -ENOMEM; + } + + sgtbl->sgl = sgl; + sgtbl->nb_sgls = nb_segs; + } + + sgtbl->filled_sgls = 0; + sgtbl->total_bytes = 0; + sgl = sgtbl->sgl; + if (!m) + return 0; + + mlen = rte_pktmbuf_data_len(m) - off; + if (datalen <= mlen) + mlen = datalen; + + i = 0; + zip_addr.u64 = rte_pktmbuf_iova_offset(m, off); + sgl[i].s.addr = zip_addr.zda.addr; + sgl[i].s.length = mlen; + sgl[i].s.le = 0; + sgtbl->total_bytes += mlen; + sgtbl->addr_msb = zip_addr.zda.addr_msb; + datalen -= mlen; + i++; + for (m = m->next; m && datalen; m = m->next) { + mlen = rte_pktmbuf_data_len(m) < datalen ? 
+ rte_pktmbuf_data_len(m) : datalen; + zip_addr.u64 = rte_pktmbuf_iova(m); + if (unlikely(zip_addr.zda.addr_msb != sgtbl->addr_msb)) { + NITROX_LOG(ERR, "zip_ptrs have different msb addr\n"); + return -ENOTSUP; + } + + sgl[i].s.addr = zip_addr.zda.addr; + sgl[i].s.length = mlen; + sgl[i].s.le = 0; + sgtbl->total_bytes += mlen; + datalen -= mlen; + i++; + } + + sgtbl->filled_sgls = i; + return 0; +} + +static int softreq_init(struct nitrox_softreq *sr) +{ + struct rte_mempool *mp; + int err; + + mp = rte_mempool_from_obj(sr); + if (unlikely(mp == NULL)) + return -EINVAL; + + err = create_sglist_from_mbuf(&sr->src, sr->op->m_src, + sr->op->src.offset, + sr->op->src.length, 0, mp->socket_id); + if (unlikely(err)) + return err; + + err = create_sglist_from_mbuf(&sr->dst, sr->op->m_dst, + sr->op->dst.offset, + rte_pktmbuf_pkt_len(sr->op->m_dst) - sr->op->dst.offset, + (sr->xform.op == NITROX_COMP_OP_DECOMPRESS) ? 1 : 0, + mp->socket_id); + if (unlikely(err)) + return err; + + if (sr->xform.op == NITROX_COMP_OP_DECOMPRESS) { + struct nitrox_zip_iova_addr zip_addr; + int i; + + zip_addr.u64 = rte_mempool_virt2iova(sr) + + offsetof(struct nitrox_softreq, decomp_threshold); + i = sr->dst.filled_sgls; + sr->dst.sgl[i].s.addr = zip_addr.zda.addr; + sr->dst.sgl[i].s.length = NITROX_ZIP_MAX_ONFSIZE; + sr->dst.sgl[i].s.le = 0; + sr->dst.total_bytes += NITROX_ZIP_MAX_ONFSIZE; + sr->dst.filled_sgls++; + } + + return 0; +} + +static void nitrox_zip_instr_to_b64(struct nitrox_softreq *sr) +{ + struct nitrox_zip_instr *instr = &sr->instr; + int i; + + for (i = 0; instr->w0.dg && (i < instr->w4.ilen); i++) + sr->src.sgl[i].u64 = rte_cpu_to_be_64(sr->src.sgl[i].u64); + + for (i = 0; instr->w0.ds && (i < instr->w5.olen); i++) + sr->dst.sgl[i].u64 = rte_cpu_to_be_64(sr->dst.sgl[i].u64); + + instr->w0.u64 = rte_cpu_to_be_64(instr->w0.u64); + instr->w1.u64 = rte_cpu_to_be_64(instr->w1.u64); + instr->w2.u64 = rte_cpu_to_be_64(instr->w2.u64); + instr->w3.u64 = 
rte_cpu_to_be_64(instr->w3.u64); + instr->w4.u64 = rte_cpu_to_be_64(instr->w4.u64); + instr->w5.u64 = rte_cpu_to_be_64(instr->w5.u64); + instr->w6.u64 = rte_cpu_to_be_64(instr->w6.u64); + instr->w7.u64 = rte_cpu_to_be_64(instr->w7.u64); +} + +static int process_zip_stateless(struct nitrox_softreq *sr) +{ + struct nitrox_zip_instr *instr; + struct nitrox_comp_xform *xform; + struct nitrox_zip_iova_addr zip_addr; + uint64_t iptr_msb, optr_msb, rptr_msb; + int err; + + xform = sr->op->private_xform; + if (unlikely(xform == NULL)) { + NITROX_LOG(ERR, "Invalid stateless comp op\n"); + return -EINVAL; + } + + if (unlikely(xform->op == NITROX_COMP_OP_COMPRESS && + sr->op->flush_flag != RTE_COMP_FLUSH_FULL && + sr->op->flush_flag != RTE_COMP_FLUSH_FINAL)) { + NITROX_LOG(ERR, "Invalid flush flag %d in stateless op\n", + sr->op->flush_flag); + return -EINVAL; + } + + sr->xform = *xform; + err = softreq_init(sr); + if (unlikely(err)) + return err; + + instr = &sr->instr; + memset(instr, 0, sizeof(*instr)); + /* word 0 */ + instr->w0.tol = sr->dst.total_bytes; + instr->w0.exn = 0; + instr->w0.exbits = 0; + instr->w0.ca = 0; + if (xform->op == NITROX_COMP_OP_DECOMPRESS || + sr->op->flush_flag == RTE_COMP_FLUSH_FULL) + instr->w0.sf = 1; + else + instr->w0.sf = 0; + + instr->w0.ss = xform->level; + instr->w0.cc = xform->algo; + if (xform->op == NITROX_COMP_OP_COMPRESS && + sr->op->flush_flag == RTE_COMP_FLUSH_FINAL) + instr->w0.ef = 1; + else + instr->w0.ef = 0; + + instr->w0.bf = 1; + instr->w0.co = xform->op; + if (sr->dst.filled_sgls > 1) + instr->w0.ds = 1; + else + instr->w0.ds = 0; + + if (sr->src.filled_sgls > 1) + instr->w0.dg = 1; + else + instr->w0.dg = 0; + + instr->w0.hg = 0; + + /* word 1 */ + instr->w1.hl = 0; + if (sr->op->input_chksum != 0) + instr->w1.adlercrc32 = sr->op->input_chksum; + else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32) + instr->w1.adlercrc32 = 1; + else if (xform->chksum_type == NITROX_CHKSUM_TYPE_CRC32) + instr->w1.adlercrc32 = 0; + + 
/* word 2 */ + instr->w2.cptr = 0; + + /* word 3 */ + instr->w3.hlen = 0; + instr->w3.hptr = 0; + + /* word 4 */ + if (sr->src.filled_sgls == 1) { + instr->w4.ilen = sr->src.sgl[0].s.length; + instr->w4.iptr = sr->src.sgl[0].s.addr; + iptr_msb = sr->src.addr_msb; + } else { + zip_addr.u64 = rte_malloc_virt2iova(sr->src.sgl); + instr->w4.ilen = sr->src.filled_sgls; + instr->w4.iptr = zip_addr.zda.addr; + iptr_msb = zip_addr.zda.addr_msb; + } + + /* word 5 */ + if (sr->dst.filled_sgls == 1) { + instr->w5.olen = sr->dst.sgl[0].s.length; + instr->w5.optr = sr->dst.sgl[0].s.addr; + optr_msb = sr->dst.addr_msb; + } else { + zip_addr.u64 = rte_malloc_virt2iova(sr->dst.sgl); + instr->w5.olen = sr->dst.filled_sgls; + instr->w5.optr = zip_addr.zda.addr; + optr_msb = zip_addr.zda.addr_msb; + } + + /* word 6 */ + memset(&sr->zip_res, 0, sizeof(sr->zip_res)); + zip_addr.u64 = rte_mempool_virt2iova(sr) + + offsetof(struct nitrox_softreq, zip_res); + instr->w6.rptr = zip_addr.zda.addr; + rptr_msb = zip_addr.zda.addr_msb; + + if (iptr_msb != optr_msb || iptr_msb != rptr_msb) { + NITROX_LOG(ERR, "addr_msb is not same for all addresses\n"); + return -ENOTSUP; + } + + /* word 7 */ + instr->w7.addr_msb = iptr_msb; + instr->w7.grp = 0; + + nitrox_zip_instr_to_b64(sr); + return 0; +} + +static int process_zip_request(struct nitrox_softreq *sr) +{ + int err; + + switch (sr->op->op_type) { + case RTE_COMP_OP_STATELESS: + err = process_zip_stateless(sr); + break; + default: + err = -EINVAL; + break; + } + + return err; +} + +int +nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr) +{ + int err; + + sr->op = op; + err = process_zip_request(sr); + if (unlikely(err)) + goto err_exit; + + sr->timeout = rte_get_timer_cycles() + CMD_TIMEOUT * rte_get_timer_hz(); + return 0; +err_exit: + if (err == -ENOMEM) + sr->op->status = RTE_COMP_OP_STATUS_ERROR; + else + sr->op->status = RTE_COMP_OP_STATUS_INVALID_ARGS; + + return err; +} + +static struct nitrox_zip_result 
zip_result_to_cpu64(struct nitrox_zip_result *r) +{ + struct nitrox_zip_result out_res; + + out_res.w2.u64 = rte_be_to_cpu_64(r->w2.u64); + out_res.w1.u64 = rte_be_to_cpu_64(r->w1.u64); + out_res.w0.u64 = rte_be_to_cpu_64(r->w0.u64); + return out_res; +} + +int +nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op) +{ + struct nitrox_zip_result zip_res; + int output_unused_bytes; + int err = 0; + + zip_res = zip_result_to_cpu64(&sr->zip_res); + if (zip_res.w2.compcode == NITROX_CC_NOTDONE) { + if (rte_get_timer_cycles() >= sr->timeout) { + NITROX_LOG(ERR, "Op timedout\n"); + sr->op->status = RTE_COMP_OP_STATUS_ERROR; + err = -ETIMEDOUT; + goto exit; + } else { + return -EAGAIN; + } + } + + if (unlikely(zip_res.w2.compcode != NITROX_CC_SUCCESS)) { + struct rte_comp_op *op = sr->op; + + NITROX_LOG(ERR, "Op dequeue error 0x%x\n", + zip_res.w2.compcode); + if (zip_res.w2.compcode == NITROX_CC_STOP || + zip_res.w2.compcode == NITROX_CC_DTRUNC) + op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED; + else + op->status = RTE_COMP_OP_STATUS_ERROR; + + op->consumed = 0; + op->produced = 0; + err = -EFAULT; + goto exit; + } + + output_unused_bytes = sr->dst.total_bytes - zip_res.w1.tbyteswritten; + if (unlikely(sr->xform.op == NITROX_COMP_OP_DECOMPRESS && + output_unused_bytes < NITROX_ZIP_MAX_ONFSIZE)) { + NITROX_LOG(ERR, "TOL %d, Total bytes written %d\n", + sr->dst.total_bytes, zip_res.w1.tbyteswritten); + sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED; + sr->op->consumed = 0; + sr->op->produced = sr->dst.total_bytes - NITROX_ZIP_MAX_ONFSIZE; + err = -EIO; + goto exit; + } + + if (sr->xform.op == NITROX_COMP_OP_COMPRESS && + sr->op->flush_flag == RTE_COMP_FLUSH_FINAL && + zip_res.w2.exn) { + uint32_t datalen = zip_res.w1.tbyteswritten; + uint32_t off = sr->op->dst.offset; + struct rte_mbuf *m = sr->op->m_dst; + uint32_t mlen; + uint8_t *last_byte; + + for (; m && off > rte_pktmbuf_data_len(m); m = m->next) + off -= 
rte_pktmbuf_data_len(m); + + mlen = rte_pktmbuf_data_len(m) - off; + for (; m && (datalen > mlen); m = m->next) + datalen -= mlen; + + last_byte = rte_pktmbuf_mtod_offset(m, uint8_t *, datalen - 1); + *last_byte = zip_res.w2.exbits & 0xFF; + } + + sr->op->consumed = zip_res.w1.tbytesread; + sr->op->produced = zip_res.w1.tbyteswritten; + if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_CRC32) + sr->op->output_chksum = zip_res.w0.crc32; + else if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_ADLER32) + sr->op->output_chksum = zip_res.w0.adler32; + + sr->op->status = RTE_COMP_OP_STATUS_SUCCESS; + err = 0; +exit: + *op = sr->op; + return err; +} + +void * +nitrox_comp_instr_addr(struct nitrox_softreq *sr) +{ + return &sr->instr; +} + +static void req_pool_obj_free(struct rte_mempool *mp, void *opaque, void *obj, + unsigned int obj_idx) +{ + struct nitrox_softreq *sr; + + RTE_SET_USED(mp); + RTE_SET_USED(opaque); + RTE_SET_USED(obj_idx); + sr = obj; + rte_free(sr->src.sgl); + sr->src.sgl = NULL; + rte_free(sr->dst.sgl); + sr->dst.sgl = NULL; +} + +void +nitrox_comp_req_pool_free(struct rte_mempool *mp) +{ + rte_mempool_obj_iter(mp, req_pool_obj_free, NULL); + rte_mempool_free(mp); +} + +static void req_pool_obj_init(struct rte_mempool *mp, void *arg, void *obj, + unsigned int obj_idx) +{ + struct nitrox_softreq *sr; + int *err = arg; + + RTE_SET_USED(mp); + RTE_SET_USED(obj_idx); + sr = obj; + sr->src.sgl = rte_zmalloc_socket(NULL, + sizeof(*sr->src.sgl) * NITROX_ZIP_SGL_COUNT, + 8, mp->socket_id); + sr->dst.sgl = rte_zmalloc_socket(NULL, + sizeof(*sr->dst.sgl) * NITROX_ZIP_SGL_COUNT, + 8, mp->socket_id); + if (sr->src.sgl == NULL || sr->dst.sgl == NULL) { + NITROX_LOG(ERR, "Failed to allocate zip_sgl memory\n"); + *err = -ENOMEM; + } + + sr->src.nb_sgls = NITROX_ZIP_SGL_COUNT; + sr->src.filled_sgls = 0; + sr->dst.nb_sgls = NITROX_ZIP_SGL_COUNT; + sr->dst.filled_sgls = 0; +} + +struct rte_mempool * +nitrox_comp_req_pool_create(struct rte_compressdev *dev, uint32_t nobjs, 
+ uint16_t qp_id, int socket_id) +{ + char softreq_pool_name[RTE_RING_NAMESIZE]; + struct rte_mempool *mp; + int err = 0; + + snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_sr_%d", + dev->data->name, qp_id); + mp = rte_mempool_create(softreq_pool_name, + RTE_ALIGN_MUL_CEIL(nobjs, 64), + sizeof(struct nitrox_softreq), + 64, 0, NULL, NULL, req_pool_obj_init, &err, + socket_id, 0); + if (unlikely(!mp)) + NITROX_LOG(ERR, "Failed to create req pool, qid %d, err %d\n", + qp_id, rte_errno); + + if (unlikely(err)) { + nitrox_comp_req_pool_free(mp); + return NULL; + } + + return mp; +} diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.h b/drivers/compress/nitrox/nitrox_comp_reqmgr.h new file mode 100644 index 0000000000..16be92e813 --- /dev/null +++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell International Ltd. + */ + +#ifndef _NITROX_COMP_REQMGR_H_ +#define _NITROX_COMP_REQMGR_H_ + +#define NITROX_DECOMP_CTX_SIZE 2048 +#define NITROX_CONSTANTS_MAX_SEARCH_DEPTH 31744 + +struct nitrox_qp; +struct nitrox_softreq; + +enum nitrox_comp_op { + NITROX_COMP_OP_DECOMPRESS, + NITROX_COMP_OP_COMPRESS, +}; + +enum nitrox_comp_algo { + NITROX_COMP_ALGO_DEFLATE_DEFAULT, + NITROX_COMP_ALGO_DEFLATE_DYNHUFF, + NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF, + NITROX_COMP_ALGO_LZS, +}; + +enum nitrox_comp_level { + NITROX_COMP_LEVEL_BEST, + NITROX_COMP_LEVEL_MEDIUM, + NITROX_COMP_LEVEL_LOWER, + NITROX_COMP_LEVEL_LOWEST, +}; + +enum nitrox_chksum_type { + NITROX_CHKSUM_TYPE_CRC32, + NITROX_CHKSUM_TYPE_ADLER32, + NITROX_CHKSUM_TYPE_NONE, +}; + +struct nitrox_comp_xform { + enum nitrox_comp_op op; + enum nitrox_comp_algo algo; + enum nitrox_comp_level level; + enum nitrox_chksum_type chksum_type; +}; + +struct nitrox_comp_stream { + struct nitrox_comp_xform xform; + int window_size; + char context[NITROX_DECOMP_CTX_SIZE] __rte_aligned(8); + char history_window[NITROX_CONSTANTS_MAX_SEARCH_DEPTH] 
__rte_aligned(8);
+};
+
+int nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr);
+int nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op);
+void *nitrox_comp_instr_addr(struct nitrox_softreq *sr);
+struct rte_mempool *nitrox_comp_req_pool_create(struct rte_compressdev *cdev,
+						uint32_t nobjs, uint16_t qp_id,
+						int socket_id);
+void nitrox_comp_req_pool_free(struct rte_mempool *mp);
+
+#endif /* _NITROX_COMP_REQMGR_H_ */

From patchwork Fri Oct 27 14:55:33 2023
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 133512
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
To: Nagadheeraj Rottela, Fan Zhang, Ashish Gupta
Subject: [PATCH 7/7] compress/nitrox: add stateful request support
Date: Fri, 27 Oct 2023 20:25:33 +0530
Message-ID: <20231027145534.16803-8-rnagadheeraj@marvell.com>
In-Reply-To: <20231027145534.16803-1-rnagadheeraj@marvell.com>
References: <20231027145534.16803-1-rnagadheeraj@marvell.com>

Implement enqueue and dequeue burst operations for stateful request support.
Signed-off-by: Nagadheeraj Rottela --- drivers/compress/nitrox/nitrox_comp.c | 187 +++++-- drivers/compress/nitrox/nitrox_comp_reqmgr.c | 555 ++++++++++++++++--- drivers/compress/nitrox/nitrox_comp_reqmgr.h | 16 +- 3 files changed, 628 insertions(+), 130 deletions(-) diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c index cda0633929..ea7a43e432 100644 --- a/drivers/compress/nitrox/nitrox_comp.c +++ b/drivers/compress/nitrox/nitrox_comp.c @@ -14,6 +14,8 @@ #include "nitrox_qp.h" #define COMPRESSDEV_NAME_NITROX_PMD compress_nitrox +#define NITROX_COMP_WINDOW_SIZE_MIN 1 +#define NITROX_COMP_WINDOW_SIZE_MAX 15 #define NITROX_COMP_LEVEL_LOWEST_START 1 #define NITROX_COMP_LEVEL_LOWEST_END 2 #define NITROX_COMP_LEVEL_LOWER_START 3 @@ -49,10 +51,12 @@ static const struct rte_compressdev_capabilities RTE_COMP_FF_SHAREABLE_PRIV_XFORM | RTE_COMP_FF_OOP_SGL_IN_SGL_OUT | RTE_COMP_FF_OOP_SGL_IN_LB_OUT | - RTE_COMP_FF_OOP_LB_IN_SGL_OUT, + RTE_COMP_FF_OOP_LB_IN_SGL_OUT | + RTE_COMP_FF_STATEFUL_COMPRESSION | + RTE_COMP_FF_STATEFUL_DECOMPRESSION, .window_size = { - .min = 1, - .max = 15, + .min = NITROX_COMP_WINDOW_SIZE_MIN, + .max = NITROX_COMP_WINDOW_SIZE_MAX, .increment = 1 }, }, @@ -64,6 +68,8 @@ static int nitrox_comp_dev_configure(struct rte_compressdev *dev, { struct nitrox_comp_device *comp_dev = dev->data->dev_private; struct nitrox_device *ndev = comp_dev->ndev; + uint32_t xform_cnt; + char name[RTE_MEMPOOL_NAMESIZE]; if (config->nb_queue_pairs > ndev->nr_queues) { NITROX_LOG(ERR, "Invalid queue pairs, max supported %d\n", @@ -71,21 +77,21 @@ static int nitrox_comp_dev_configure(struct rte_compressdev *dev, return -EINVAL; } - if (config->max_nb_priv_xforms) { - char xform_name[RTE_MEMPOOL_NAMESIZE]; - - snprintf(xform_name, sizeof(xform_name), "%s_xform", - dev->data->name); - comp_dev->xform_pool = rte_mempool_create(xform_name, - config->max_nb_priv_xforms, - sizeof(struct nitrox_comp_xform), - 0, 0, NULL, NULL, NULL, NULL, - 
config->socket_id, 0); - if (comp_dev->xform_pool == NULL) { - NITROX_LOG(ERR, "Failed to create xform pool, err %d\n", - rte_errno); - return -rte_errno; - } + xform_cnt = config->max_nb_priv_xforms + config->max_nb_streams; + if (unlikely(xform_cnt == 0)) { + NITROX_LOG(ERR, "Invalid configuration with 0 xforms\n"); + return -EINVAL; + } + + snprintf(name, sizeof(name), "%s_xform", dev->data->name); + comp_dev->xform_pool = rte_mempool_create(name, + xform_cnt, sizeof(struct nitrox_comp_xform), + 0, 0, NULL, NULL, NULL, NULL, + config->socket_id, 0); + if (comp_dev->xform_pool == NULL) { + NITROX_LOG(ERR, "Failed to create xform pool, err %d\n", + rte_errno); + return -rte_errno; } return 0; @@ -257,7 +263,7 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev, void **private_xform) { struct nitrox_comp_device *comp_dev = dev->data->dev_private; - struct nitrox_comp_xform *nitrox_xform; + struct nitrox_comp_xform *nxform; enum rte_comp_checksum_type chksum_type; int ret; @@ -271,12 +277,13 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev, return -ENOMEM; } - nitrox_xform = (struct nitrox_comp_xform *)*private_xform; + nxform = (struct nitrox_comp_xform *)*private_xform; + memset(nxform, 0, sizeof(*nxform)); if (xform->type == RTE_COMP_COMPRESS) { enum rte_comp_huffman algo; int level; - nitrox_xform->op = NITROX_COMP_OP_COMPRESS; + nxform->op = NITROX_COMP_OP_COMPRESS; if (xform->compress.algo != RTE_COMP_ALGO_DEFLATE) { NITROX_LOG(ERR, "Only deflate is supported\n"); ret = -ENOTSUP; @@ -285,11 +292,11 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev, algo = xform->compress.deflate.huffman; if (algo == RTE_COMP_HUFFMAN_DEFAULT) - nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT; + nxform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT; else if (algo == RTE_COMP_HUFFMAN_FIXED) - nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF; + nxform->algo = NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF; else 
 	if (algo == RTE_COMP_HUFFMAN_DYNAMIC)
-		nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_DYNHUFF;
+		nxform->algo = NITROX_COMP_ALGO_DEFLATE_DYNHUFF;
 	else {
 		NITROX_LOG(ERR, "Invalid deflate algorithm %d\n", algo);
 		ret = -EINVAL;
@@ -298,19 +305,19 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev,
 
 	level = xform->compress.level;
 	if (level == RTE_COMP_LEVEL_PMD_DEFAULT) {
-		nitrox_xform->level = NITROX_COMP_LEVEL_MEDIUM;
+		nxform->level = NITROX_COMP_LEVEL_MEDIUM;
 	} else if (level >= NITROX_COMP_LEVEL_LOWEST_START &&
 		   level <= NITROX_COMP_LEVEL_LOWEST_END) {
-		nitrox_xform->level = NITROX_COMP_LEVEL_LOWEST;
+		nxform->level = NITROX_COMP_LEVEL_LOWEST;
 	} else if (level >= NITROX_COMP_LEVEL_LOWER_START &&
 		   level <= NITROX_COMP_LEVEL_LOWER_END) {
-		nitrox_xform->level = NITROX_COMP_LEVEL_LOWER;
+		nxform->level = NITROX_COMP_LEVEL_LOWER;
 	} else if (level >= NITROX_COMP_LEVEL_MEDIUM_START &&
 		   level <= NITROX_COMP_LEVEL_MEDIUM_END) {
-		nitrox_xform->level = NITROX_COMP_LEVEL_MEDIUM;
+		nxform->level = NITROX_COMP_LEVEL_MEDIUM;
 	} else if (level >= NITROX_COMP_LEVEL_BEST_START &&
 		   level <= NITROX_COMP_LEVEL_BEST_END) {
-		nitrox_xform->level = NITROX_COMP_LEVEL_BEST;
+		nxform->level = NITROX_COMP_LEVEL_BEST;
 	} else {
 		NITROX_LOG(ERR, "Unsupported compression level %d\n",
 			   xform->compress.level);
@@ -320,15 +327,15 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev,
 
 		chksum_type = xform->compress.chksum;
 	} else if (xform->type == RTE_COMP_DECOMPRESS) {
-		nitrox_xform->op = NITROX_COMP_OP_DECOMPRESS;
+		nxform->op = NITROX_COMP_OP_DECOMPRESS;
 		if (xform->decompress.algo != RTE_COMP_ALGO_DEFLATE) {
 			NITROX_LOG(ERR, "Only deflate is supported\n");
 			ret = -ENOTSUP;
 			goto err_exit;
 		}
 
-		nitrox_xform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT;
-		nitrox_xform->level = NITROX_COMP_LEVEL_BEST;
+		nxform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT;
+		nxform->level = NITROX_COMP_LEVEL_BEST;
 		chksum_type = xform->decompress.chksum;
 	} else {
 		ret = -EINVAL;
@@ -336,11 +343,11 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev,
 	}
 
 	if (chksum_type == RTE_COMP_CHECKSUM_NONE)
-		nitrox_xform->chksum_type = NITROX_CHKSUM_TYPE_NONE;
+		nxform->chksum_type = NITROX_CHKSUM_TYPE_NONE;
 	else if (chksum_type == RTE_COMP_CHECKSUM_CRC32)
-		nitrox_xform->chksum_type = NITROX_CHKSUM_TYPE_CRC32;
+		nxform->chksum_type = NITROX_CHKSUM_TYPE_CRC32;
 	else if (chksum_type == RTE_COMP_CHECKSUM_ADLER32)
-		nitrox_xform->chksum_type = NITROX_CHKSUM_TYPE_ADLER32;
+		nxform->chksum_type = NITROX_CHKSUM_TYPE_ADLER32;
 	else {
 		NITROX_LOG(ERR, "Unsupported checksum type %d\n",
 			   chksum_type);
@@ -348,29 +355,104 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev,
 		goto err_exit;
 	}
 
+	nxform->context = NULL;
+	nxform->history_window = NULL;
+	nxform->window_size = 0;
+	nxform->hlen = 0;
+	nxform->exn = 0;
+	nxform->exbits = 0;
+	nxform->bf = true;
 	return 0;
 err_exit:
-	memset(nitrox_xform, 0, sizeof(*nitrox_xform));
-	rte_mempool_put(comp_dev->xform_pool, nitrox_xform);
+	memset(nxform, 0, sizeof(*nxform));
+	rte_mempool_put(comp_dev->xform_pool, nxform);
 	return ret;
 }
 
 static int nitrox_comp_private_xform_free(struct rte_compressdev *dev,
 					  void *private_xform)
 {
-	struct nitrox_comp_xform *nitrox_xform = private_xform;
-	struct rte_mempool *mp = rte_mempool_from_obj(nitrox_xform);
+	struct nitrox_comp_xform *nxform = private_xform;
+	struct rte_mempool *mp = rte_mempool_from_obj(nxform);
 
 	RTE_SET_USED(dev);
-	if (nitrox_xform == NULL)
+	if (unlikely(nxform == NULL))
 		return -EINVAL;
 
-	memset(nitrox_xform, 0, sizeof(*nitrox_xform));
-	mp = rte_mempool_from_obj(nitrox_xform);
-	rte_mempool_put(mp, nitrox_xform);
+	memset(nxform, 0, sizeof(*nxform));
+	mp = rte_mempool_from_obj(nxform);
+	rte_mempool_put(mp, nxform);
 	return 0;
 }
 
+static int nitrox_comp_stream_free(struct rte_compressdev *dev, void *stream)
+{
+	struct nitrox_comp_xform *nxform = stream;
+
+	if (unlikely(nxform == NULL))
+		return -EINVAL;
+
+	rte_free(nxform->history_window);
+	nxform->history_window = NULL;
+	rte_free(nxform->context);
+	nxform->context = NULL;
+	return nitrox_comp_private_xform_free(dev, stream);
+}
+
+static int nitrox_comp_stream_create(struct rte_compressdev *dev,
+		const struct rte_comp_xform *xform, void **stream)
+{
+	int err;
+	struct nitrox_comp_xform *nxform;
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+
+	err = nitrox_comp_private_xform_create(dev, xform, stream);
+	if (unlikely(err))
+		return err;
+
+	nxform = *stream;
+	if (xform->type == RTE_COMP_COMPRESS) {
+		uint8_t window_size = xform->compress.window_size;
+
+		if (unlikely(window_size < NITROX_COMP_WINDOW_SIZE_MIN ||
+			     window_size > NITROX_COMP_WINDOW_SIZE_MAX)) {
+			NITROX_LOG(ERR, "Invalid window size %d\n",
+				   window_size);
+			return -EINVAL;
+		}
+
+		if (window_size == NITROX_COMP_WINDOW_SIZE_MAX)
+			nxform->window_size = NITROX_CONSTANTS_MAX_SEARCH_DEPTH;
+		else
+			nxform->window_size = RTE_BIT32(window_size);
+	} else {
+		nxform->window_size = NITROX_DEFAULT_DEFLATE_SEARCH_DEPTH;
+	}
+
+	nxform->history_window = rte_zmalloc_socket(NULL, nxform->window_size,
+					8, comp_dev->xform_pool->socket_id);
+	if (unlikely(nxform->history_window == NULL)) {
+		err = -ENOMEM;
+		goto err_exit;
+	}
+
+	if (xform->type == RTE_COMP_COMPRESS)
+		return 0;
+
+	nxform->context = rte_zmalloc_socket(NULL,
+					NITROX_DECOMP_CTX_SIZE, 8,
+					comp_dev->xform_pool->socket_id);
+	if (unlikely(nxform->context == NULL)) {
+		err = -ENOMEM;
+		goto err_exit;
+	}
+
+	return 0;
+err_exit:
+	nitrox_comp_stream_free(dev, *stream);
+	return err;
+}
+
 static int nitrox_enq_single_op(struct nitrox_qp *qp, struct rte_comp_op *op)
 {
 	struct nitrox_softreq *sr;
@@ -385,8 +467,12 @@ static int nitrox_enq_single_op(struct nitrox_qp *qp, struct rte_comp_op *op)
 		return err;
 	}
 
-	nitrox_qp_enqueue(qp, nitrox_comp_instr_addr(sr), sr);
-	return 0;
+	if (op->status == RTE_COMP_OP_STATUS_SUCCESS)
+		err = nitrox_qp_enqueue_sr(qp, sr);
+	else
+		nitrox_qp_enqueue(qp, nitrox_comp_instr_addr(sr), sr);
+
+	return err;
 }
 
 static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair,
@@ -396,6 +482,7 @@ static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair,
 	struct nitrox_qp *qp = queue_pair;
 	uint16_t free_slots = 0;
 	uint16_t cnt = 0;
+	uint16_t dbcnt = 0;
 	bool err = false;
 
 	free_slots = nitrox_qp_free_count(qp);
@@ -407,9 +494,12 @@ static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair,
 			err = true;
 			break;
 		}
+
+		if (ops[cnt]->status != RTE_COMP_OP_STATUS_SUCCESS)
+			dbcnt++;
 	}
 
-	nitrox_ring_dbell(qp, cnt);
+	nitrox_ring_dbell(qp, dbcnt);
 	qp->stats.enqueued_count += cnt;
 	if (unlikely(err))
 		qp->stats.enqueue_err_count++;
@@ -472,8 +562,8 @@ static struct rte_compressdev_ops nitrox_compressdev_ops = {
 	.private_xform_create = nitrox_comp_private_xform_create,
 	.private_xform_free = nitrox_comp_private_xform_free,
 
-	.stream_create = NULL,
-	.stream_free = NULL
+	.stream_create = nitrox_comp_stream_create,
+	.stream_free = nitrox_comp_stream_free,
 };
 
 int
@@ -531,4 +621,3 @@ nitrox_comp_pmd_destroy(struct nitrox_device *ndev)
 	ndev->comp_dev = NULL;
 	return 0;
 }
-
diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.c b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
index 5090af0ee1..93d53f6f3d 100644
--- a/drivers/compress/nitrox/nitrox_comp_reqmgr.c
+++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
@@ -5,11 +5,13 @@
 #include
 #include
 #include
+#include
 
 #include "nitrox_comp_reqmgr.h"
 #include "nitrox_logs.h"
 #include "rte_comp.h"
 
+#define NITROX_INSTR_BUFFER_DEBUG 0
 #define NITROX_ZIP_SGL_COUNT 16
 #define NITROX_ZIP_MAX_ZPTRS 2048
 #define NITROX_ZIP_MAX_DATASIZE ((1 << 24) - 1)
@@ -307,10 +309,222 @@ struct nitrox_softreq {
 	struct rte_comp_op *op;
 	struct nitrox_sgtable src;
 	struct nitrox_sgtable dst;
-	struct nitrox_comp_xform xform;
 	uint64_t timeout;
 };
 
+#if NITROX_INSTR_BUFFER_DEBUG
+static void nitrox_dump_databuf(const char *name, struct rte_mbuf *m,
+				uint32_t off, uint32_t datalen)
+{
+	uint32_t mlen;
+
+	if (!rte_log_can_log(nitrox_logtype, RTE_LOG_DEBUG))
+		return;
+
+	for (; m && off > rte_pktmbuf_data_len(m); m = m->next)
+		off -= rte_pktmbuf_data_len(m);
+
+	mlen = rte_pktmbuf_data_len(m) - off;
+	if (datalen <= mlen)
+		mlen = datalen;
+
+	rte_hexdump(stderr, name, rte_pktmbuf_mtod_offset(m, char *, off),
+		    mlen);
+	for (m = m->next; m && datalen; m = m->next) {
+		mlen = rte_pktmbuf_data_len(m) < datalen ?
+			rte_pktmbuf_data_len(m) : datalen;
+		rte_hexdump(stderr, name, rte_pktmbuf_mtod(m, char *), mlen);
+	}
+
+	fprintf(stderr, "\n");
+}
+
+static void nitrox_dump_zip_instr(struct nitrox_zip_instr *instr,
+				  union nitrox_zip_zptr *hptr_arr,
+				  union nitrox_zip_zptr *iptr_arr,
+				  union nitrox_zip_zptr *optr_arr)
+{
+	uint64_t value;
+	int i = 0;
+
+	if (!rte_log_can_log(nitrox_logtype, RTE_LOG_DEBUG))
+		return;
+
+	fprintf(stderr, "\nZIP instruction..(%p)\n", instr);
+	fprintf(stderr, "\tWORD0 = 0x%016"PRIx64"\n", instr->w0.u64);
+	fprintf(stderr, "\t\tTOL = %d\n", instr->w0.tol);
+	fprintf(stderr, "\t\tEXNUM = %d\n", instr->w0.exn);
+	fprintf(stderr, "\t\tEXBITS = %x\n", instr->w0.exbits);
+	fprintf(stderr, "\t\tCA = %d\n", instr->w0.ca);
+	fprintf(stderr, "\t\tSF = %d\n", instr->w0.sf);
+	fprintf(stderr, "\t\tSS = %d\n", instr->w0.ss);
+	fprintf(stderr, "\t\tCC = %d\n", instr->w0.cc);
+	fprintf(stderr, "\t\tEF = %d\n", instr->w0.ef);
+	fprintf(stderr, "\t\tBF = %d\n", instr->w0.bf);
+	fprintf(stderr, "\t\tCO = %d\n", instr->w0.co);
+	fprintf(stderr, "\t\tDS = %d\n", instr->w0.ds);
+	fprintf(stderr, "\t\tDG = %d\n", instr->w0.dg);
+	fprintf(stderr, "\t\tHG = %d\n", instr->w0.hg);
+	fprintf(stderr, "\n");
+
+	fprintf(stderr, "\tWORD1 = 0x%016"PRIx64"\n", instr->w1.u64);
+	fprintf(stderr, "\t\tHL = %d\n", instr->w1.hl);
+	fprintf(stderr, "\t\tADLERCRC32 = 0x%08x\n", instr->w1.adlercrc32);
+	fprintf(stderr, "\n");
+
+	value = instr->w2.cptr;
+	fprintf(stderr, "\tWORD2 = 0x%016"PRIx64"\n", instr->w2.u64);
+	fprintf(stderr, "\t\tCPTR = 0x%11"PRIx64"\n", value);
+	fprintf(stderr, "\n");
+
+	value = instr->w3.hptr;
+	fprintf(stderr, "\tWORD3 = 0x%016"PRIx64"\n", instr->w3.u64);
+	fprintf(stderr, "\t\tHLEN = %d\n", instr->w3.hlen);
+	fprintf(stderr, "\t\tHPTR = 0x%11"PRIx64"\n", value);
+
+	if (instr->w0.hg && hptr_arr) {
+		for (i = 0; i < instr->w3.hlen; i++) {
+			value = hptr_arr[i].s.addr;
+			fprintf(stderr, "\t\t\tZPTR[%d] : Length = %d Addr = 0x%11"PRIx64"\n",
+				i, hptr_arr[i].s.length, value);
+		}
+	}
+
+	fprintf(stderr, "\n");
+
+	value = instr->w4.iptr;
+	fprintf(stderr, "\tWORD4 = 0x%016"PRIx64"\n", instr->w4.u64);
+	fprintf(stderr, "\t\tILEN = %d\n", instr->w4.ilen);
+	fprintf(stderr, "\t\tIPTR = 0x%11"PRIx64"\n", value);
+	if (instr->w0.dg && iptr_arr) {
+		for (i = 0; i < instr->w4.ilen; i++) {
+			value = iptr_arr[i].s.addr;
+			fprintf(stderr, "\t\t\tZPTR[%d] : Length = %d Addr = 0x%11"PRIx64"\n",
+				i, iptr_arr[i].s.length, value);
+		}
+	}
+
+	fprintf(stderr, "\n");
+
+	value = instr->w5.optr;
+	fprintf(stderr, "\tWORD5 = 0x%016"PRIx64"\n", instr->w5.u64);
+	fprintf(stderr, "\t\t OLEN = %d\n", instr->w5.olen);
+	fprintf(stderr, "\t\t OPTR = 0x%11"PRIx64"\n", value);
+	if (instr->w0.ds && optr_arr) {
+		for (i = 0; i < instr->w5.olen; i++) {
+			value = optr_arr[i].s.addr;
+			fprintf(stderr, "\t\t\tZPTR[%d] : Length = %d Addr = 0x%11"PRIx64"\n",
+				i, optr_arr[i].s.length, value);
+		}
+	}
+
+	fprintf(stderr, "\n");
+
+	value = instr->w6.rptr;
+	fprintf(stderr, "\tWORD6 = 0x%016"PRIx64"\n", instr->w6.u64);
+	fprintf(stderr, "\t\tRPTR = 0x%11"PRIx64"\n", value);
+	fprintf(stderr, "\n");
+
+	fprintf(stderr, "\tWORD7 = 0x%016"PRIx64"\n", instr->w7.u64);
+	fprintf(stderr, "\t\tGRP = %x\n", instr->w7.grp);
+	fprintf(stderr, "\t\tADDR_MSB = 0x%5x\n", instr->w7.addr_msb);
+	fprintf(stderr, "\n");
+}
+
+static void nitrox_dump_zip_result(struct nitrox_zip_instr *instr,
+				   struct nitrox_zip_result *result)
+{
+	if (!rte_log_can_log(nitrox_logtype, RTE_LOG_DEBUG))
+		return;
+
+	fprintf(stderr, "ZIP result..(instr %p)\n", instr);
+	fprintf(stderr, "\tWORD0 = 0x%016"PRIx64"\n", result->w0.u64);
+	fprintf(stderr, "\t\tCRC32 = 0x%8x\n", result->w0.crc32);
+	fprintf(stderr, "\t\tADLER32 = 0x%8x\n", result->w0.adler32);
+	fprintf(stderr, "\n");
+
+	fprintf(stderr, "\tWORD1 = 0x%016"PRIx64"\n", result->w1.u64);
+	fprintf(stderr, "\t\tTBYTESWRITTEN = %u\n", result->w1.tbyteswritten);
+	fprintf(stderr, "\t\tTBYTESREAD = %u\n", result->w1.tbytesread);
+	fprintf(stderr, "\n");
+
+	fprintf(stderr, "\tWORD2 = 0x%016"PRIx64"\n", result->w2.u64);
+	fprintf(stderr, "\t\tTBITS = %u\n", result->w2.tbits);
+	fprintf(stderr, "\t\tEXN = %d\n", result->w2.exn);
+	fprintf(stderr, "\t\tEBITS = %x\n", result->w2.exbits);
+	fprintf(stderr, "\t\tEF = %d\n", result->w2.ef);
+	fprintf(stderr, "\t\tCOMPCODE = 0x%2x\n", result->w2.compcode);
+	fprintf(stderr, "\n");
+}
+#else
+#define nitrox_dump_databuf(name, m, off, datalen)
+#define nitrox_dump_zip_instr(instr, hptr_arr, iptr_arr, optr_arr)
+#define nitrox_dump_zip_result(instr, result)
+#endif
+
+static int handle_zero_length_compression(struct nitrox_softreq *sr,
+					  struct nitrox_comp_xform *xform)
+{
+	union {
+		uint32_t num;
+		uint8_t bytes[4];
+	} fblk;
+	uint32_t dstlen, rlen;
+	struct rte_mbuf *m;
+	uint32_t off;
+	uint32_t mlen;
+	uint32_t i = 0;
+	uint8_t *ptr;
+
+	fblk.num = xform->exn ? (xform->exbits & 0x7F) : 0;
+	fblk.num |= (0x3 << xform->exn);
+	memset(&sr->zip_res, 0, sizeof(sr->zip_res));
+	sr->zip_res.w1.tbytesread = xform->hlen;
+	sr->zip_res.w1.tbyteswritten = 2;
+	sr->zip_res.w2.ef = 1;
+	if (xform->exn == 7)
+		sr->zip_res.w1.tbyteswritten++;
+
+	rlen = sr->zip_res.w1.tbyteswritten;
+	dstlen = rte_pktmbuf_pkt_len(sr->op->m_dst) - sr->op->dst.offset;
+	if (unlikely(dstlen < rlen))
+		return -EIO;
+
+	off = sr->op->dst.offset;
+	for (m = sr->op->m_dst; m && off > rte_pktmbuf_data_len(m); m = m->next)
+		off -= rte_pktmbuf_data_len(m);
+
+	if (unlikely(!m))
+		return -EIO;
+
+	mlen = rte_pktmbuf_data_len(m) - off;
+	if (rlen <= mlen)
+		mlen = rlen;
+
+	ptr = rte_pktmbuf_mtod_offset(m, uint8_t *, off);
+	memcpy(ptr, fblk.bytes, mlen);
+	i += mlen;
+	rlen -= mlen;
+	for (m = m->next; m && rlen; m = m->next) {
+		mlen = rte_pktmbuf_data_len(m) < rlen ?
+			rte_pktmbuf_data_len(m) : rlen;
+		ptr = rte_pktmbuf_mtod(m, uint8_t *);
+		memcpy(ptr, &fblk.bytes[i], mlen);
+		i += mlen;
+		rlen -= mlen;
+	}
+
+	if (unlikely(rlen != 0))
+		return -EIO;
+
+	sr->zip_res.w2.compcode = NITROX_CC_SUCCESS;
+	sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
+	sr->zip_res.w0.u64 = rte_cpu_to_be_64(sr->zip_res.w0.u64);
+	sr->zip_res.w1.u64 = rte_cpu_to_be_64(sr->zip_res.w1.u64);
+	sr->zip_res.w2.u64 = rte_cpu_to_be_64(sr->zip_res.w2.u64);
+	return 0;
+}
+
 static int create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl,
 				   struct rte_mbuf *mbuf, uint32_t off,
 				   uint32_t datalen, uint8_t extra_segs,
@@ -398,10 +612,12 @@ static int create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl,
 	return 0;
 }
 
-static int softreq_init(struct nitrox_softreq *sr)
+static int softreq_init(struct nitrox_softreq *sr,
+			struct nitrox_comp_xform *xform)
 {
 	struct rte_mempool *mp;
 	int err;
+	bool need_decomp_threshold;
 
 	mp = rte_mempool_from_obj(sr);
 	if (unlikely(mp == NULL))
@@ -413,15 +629,17 @@ static int softreq_init(struct nitrox_softreq *sr)
 	if (unlikely(err))
 		return err;
 
+	need_decomp_threshold = (sr->op->op_type == RTE_COMP_OP_STATELESS &&
+				 xform->op == NITROX_COMP_OP_DECOMPRESS);
 	err = create_sglist_from_mbuf(&sr->dst, sr->op->m_dst,
 			sr->op->dst.offset,
 			rte_pktmbuf_pkt_len(sr->op->m_dst) - sr->op->dst.offset,
-			(sr->xform.op == NITROX_COMP_OP_DECOMPRESS) ? 1 : 0,
+			need_decomp_threshold ? 1 : 0,
 			mp->socket_id);
 	if (unlikely(err))
 		return err;
 
-	if (sr->xform.op == NITROX_COMP_OP_DECOMPRESS) {
+	if (need_decomp_threshold) {
 		struct nitrox_zip_iova_addr zip_addr;
 		int i;
 
@@ -459,12 +677,12 @@ static void nitrox_zip_instr_to_b64(struct nitrox_softreq *sr)
 	instr->w7.u64 = rte_cpu_to_be_64(instr->w7.u64);
 }
 
-static int process_zip_stateless(struct nitrox_softreq *sr)
+static int process_zip_request(struct nitrox_softreq *sr)
 {
 	struct nitrox_zip_instr *instr;
 	struct nitrox_comp_xform *xform;
 	struct nitrox_zip_iova_addr zip_addr;
-	uint64_t iptr_msb, optr_msb, rptr_msb;
+	uint64_t iptr_msb, optr_msb, rptr_msb, cptr_msb, hptr_msb;
 	int err;
 
 	xform = sr->op->private_xform;
@@ -473,7 +691,14 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 		return -EINVAL;
 	}
 
-	if (unlikely(xform->op == NITROX_COMP_OP_COMPRESS &&
+	if (unlikely(sr->op->op_type == RTE_COMP_OP_STATEFUL &&
+		     xform->op == NITROX_COMP_OP_COMPRESS &&
+		     sr->op->flush_flag == RTE_COMP_FLUSH_FINAL &&
+		     sr->op->src.length == 0))
+		return handle_zero_length_compression(sr, xform);
+
+	if (unlikely(sr->op->op_type == RTE_COMP_OP_STATELESS &&
+		     xform->op == NITROX_COMP_OP_COMPRESS &&
 		     sr->op->flush_flag != RTE_COMP_FLUSH_FULL &&
 		     sr->op->flush_flag != RTE_COMP_FLUSH_FINAL)) {
 		NITROX_LOG(ERR, "Invalid flush flag %d in stateless op\n",
@@ -481,8 +706,7 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 		return -EINVAL;
 	}
 
-	sr->xform = *xform;
-	err = softreq_init(sr);
+	err = softreq_init(sr, xform);
 	if (unlikely(err))
 		return err;
 
@@ -490,10 +714,11 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	memset(instr, 0, sizeof(*instr));
 	/* word 0 */
 	instr->w0.tol = sr->dst.total_bytes;
-	instr->w0.exn = 0;
-	instr->w0.exbits = 0;
+	instr->w0.exn = xform->exn;
+	instr->w0.exbits = xform->exbits;
 	instr->w0.ca = 0;
 	if (xform->op == NITROX_COMP_OP_DECOMPRESS ||
+	    sr->op->flush_flag == RTE_COMP_FLUSH_SYNC ||
 	    sr->op->flush_flag == RTE_COMP_FLUSH_FULL)
 		instr->w0.sf = 1;
 	else
@@ -501,13 +726,12 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 
 	instr->w0.ss = xform->level;
 	instr->w0.cc = xform->algo;
-	if (xform->op == NITROX_COMP_OP_COMPRESS &&
-	    sr->op->flush_flag == RTE_COMP_FLUSH_FINAL)
+	if (sr->op->flush_flag == RTE_COMP_FLUSH_FINAL)
 		instr->w0.ef = 1;
 	else
 		instr->w0.ef = 0;
 
-	instr->w0.bf = 1;
+	instr->w0.bf = xform->bf;
 	instr->w0.co = xform->op;
 	if (sr->dst.filled_sgls > 1)
 		instr->w0.ds = 1;
@@ -522,8 +746,11 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	instr->w0.hg = 0;
 
 	/* word 1 */
-	instr->w1.hl = 0;
-	if (sr->op->input_chksum != 0)
+	instr->w1.hl = xform->hlen;
+	if (sr->op->op_type == RTE_COMP_OP_STATEFUL && !xform->bf)
+		instr->w1.adlercrc32 = xform->chksum;
+	else if (sr->op->op_type == RTE_COMP_OP_STATELESS &&
+		 sr->op->input_chksum != 0)
 		instr->w1.adlercrc32 = sr->op->input_chksum;
 	else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
 		instr->w1.adlercrc32 = 1;
@@ -531,11 +758,23 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 		instr->w1.adlercrc32 = 0;
 
 	/* word 2 */
-	instr->w2.cptr = 0;
+	if (xform->context)
+		zip_addr.u64 = rte_malloc_virt2iova(xform->context);
+	else
+		zip_addr.u64 = 0;
+
+	instr->w2.cptr = zip_addr.zda.addr;
+	cptr_msb = zip_addr.zda.addr_msb;
 
 	/* word 3 */
-	instr->w3.hlen = 0;
-	instr->w3.hptr = 0;
+	instr->w3.hlen = xform->hlen;
+	if (xform->history_window)
+		zip_addr.u64 = rte_malloc_virt2iova(xform->history_window);
+	else
+		zip_addr.u64 = 0;
+
+	instr->w3.hptr = zip_addr.zda.addr;
+	hptr_msb = zip_addr.zda.addr_msb;
 
 	/* word 4 */
 	if (sr->src.filled_sgls == 1) {
@@ -568,7 +807,9 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	instr->w6.rptr = zip_addr.zda.addr;
 	rptr_msb = zip_addr.zda.addr_msb;
 
-	if (iptr_msb != optr_msb || iptr_msb != rptr_msb) {
+	if (unlikely(iptr_msb != optr_msb || iptr_msb != rptr_msb ||
+		     (xform->history_window && (iptr_msb != hptr_msb)) ||
+		     (xform->context && (iptr_msb != cptr_msb)))) {
 		NITROX_LOG(ERR, "addr_msb is not same for all addresses\n");
 		return -ENOTSUP;
 	}
@@ -577,32 +818,20 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	instr->w7.addr_msb = iptr_msb;
 	instr->w7.grp = 0;
 
+	nitrox_dump_zip_instr(instr, NULL, sr->src.sgl, sr->dst.sgl);
+	nitrox_dump_databuf("IN", sr->op->m_src, sr->op->src.offset,
+			    sr->op->src.length);
 	nitrox_zip_instr_to_b64(sr);
 	return 0;
 }
 
-static int process_zip_request(struct nitrox_softreq *sr)
-{
-	int err;
-
-	switch (sr->op->op_type) {
-	case RTE_COMP_OP_STATELESS:
-		err = process_zip_stateless(sr);
-		break;
-	default:
-		err = -EINVAL;
-		break;
-	}
-
-	return err;
-}
-
 int nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr)
 {
 	int err;
 
 	sr->op = op;
+	sr->op->status = RTE_COMP_OP_STATUS_NOT_PROCESSED;
 	err = process_zip_request(sr);
 	if (unlikely(err))
 		goto err_exit;
@@ -628,55 +857,239 @@ static struct nitrox_zip_result zip_result_to_cpu64(struct nitrox_zip_result *r)
 	return out_res;
 }
 
-int
-nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op)
+static int post_process_zip_stateless(struct nitrox_softreq *sr,
+				      struct nitrox_comp_xform *xform,
+				      struct nitrox_zip_result *zip_res)
 {
-	struct nitrox_zip_result zip_res;
 	int output_unused_bytes;
-	int err = 0;
-
-	zip_res = zip_result_to_cpu64(&sr->zip_res);
-	if (zip_res.w2.compcode == NITROX_CC_NOTDONE) {
-		if (rte_get_timer_cycles() >= sr->timeout) {
-			NITROX_LOG(ERR, "Op timedout\n");
-			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
-			err = -ETIMEDOUT;
-			goto exit;
-		} else {
-			return -EAGAIN;
-		}
-	}
 
-	if (unlikely(zip_res.w2.compcode != NITROX_CC_SUCCESS)) {
+	if (unlikely(zip_res->w2.compcode != NITROX_CC_SUCCESS)) {
 		struct rte_comp_op *op = sr->op;
 
-		NITROX_LOG(ERR, "Op dequeue error 0x%x\n",
-			   zip_res.w2.compcode);
-		if (zip_res.w2.compcode == NITROX_CC_STOP ||
-		    zip_res.w2.compcode == NITROX_CC_DTRUNC)
+		NITROX_LOG(ERR, "Dequeue error 0x%x\n",
+			   zip_res->w2.compcode);
+		if (zip_res->w2.compcode == NITROX_CC_STOP ||
+		    zip_res->w2.compcode == NITROX_CC_DTRUNC)
 			op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 		else
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 
 		op->consumed = 0;
 		op->produced = 0;
-		err = -EFAULT;
-		goto exit;
+		return -EFAULT;
 	}
 
-	output_unused_bytes = sr->dst.total_bytes - zip_res.w1.tbyteswritten;
-	if (unlikely(sr->xform.op == NITROX_COMP_OP_DECOMPRESS &&
+	output_unused_bytes = sr->dst.total_bytes - zip_res->w1.tbyteswritten;
+	if (unlikely(xform->op == NITROX_COMP_OP_DECOMPRESS &&
 		     output_unused_bytes < NITROX_ZIP_MAX_ONFSIZE)) {
 		NITROX_LOG(ERR, "TOL %d, Total bytes written %d\n",
-			   sr->dst.total_bytes, zip_res.w1.tbyteswritten);
+			   sr->dst.total_bytes, zip_res->w1.tbyteswritten);
 		sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 		sr->op->consumed = 0;
 		sr->op->produced = sr->dst.total_bytes - NITROX_ZIP_MAX_ONFSIZE;
-		err = -EIO;
-		goto exit;
+		return -EIO;
+	}
+
+	if (xform->chksum_type == NITROX_CHKSUM_TYPE_CRC32)
+		sr->op->output_chksum = zip_res->w0.crc32;
+	else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
+		sr->op->output_chksum = zip_res->w0.adler32;
+
+	sr->op->consumed = RTE_MIN(sr->op->src.length,
+				   (uint32_t)zip_res->w1.tbytesread);
+	sr->op->produced = zip_res->w1.tbyteswritten;
+	sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
+	return 0;
+}
+
+static int update_history(struct rte_mbuf *mbuf, uint32_t off, uint16_t datalen,
+			  uint8_t *dst)
+{
+	struct rte_mbuf *m;
+	uint32_t mlen;
+	uint16_t copied = 0;
+
+	for (m = mbuf; m && off > rte_pktmbuf_data_len(m); m = m->next)
+		off -= rte_pktmbuf_data_len(m);
+
+	if (unlikely(!m)) {
+		NITROX_LOG(ERR, "Failed to update history. Invalid mbuf\n");
+		return -EINVAL;
+	}
+
+	mlen = rte_pktmbuf_data_len(m) - off;
+	if (datalen <= mlen)
+		mlen = datalen;
+
+	memcpy(&dst[copied], rte_pktmbuf_mtod_offset(m, char *, off), mlen);
+	copied += mlen;
+	datalen -= mlen;
+	for (m = m->next; m && datalen; m = m->next) {
+		mlen = rte_pktmbuf_data_len(m) < datalen ?
+			rte_pktmbuf_data_len(m) : datalen;
+		memcpy(&dst[copied], rte_pktmbuf_mtod(m, char *), mlen);
+		copied += mlen;
+		datalen -= mlen;
+	}
+
+	if (unlikely(datalen != 0)) {
+		NITROX_LOG(ERR, "Failed to update history. Invalid datalen\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void reset_nitrox_xform(struct nitrox_comp_xform *xform)
+{
+	xform->hlen = 0;
+	xform->exn = 0;
+	xform->exbits = 0;
+	xform->bf = true;
+}
+
+static int post_process_zip_stateful(struct nitrox_softreq *sr,
+				     struct nitrox_comp_xform *xform,
+				     struct nitrox_zip_result *zip_res)
+{
+	uint32_t bytesread = 0;
+	uint32_t chksum = 0;
+
+	if (unlikely(zip_res->w2.compcode == NITROX_CC_DTRUNC)) {
+		sr->op->consumed = 0;
+		sr->op->produced = 0;
+		xform->hlen = 0;
+		sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE;
+		NITROX_LOG(ERR, "Dequeue compress DTRUNC error\n");
+		return 0;
+	} else if (unlikely(zip_res->w2.compcode == NITROX_CC_STOP)) {
+		sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE;
+		NITROX_LOG(NOTICE, "Dequeue decompress dynamic STOP\n");
+	} else if (zip_res->w2.compcode == NITROX_CC_SUCCESS) {
+		sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
+	} else {
+		xform->hlen = 0;
+		xform->exn = 0;
+		xform->exbits = 0;
+		xform->bf = true;
+		sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+		NITROX_LOG(ERR, "Dequeue error 0x%x\n",
+			   zip_res->w2.compcode);
+		return -EFAULT;
+	}
+
+	if (xform->op == NITROX_COMP_OP_COMPRESS) {
+		if (zip_res->w1.tbytesread < xform->hlen) {
+			NITROX_LOG(ERR, "Invalid bytesread\n");
+			reset_nitrox_xform(xform);
+			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+			return -EFAULT;
+		}
+
+		bytesread = zip_res->w1.tbytesread - xform->hlen;
+	} else {
+		bytesread = RTE_MIN(sr->op->src.length,
+				    (uint32_t)zip_res->w1.tbytesread);
+	}
+
+	if ((xform->op == NITROX_COMP_OP_COMPRESS &&
+	     (sr->op->flush_flag == RTE_COMP_FLUSH_NONE ||
+	      sr->op->flush_flag == RTE_COMP_FLUSH_SYNC)) ||
+	    (xform->op == NITROX_COMP_OP_DECOMPRESS && !zip_res->w2.ef)) {
+		struct rte_mbuf *mbuf;
+		uint32_t pktlen, m_off;
+		int err;
+
+		if (xform->op == NITROX_COMP_OP_COMPRESS) {
+			mbuf = sr->op->m_src;
+			pktlen = bytesread;
+			m_off = sr->op->src.offset;
+		} else {
+			mbuf = sr->op->m_dst;
+			pktlen = zip_res->w1.tbyteswritten;
+			m_off = sr->op->dst.offset;
+		}
+
+		if (pktlen >= xform->window_size) {
+			m_off += pktlen - xform->window_size;
+			err = update_history(mbuf, m_off, xform->window_size,
+					     xform->history_window);
+			xform->hlen = xform->window_size;
+		} else if ((xform->hlen + pktlen) <= xform->window_size) {
+			err = update_history(mbuf, m_off, pktlen,
+					     &xform->history_window[xform->hlen]);
+			xform->hlen += pktlen;
+		} else {
+			uint16_t shift_off, shift_len;
+
+			shift_off = pktlen + xform->hlen - xform->window_size;
+			shift_len = xform->hlen - shift_off;
+			memmove(xform->history_window,
+				&xform->history_window[shift_off],
+				shift_len);
+			err = update_history(mbuf, m_off, pktlen,
+					     &xform->history_window[shift_len]);
+			xform->hlen = xform->window_size;
+
+		}
+
+		if (unlikely(err)) {
+			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+			return err;
+		}
+
+		if (xform->op == NITROX_COMP_OP_COMPRESS) {
+			xform->exn = zip_res->w2.exn;
+			xform->exbits = zip_res->w2.exbits;
+		}
+
+		xform->bf = false;
+	} else {
+		reset_nitrox_xform(xform);
 	}
 
-	if (sr->xform.op == NITROX_COMP_OP_COMPRESS &&
+	if (xform->chksum_type == NITROX_CHKSUM_TYPE_CRC32)
+		chksum = zip_res->w0.crc32;
+	else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
+		chksum = zip_res->w0.adler32;
+
+	if (xform->bf)
+		sr->op->output_chksum = chksum;
+	else
+		xform->chksum = chksum;
+
+	sr->op->consumed = bytesread;
+	sr->op->produced = zip_res->w1.tbyteswritten;
+	return 0;
+}
+
+int
+nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op)
+{
+	struct nitrox_zip_result zip_res;
+	struct nitrox_comp_xform *xform;
+	int err = 0;
+
+	zip_res = zip_result_to_cpu64(&sr->zip_res);
+	if (zip_res.w2.compcode == NITROX_CC_NOTDONE) {
+		if (rte_get_timer_cycles() >= sr->timeout) {
+			NITROX_LOG(ERR, "Op timedout\n");
+			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+			err = -ETIMEDOUT;
+			goto exit;
+		} else {
+			return -EAGAIN;
+		}
+	}
+
+	xform = sr->op->private_xform;
+	if (sr->op->op_type == RTE_COMP_OP_STATELESS)
+		err = post_process_zip_stateless(sr, xform, &zip_res);
+	else
+		err = post_process_zip_stateful(sr, xform, &zip_res);
+
+	if (sr->op->status == RTE_COMP_OP_STATUS_SUCCESS &&
+	    xform->op == NITROX_COMP_OP_COMPRESS &&
 	    sr->op->flush_flag == RTE_COMP_FLUSH_FINAL &&
 	    zip_res.w2.exn) {
 		uint32_t datalen = zip_res.w1.tbyteswritten;
@@ -696,17 +1109,11 @@ nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op)
 		*last_byte = zip_res.w2.exbits & 0xFF;
 	}
 
-	sr->op->consumed = zip_res.w1.tbytesread;
-	sr->op->produced = zip_res.w1.tbyteswritten;
-	if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_CRC32)
-		sr->op->output_chksum = zip_res.w0.crc32;
-	else if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
-		sr->op->output_chksum = zip_res.w0.adler32;
-
-	sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
-	err = 0;
 exit:
 	*op = sr->op;
+	nitrox_dump_zip_result(&sr->instr, &zip_res);
+	nitrox_dump_databuf("OUT after", sr->op->m_dst, sr->op->dst.offset,
+			    sr->op->produced);
 	return err;
 }
 
diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.h b/drivers/compress/nitrox/nitrox_comp_reqmgr.h
index 16be92e813..1b374e4796 100644
--- a/drivers/compress/nitrox/nitrox_comp_reqmgr.h
+++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.h
@@ -7,6 +7,7 @@
 
 #define NITROX_DECOMP_CTX_SIZE 2048
 #define NITROX_CONSTANTS_MAX_SEARCH_DEPTH 31744
+#define NITROX_DEFAULT_DEFLATE_SEARCH_DEPTH 32768
 
 struct nitrox_qp;
 struct nitrox_softreq;
@@ -41,13 +42,14 @@ struct nitrox_comp_xform {
 	enum nitrox_comp_algo algo;
 	enum nitrox_comp_level level;
 	enum nitrox_chksum_type chksum_type;
-};
-
-struct nitrox_comp_stream {
-	struct nitrox_comp_xform xform;
-	int window_size;
-	char context[NITROX_DECOMP_CTX_SIZE] __rte_aligned(8);
-	char history_window[NITROX_CONSTANTS_MAX_SEARCH_DEPTH] __rte_aligned(8);
+	uint8_t *context;
+	uint8_t *history_window;
+	uint32_t chksum;
+	uint16_t window_size;
+	uint16_t hlen;
+	uint8_t exn;
+	uint8_t exbits;
+	bool bf;
 };
 
 int nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr);