From patchwork Fri Mar 1 16:25:47 2024
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 137683
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
Subject: [PATCH v4 1/7] crypto/nitrox: move common code
Date: Fri, 1 Mar 2024 21:55:47 +0530
Message-ID: <20240301162553.30523-2-rnagadheeraj@marvell.com>
In-Reply-To: <20240301162553.30523-1-rnagadheeraj@marvell.com>
References: <20240301162553.30523-1-rnagadheeraj@marvell.com>
List-Id: DPDK patches and discussions

A new compressdev Nitrox PMD will be added in the next few patches. This patch moves some of the common code, which is shared across the Nitrox crypto and compress drivers, to the drivers/common/nitrox folder.
Signed-off-by: Nagadheeraj Rottela --- MAINTAINERS | 1 + drivers/common/nitrox/meson.build | 18 ++++++++++++++++++ drivers/{crypto => common}/nitrox/nitrox_csr.h | 0 .../{crypto => common}/nitrox/nitrox_device.c | 14 ++++++++++++++ .../{crypto => common}/nitrox/nitrox_device.h | 1 - drivers/{crypto => common}/nitrox/nitrox_hal.c | 0 drivers/{crypto => common}/nitrox/nitrox_hal.h | 0 .../{crypto => common}/nitrox/nitrox_logs.c | 0 .../{crypto => common}/nitrox/nitrox_logs.h | 0 drivers/{crypto => common}/nitrox/nitrox_qp.c | 2 +- drivers/{crypto => common}/nitrox/nitrox_qp.h | 11 ++++++++++- drivers/common/nitrox/version.map | 9 +++++++++ drivers/crypto/nitrox/meson.build | 11 +++++------ drivers/meson.build | 1 + 14 files changed, 59 insertions(+), 9 deletions(-) create mode 100644 drivers/common/nitrox/meson.build rename drivers/{crypto => common}/nitrox/nitrox_csr.h (100%) rename drivers/{crypto => common}/nitrox/nitrox_device.c (92%) rename drivers/{crypto => common}/nitrox/nitrox_device.h (94%) rename drivers/{crypto => common}/nitrox/nitrox_hal.c (100%) rename drivers/{crypto => common}/nitrox/nitrox_hal.h (100%) rename drivers/{crypto => common}/nitrox/nitrox_logs.c (100%) rename drivers/{crypto => common}/nitrox/nitrox_logs.h (100%) rename drivers/{crypto => common}/nitrox/nitrox_qp.c (99%) rename drivers/{crypto => common}/nitrox/nitrox_qp.h (91%) create mode 100644 drivers/common/nitrox/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 962c359cdd..d6abebc55c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1142,6 +1142,7 @@ Marvell Nitrox M: Nagadheeraj Rottela M: Srikanth Jampala F: drivers/crypto/nitrox/ +F: drivers/common/nitrox/ F: doc/guides/cryptodevs/nitrox.rst F: doc/guides/cryptodevs/features/nitrox.ini diff --git a/drivers/common/nitrox/meson.build b/drivers/common/nitrox/meson.build new file mode 100644 index 0000000000..99fadbbfc9 --- /dev/null +++ b/drivers/common/nitrox/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: 
BSD-3-Clause +# Copyright (c) 2024 Marvell. + +if not is_linux + build = false + reason = 'only supported on Linux' +endif + +deps += ['bus_pci'] + +sources += files( + 'nitrox_device.c', + 'nitrox_hal.c', + 'nitrox_logs.c', + 'nitrox_qp.c', +) + +includes += include_directories('../../crypto/nitrox') diff --git a/drivers/crypto/nitrox/nitrox_csr.h b/drivers/common/nitrox/nitrox_csr.h similarity index 100% rename from drivers/crypto/nitrox/nitrox_csr.h rename to drivers/common/nitrox/nitrox_csr.h diff --git a/drivers/crypto/nitrox/nitrox_device.c b/drivers/common/nitrox/nitrox_device.c similarity index 92% rename from drivers/crypto/nitrox/nitrox_device.c rename to drivers/common/nitrox/nitrox_device.c index 5b319dd681..b2f638ec8a 100644 --- a/drivers/crypto/nitrox/nitrox_device.c +++ b/drivers/common/nitrox/nitrox_device.c @@ -120,5 +120,19 @@ static struct rte_pci_driver nitrox_pmd = { .remove = nitrox_pci_remove, }; +__rte_weak int +nitrox_sym_pmd_create(struct nitrox_device *ndev) +{ + RTE_SET_USED(ndev); + return 0; +} + +__rte_weak int +nitrox_sym_pmd_destroy(struct nitrox_device *ndev) +{ + RTE_SET_USED(ndev); + return 0; +} + RTE_PMD_REGISTER_PCI(nitrox, nitrox_pmd); RTE_PMD_REGISTER_PCI_TABLE(nitrox, pci_id_nitrox_map); diff --git a/drivers/crypto/nitrox/nitrox_device.h b/drivers/common/nitrox/nitrox_device.h similarity index 94% rename from drivers/crypto/nitrox/nitrox_device.h rename to drivers/common/nitrox/nitrox_device.h index 1ff7c59b63..b7c7ffd772 100644 --- a/drivers/crypto/nitrox/nitrox_device.h +++ b/drivers/common/nitrox/nitrox_device.h @@ -6,7 +6,6 @@ #define _NITROX_DEVICE_H_ #include -#include struct nitrox_sym_device; diff --git a/drivers/crypto/nitrox/nitrox_hal.c b/drivers/common/nitrox/nitrox_hal.c similarity index 100% rename from drivers/crypto/nitrox/nitrox_hal.c rename to drivers/common/nitrox/nitrox_hal.c diff --git a/drivers/crypto/nitrox/nitrox_hal.h b/drivers/common/nitrox/nitrox_hal.h similarity index 100% rename from 
drivers/crypto/nitrox/nitrox_hal.h rename to drivers/common/nitrox/nitrox_hal.h diff --git a/drivers/crypto/nitrox/nitrox_logs.c b/drivers/common/nitrox/nitrox_logs.c similarity index 100% rename from drivers/crypto/nitrox/nitrox_logs.c rename to drivers/common/nitrox/nitrox_logs.c diff --git a/drivers/crypto/nitrox/nitrox_logs.h b/drivers/common/nitrox/nitrox_logs.h similarity index 100% rename from drivers/crypto/nitrox/nitrox_logs.h rename to drivers/common/nitrox/nitrox_logs.h diff --git a/drivers/crypto/nitrox/nitrox_qp.c b/drivers/common/nitrox/nitrox_qp.c similarity index 99% rename from drivers/crypto/nitrox/nitrox_qp.c rename to drivers/common/nitrox/nitrox_qp.c index 5e85ccbd51..79a26f0024 100644 --- a/drivers/crypto/nitrox/nitrox_qp.c +++ b/drivers/common/nitrox/nitrox_qp.c @@ -2,7 +2,7 @@ * Copyright(C) 2019 Marvell International Ltd. */ -#include +#include #include #include "nitrox_qp.h" diff --git a/drivers/crypto/nitrox/nitrox_qp.h b/drivers/common/nitrox/nitrox_qp.h similarity index 91% rename from drivers/crypto/nitrox/nitrox_qp.h rename to drivers/common/nitrox/nitrox_qp.h index d42d53f92b..23dffd1268 100644 --- a/drivers/crypto/nitrox/nitrox_qp.h +++ b/drivers/common/nitrox/nitrox_qp.h @@ -22,6 +22,13 @@ struct rid { struct nitrox_softreq *sr; }; +struct nitrox_qp_stats { + uint64_t enqueued_count; + uint64_t dequeued_count; + uint64_t enqueue_err_count; + uint64_t dequeue_err_count; +}; + struct nitrox_qp { struct command_queue cmdq; struct rid *ridq; @@ -29,7 +36,7 @@ struct nitrox_qp { uint32_t head; uint32_t tail; struct rte_mempool *sr_mp; - struct rte_cryptodev_stats stats; + struct nitrox_qp_stats stats; uint16_t qno; rte_atomic16_t pending_count; }; @@ -96,9 +103,11 @@ nitrox_qp_dequeue(struct nitrox_qp *qp) rte_atomic16_dec(&qp->pending_count); } +__rte_internal int nitrox_qp_setup(struct nitrox_qp *qp, uint8_t *bar_addr, const char *dev_name, uint32_t nb_descriptors, uint8_t inst_size, int socket_id); +__rte_internal int 
nitrox_qp_release(struct nitrox_qp *qp, uint8_t *bar_addr); #endif /* _NITROX_QP_H_ */ diff --git a/drivers/common/nitrox/version.map b/drivers/common/nitrox/version.map new file mode 100644 index 0000000000..43295171e4 --- /dev/null +++ b/drivers/common/nitrox/version.map @@ -0,0 +1,9 @@ +INTERNAL { + global: + + nitrox_logtype; + nitrox_qp_release; + nitrox_qp_setup; + + local: *; +}; diff --git a/drivers/crypto/nitrox/meson.build b/drivers/crypto/nitrox/meson.build index 2cc47c4626..f8887713d2 100644 --- a/drivers/crypto/nitrox/meson.build +++ b/drivers/crypto/nitrox/meson.build @@ -6,13 +6,12 @@ if not is_linux reason = 'only supported on Linux' endif -deps += ['bus_pci'] -sources = files( - 'nitrox_device.c', - 'nitrox_hal.c', - 'nitrox_logs.c', +deps += ['common_nitrox', 'bus_pci', 'cryptodev'] + +sources += files( 'nitrox_sym.c', 'nitrox_sym_capabilities.c', 'nitrox_sym_reqmgr.c', - 'nitrox_qp.c', ) + +includes += include_directories('../../common/nitrox') diff --git a/drivers/meson.build b/drivers/meson.build index f2be71bc05..9fd66e3264 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -14,6 +14,7 @@ subdirs = [ 'common/cnxk', # depends on bus. 'common/mlx5', # depends on bus. 'common/nfp', # depends on bus. + 'common/nitrox', # depends on bus. 'common/qat', # depends on bus. 'common/sfc_efx', # depends on bus. 'mempool', # depends on common and bus. 
From patchwork Fri Mar 1 16:25:48 2024
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 137684
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
Subject: [PATCH v4 2/7] drivers/compress: add Nitrox driver
Date: Fri, 1 Mar 2024 21:55:48 +0530
Message-ID: <20240301162553.30523-3-rnagadheeraj@marvell.com>
In-Reply-To: <20240301162553.30523-1-rnagadheeraj@marvell.com>
References: <20240301162553.30523-1-rnagadheeraj@marvell.com>
List-Id: DPDK patches and discussions

Introduce the Nitrox compressdev driver.
This patch implements the following operations:
- dev_configure
- dev_close
- dev_infos_get
- private_xform_create
- private_xform_free

Signed-off-by: Nagadheeraj Rottela
---
 MAINTAINERS                                  |   7 +
 doc/guides/compressdevs/features/nitrox.ini  |  17 +
 doc/guides/compressdevs/index.rst            |   1 +
 doc/guides/compressdevs/nitrox.rst           |  50 +++
 doc/guides/rel_notes/release_24_03.rst       |   3 +
 drivers/common/nitrox/meson.build            |   1 +
 drivers/common/nitrox/nitrox_device.c        |  37 +-
 drivers/common/nitrox/nitrox_device.h        |   3 +
 drivers/compress/nitrox/meson.build          |  15 +
 drivers/compress/nitrox/nitrox_comp.c        | 353 +++++++++++++++++++
 drivers/compress/nitrox/nitrox_comp.h        |  33 ++
 drivers/compress/nitrox/nitrox_comp_reqmgr.h |  40 +++
 12 files changed, 555 insertions(+), 5 deletions(-)
 create mode 100644 doc/guides/compressdevs/features/nitrox.ini
 create mode 100644 doc/guides/compressdevs/nitrox.rst
 create mode 100644 drivers/compress/nitrox/meson.build
 create mode 100644 drivers/compress/nitrox/nitrox_comp.c
 create mode 100644 drivers/compress/nitrox/nitrox_comp.h
 create mode 100644 drivers/compress/nitrox/nitrox_comp_reqmgr.h

diff --git a/MAINTAINERS b/MAINTAINERS
index d6abebc55c..a6e2cf6eae 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1215,6 +1215,13 @@ F: drivers/compress/isal/
 F: doc/guides/compressdevs/isal.rst
 F: doc/guides/compressdevs/features/isal.ini
+Marvell Nitrox
+M: Nagadheeraj Rottela
+F: drivers/compress/nitrox/
+F: drivers/common/nitrox/
+F: doc/guides/compressdevs/nitrox.rst
+F: doc/guides/compressdevs/features/nitrox.ini
+
 NVIDIA mlx5
 M: Matan Azrad
 F: drivers/compress/mlx5/
diff --git a/doc/guides/compressdevs/features/nitrox.ini b/doc/guides/compressdevs/features/nitrox.ini
new file mode 100644
index 0000000000..1b6a96ac6d
--- /dev/null
+++ b/doc/guides/compressdevs/features/nitrox.ini
@@ -0,0 +1,17 @@
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; Supported features of 'nitrox' compression driver.
+;
+[Features]
+HW Accelerated         = Y
+Stateful Compression   = Y
+Stateful Decompression = Y
+OOP SGL In SGL Out     = Y
+OOP SGL In LB Out      = Y
+OOP LB In SGL Out      = Y
+Deflate                = Y
+Adler32                = Y
+Crc32                  = Y
+Fixed                  = Y
+Dynamic                = Y
diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst
index 54a3ef4273..849f211688 100644
--- a/doc/guides/compressdevs/index.rst
+++ b/doc/guides/compressdevs/index.rst
@@ -12,6 +12,7 @@ Compression Device Drivers
     overview
     isal
     mlx5
+    nitrox
     octeontx
     qat_comp
     zlib
diff --git a/doc/guides/compressdevs/nitrox.rst b/doc/guides/compressdevs/nitrox.rst
new file mode 100644
index 0000000000..840fd7241a
--- /dev/null
+++ b/doc/guides/compressdevs/nitrox.rst
@@ -0,0 +1,50 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2024 Marvell.
+
+Marvell NITROX Compression Poll Mode Driver
+===========================================
+
+The Nitrox compression poll mode driver provides support for offloading
+compression and decompression operations to the NITROX V processor.
+Detailed information about the NITROX V processor can be obtained here:
+
+* https://www.marvell.com/security-solutions/nitrox-security-processors/nitrox-v/
+
+Features
+--------
+
+The NITROX V compression PMD has support for:
+
+Compression/Decompression algorithm:
+
+* DEFLATE
+
+Huffman code type:
+
+* FIXED
+* DYNAMIC
+
+Window size support:
+
+* Min - 2 bytes
+* Max - 32KB
+
+Checksum generation:
+
+* CRC32, Adler32
+
+Limitations
+-----------
+
+* Compressdev level 0, no compression, is not supported.
+
+Initialization
+--------------
+
+The Nitrox compression PMD depends on the Nitrox kernel PF driver being
+installed on the platform. The Nitrox PF driver is required to create VF
+devices which will be used by the PMD. Each VF device can enable one
+compressdev PMD.
+
+The Nitrox kernel PF driver is available as part of the CNN55XX-Driver SDK.
+The SDK and its installation instructions can be obtained from:
+`Marvell Customer Portal `_.
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst index 879bb4944c..bb91953a23 100644 --- a/doc/guides/rel_notes/release_24_03.rst +++ b/doc/guides/rel_notes/release_24_03.rst @@ -138,6 +138,9 @@ New Features to support TLS v1.2, TLS v1.3 and DTLS v1.2. * Added PMD API to allow raw submission of instructions to CPT. +* **Added Marvell NITROX compression PMD.** + + * Added support for DEFLATE compression and decompression. Removed Items ------------- diff --git a/drivers/common/nitrox/meson.build b/drivers/common/nitrox/meson.build index 99fadbbfc9..f3cb42f006 100644 --- a/drivers/common/nitrox/meson.build +++ b/drivers/common/nitrox/meson.build @@ -16,3 +16,4 @@ sources += files( ) includes += include_directories('../../crypto/nitrox') +includes += include_directories('../../compress/nitrox') diff --git a/drivers/common/nitrox/nitrox_device.c b/drivers/common/nitrox/nitrox_device.c index b2f638ec8a..39edc440a7 100644 --- a/drivers/common/nitrox/nitrox_device.c +++ b/drivers/common/nitrox/nitrox_device.c @@ -7,6 +7,7 @@ #include "nitrox_device.h" #include "nitrox_hal.h" #include "nitrox_sym.h" +#include "nitrox_comp.h" #define PCI_VENDOR_ID_CAVIUM 0x177d #define NITROX_V_PCI_VF_DEV_ID 0x13 @@ -67,7 +68,7 @@ nitrox_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pdev) { struct nitrox_device *ndev; - int err; + int err = -1; /* Nitrox CSR space */ if (!pdev->mem_resource[0].addr) @@ -79,12 +80,20 @@ nitrox_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, ndev_init(ndev, pdev); err = nitrox_sym_pmd_create(ndev); - if (err) { - ndev_release(ndev); - return err; - } + if (err) + goto sym_pmd_err; + + err = nitrox_comp_pmd_create(ndev); + if (err) + goto comp_pmd_err; return 0; + +comp_pmd_err: + nitrox_sym_pmd_destroy(ndev); +sym_pmd_err: + ndev_release(ndev); + return err; } static int @@ -101,6 +110,10 @@ nitrox_pci_remove(struct rte_pci_device *pdev) if (err) return err; + err = 
nitrox_comp_pmd_destroy(ndev); + if (err) + return err; + ndev_release(ndev); return 0; } @@ -134,5 +147,19 @@ nitrox_sym_pmd_destroy(struct nitrox_device *ndev) return 0; } +__rte_weak int +nitrox_comp_pmd_create(struct nitrox_device *ndev) +{ + RTE_SET_USED(ndev); + return 0; +} + +__rte_weak int +nitrox_comp_pmd_destroy(struct nitrox_device *ndev) +{ + RTE_SET_USED(ndev); + return 0; +} + RTE_PMD_REGISTER_PCI(nitrox, nitrox_pmd); RTE_PMD_REGISTER_PCI_TABLE(nitrox, pci_id_nitrox_map); diff --git a/drivers/common/nitrox/nitrox_device.h b/drivers/common/nitrox/nitrox_device.h index b7c7ffd772..877bccb321 100644 --- a/drivers/common/nitrox/nitrox_device.h +++ b/drivers/common/nitrox/nitrox_device.h @@ -8,13 +8,16 @@ #include struct nitrox_sym_device; +struct nitrox_comp_device; struct nitrox_device { TAILQ_ENTRY(nitrox_device) next; struct rte_pci_device *pdev; uint8_t *bar_addr; struct nitrox_sym_device *sym_dev; + struct nitrox_comp_device *comp_dev; struct rte_device rte_sym_dev; + struct rte_device rte_comp_dev; uint16_t nr_queues; }; diff --git a/drivers/compress/nitrox/meson.build b/drivers/compress/nitrox/meson.build new file mode 100644 index 0000000000..f137303689 --- /dev/null +++ b/drivers/compress/nitrox/meson.build @@ -0,0 +1,15 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2024 Marvell. + +if not is_linux + build = false + reason = 'only supported on Linux' +endif + +deps += ['common_nitrox', 'bus_pci', 'compressdev'] + +sources += files( + 'nitrox_comp.c', +) + +includes += include_directories('../../common/nitrox') diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c new file mode 100644 index 0000000000..e97a686fbf --- /dev/null +++ b/drivers/compress/nitrox/nitrox_comp.c @@ -0,0 +1,353 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. 
+ */ + +#include +#include +#include + +#include "nitrox_comp.h" +#include "nitrox_device.h" +#include "nitrox_logs.h" +#include "nitrox_comp_reqmgr.h" + +static const char nitrox_comp_drv_name[] = RTE_STR(COMPRESSDEV_NAME_NITROX_PMD); +static const struct rte_driver nitrox_rte_comp_drv = { + .name = nitrox_comp_drv_name, + .alias = nitrox_comp_drv_name +}; + +static const struct rte_compressdev_capabilities + nitrox_comp_pmd_capabilities[] = { + { .algo = RTE_COMP_ALGO_DEFLATE, + .comp_feature_flags = RTE_COMP_FF_HUFFMAN_FIXED | + RTE_COMP_FF_HUFFMAN_DYNAMIC | + RTE_COMP_FF_CRC32_CHECKSUM | + RTE_COMP_FF_ADLER32_CHECKSUM | + RTE_COMP_FF_SHAREABLE_PRIV_XFORM | + RTE_COMP_FF_OOP_SGL_IN_SGL_OUT | + RTE_COMP_FF_OOP_SGL_IN_LB_OUT | + RTE_COMP_FF_OOP_LB_IN_SGL_OUT, + .window_size = { + .min = NITROX_COMP_WINDOW_SIZE_MIN, + .max = NITROX_COMP_WINDOW_SIZE_MAX, + .increment = 1 + }, + }, + RTE_COMP_END_OF_CAPABILITIES_LIST() +}; + +static int nitrox_comp_dev_configure(struct rte_compressdev *dev, + struct rte_compressdev_config *config) +{ + struct nitrox_comp_device *comp_dev = dev->data->dev_private; + struct nitrox_device *ndev = comp_dev->ndev; + uint32_t xform_cnt; + char name[RTE_MEMPOOL_NAMESIZE]; + + if (config->nb_queue_pairs > ndev->nr_queues) { + NITROX_LOG(ERR, "Invalid queue pairs, max supported %d\n", + ndev->nr_queues); + return -EINVAL; + } + + xform_cnt = config->max_nb_priv_xforms + config->max_nb_streams; + if (unlikely(xform_cnt == 0)) { + NITROX_LOG(ERR, "Invalid configuration with 0 xforms\n"); + return -EINVAL; + } + + snprintf(name, sizeof(name), "%s_xform", dev->data->name); + comp_dev->xform_pool = rte_mempool_create(name, + xform_cnt, sizeof(struct nitrox_comp_xform), + 0, 0, NULL, NULL, NULL, NULL, + config->socket_id, 0); + if (comp_dev->xform_pool == NULL) { + NITROX_LOG(ERR, "Failed to create xform pool, err %d\n", + rte_errno); + return -rte_errno; + } + + return 0; +} + +static int nitrox_comp_dev_start(struct rte_compressdev *dev) +{ + 
RTE_SET_USED(dev); + return 0; +} + +static void nitrox_comp_dev_stop(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); +} + +static int nitrox_comp_dev_close(struct rte_compressdev *dev) +{ + struct nitrox_comp_device *comp_dev = dev->data->dev_private; + + rte_mempool_free(comp_dev->xform_pool); + comp_dev->xform_pool = NULL; + return 0; +} + +static void nitrox_comp_stats_get(struct rte_compressdev *dev, + struct rte_compressdev_stats *stats) +{ + RTE_SET_USED(dev); + RTE_SET_USED(stats); +} + +static void nitrox_comp_stats_reset(struct rte_compressdev *dev) +{ + RTE_SET_USED(dev); +} + +static void nitrox_comp_dev_info_get(struct rte_compressdev *dev, + struct rte_compressdev_info *info) +{ + struct nitrox_comp_device *comp_dev = dev->data->dev_private; + struct nitrox_device *ndev = comp_dev->ndev; + + if (!info) + return; + + info->max_nb_queue_pairs = ndev->nr_queues; + info->feature_flags = dev->feature_flags; + info->capabilities = nitrox_comp_pmd_capabilities; +} + +static int nitrox_comp_queue_pair_setup(struct rte_compressdev *dev, + uint16_t qp_id, + uint32_t max_inflight_ops, int socket_id) +{ + RTE_SET_USED(dev); + RTE_SET_USED(qp_id); + RTE_SET_USED(max_inflight_ops); + RTE_SET_USED(socket_id); + return -1; +} + +static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev, + uint16_t qp_id) +{ + RTE_SET_USED(dev); + RTE_SET_USED(qp_id); + return 0; +} + +static int nitrox_comp_private_xform_create(struct rte_compressdev *dev, + const struct rte_comp_xform *xform, + void **private_xform) +{ + struct nitrox_comp_device *comp_dev = dev->data->dev_private; + struct nitrox_comp_xform *nxform; + enum rte_comp_checksum_type chksum_type; + int ret; + + if (unlikely(comp_dev->xform_pool == NULL)) { + NITROX_LOG(ERR, "private xform pool not yet created\n"); + return -EINVAL; + } + + if (rte_mempool_get(comp_dev->xform_pool, private_xform)) { + NITROX_LOG(ERR, "Failed to get from private xform pool\n"); + return -ENOMEM; + } + + nxform = (struct 
nitrox_comp_xform *)*private_xform; + memset(nxform, 0, sizeof(*nxform)); + if (xform->type == RTE_COMP_COMPRESS) { + enum rte_comp_huffman algo; + int level; + + nxform->op = NITROX_COMP_OP_COMPRESS; + if (xform->compress.algo != RTE_COMP_ALGO_DEFLATE) { + NITROX_LOG(ERR, "Only deflate is supported\n"); + ret = -ENOTSUP; + goto err_exit; + } + + algo = xform->compress.deflate.huffman; + if (algo == RTE_COMP_HUFFMAN_DEFAULT) + nxform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT; + else if (algo == RTE_COMP_HUFFMAN_FIXED) + nxform->algo = NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF; + else if (algo == RTE_COMP_HUFFMAN_DYNAMIC) + nxform->algo = NITROX_COMP_ALGO_DEFLATE_DYNHUFF; + else { + NITROX_LOG(ERR, "Invalid deflate algorithm %d\n", algo); + ret = -EINVAL; + goto err_exit; + } + + level = xform->compress.level; + if (level == RTE_COMP_LEVEL_PMD_DEFAULT) { + nxform->level = NITROX_COMP_LEVEL_MEDIUM; + } else if (level >= NITROX_COMP_LEVEL_LOWEST_START && + level <= NITROX_COMP_LEVEL_LOWEST_END) { + nxform->level = NITROX_COMP_LEVEL_LOWEST; + } else if (level >= NITROX_COMP_LEVEL_LOWER_START && + level <= NITROX_COMP_LEVEL_LOWER_END) { + nxform->level = NITROX_COMP_LEVEL_LOWER; + } else if (level >= NITROX_COMP_LEVEL_MEDIUM_START && + level <= NITROX_COMP_LEVEL_MEDIUM_END) { + nxform->level = NITROX_COMP_LEVEL_MEDIUM; + } else if (level >= NITROX_COMP_LEVEL_BEST_START && + level <= NITROX_COMP_LEVEL_BEST_END) { + nxform->level = NITROX_COMP_LEVEL_BEST; + } else { + NITROX_LOG(ERR, "Unsupported compression level %d\n", + xform->compress.level); + ret = -ENOTSUP; + goto err_exit; + } + + chksum_type = xform->compress.chksum; + } else if (xform->type == RTE_COMP_DECOMPRESS) { + nxform->op = NITROX_COMP_OP_DECOMPRESS; + if (xform->decompress.algo != RTE_COMP_ALGO_DEFLATE) { + NITROX_LOG(ERR, "Only deflate is supported\n"); + ret = -ENOTSUP; + goto err_exit; + } + + nxform->algo = NITROX_COMP_ALGO_DEFLATE_DEFAULT; + nxform->level = NITROX_COMP_LEVEL_BEST; + chksum_type = 
xform->decompress.chksum; + } else { + ret = -EINVAL; + goto err_exit; + } + + if (chksum_type == RTE_COMP_CHECKSUM_NONE) + nxform->chksum_type = NITROX_CHKSUM_TYPE_NONE; + else if (chksum_type == RTE_COMP_CHECKSUM_CRC32) + nxform->chksum_type = NITROX_CHKSUM_TYPE_CRC32; + else if (chksum_type == RTE_COMP_CHECKSUM_ADLER32) + nxform->chksum_type = NITROX_CHKSUM_TYPE_ADLER32; + else { + NITROX_LOG(ERR, "Unsupported checksum type %d\n", + chksum_type); + ret = -ENOTSUP; + goto err_exit; + } + + return 0; +err_exit: + memset(nxform, 0, sizeof(*nxform)); + rte_mempool_put(comp_dev->xform_pool, nxform); + return ret; +} + +static int nitrox_comp_private_xform_free(struct rte_compressdev *dev, + void *private_xform) +{ + struct nitrox_comp_xform *nxform = private_xform; + struct rte_mempool *mp = rte_mempool_from_obj(nxform); + + RTE_SET_USED(dev); + if (unlikely(nxform == NULL)) + return -EINVAL; + + memset(nxform, 0, sizeof(*nxform)); + mp = rte_mempool_from_obj(nxform); + rte_mempool_put(mp, nxform); + return 0; +} + +static uint16_t nitrox_comp_dev_enq_burst(void *qp, + struct rte_comp_op **ops, + uint16_t nb_ops) +{ + RTE_SET_USED(qp); + RTE_SET_USED(ops); + RTE_SET_USED(nb_ops); + return 0; +} + +static uint16_t nitrox_comp_dev_deq_burst(void *qp, + struct rte_comp_op **ops, + uint16_t nb_ops) +{ + RTE_SET_USED(qp); + RTE_SET_USED(ops); + RTE_SET_USED(nb_ops); + return 0; +} + +static struct rte_compressdev_ops nitrox_compressdev_ops = { + .dev_configure = nitrox_comp_dev_configure, + .dev_start = nitrox_comp_dev_start, + .dev_stop = nitrox_comp_dev_stop, + .dev_close = nitrox_comp_dev_close, + + .stats_get = nitrox_comp_stats_get, + .stats_reset = nitrox_comp_stats_reset, + + .dev_infos_get = nitrox_comp_dev_info_get, + + .queue_pair_setup = nitrox_comp_queue_pair_setup, + .queue_pair_release = nitrox_comp_queue_pair_release, + + .private_xform_create = nitrox_comp_private_xform_create, + .private_xform_free = nitrox_comp_private_xform_free, + .stream_create = 
NULL, + .stream_free = NULL +}; + +int +nitrox_comp_pmd_create(struct nitrox_device *ndev) +{ + char name[RTE_COMPRESSDEV_NAME_MAX_LEN]; + struct rte_compressdev_pmd_init_params init_params = { + .name = "", + .socket_id = ndev->pdev->device.numa_node, + }; + struct rte_compressdev *cdev; + + rte_pci_device_name(&ndev->pdev->addr, name, sizeof(name)); + snprintf(name + strlen(name), + RTE_COMPRESSDEV_NAME_MAX_LEN - strlen(name), + "_n5comp"); + ndev->rte_comp_dev.driver = &nitrox_rte_comp_drv; + ndev->rte_comp_dev.numa_node = ndev->pdev->device.numa_node; + ndev->rte_comp_dev.devargs = NULL; + cdev = rte_compressdev_pmd_create(name, + &ndev->rte_comp_dev, + sizeof(struct nitrox_comp_device), + &init_params); + if (!cdev) { + NITROX_LOG(ERR, "Cryptodev '%s' creation failed\n", name); + return -ENODEV; + } + + cdev->dev_ops = &nitrox_compressdev_ops; + cdev->enqueue_burst = nitrox_comp_dev_enq_burst; + cdev->dequeue_burst = nitrox_comp_dev_deq_burst; + cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED; + + ndev->comp_dev = cdev->data->dev_private; + ndev->comp_dev->cdev = cdev; + ndev->comp_dev->ndev = ndev; + ndev->comp_dev->xform_pool = NULL; + NITROX_LOG(DEBUG, "Created compressdev '%s', dev_id %d\n", + cdev->data->name, cdev->data->dev_id); + return 0; +} + +int +nitrox_comp_pmd_destroy(struct nitrox_device *ndev) +{ + int err; + + if (ndev->comp_dev == NULL) + return 0; + + err = rte_compressdev_pmd_destroy(ndev->comp_dev->cdev); + if (err) + return err; + + ndev->comp_dev = NULL; + return 0; +} diff --git a/drivers/compress/nitrox/nitrox_comp.h b/drivers/compress/nitrox/nitrox_comp.h new file mode 100644 index 0000000000..90e1931b05 --- /dev/null +++ b/drivers/compress/nitrox/nitrox_comp.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. 
+ */ + +#ifndef _NITROX_COMP_H_ +#define _NITROX_COMP_H_ + +#define COMPRESSDEV_NAME_NITROX_PMD compress_nitrox +#define NITROX_DECOMP_CTX_SIZE 2048 +#define NITROX_CONSTANTS_MAX_SEARCH_DEPTH 31744 +#define NITROX_COMP_WINDOW_SIZE_MIN 1 +#define NITROX_COMP_WINDOW_SIZE_MAX 15 +#define NITROX_COMP_LEVEL_LOWEST_START 1 +#define NITROX_COMP_LEVEL_LOWEST_END 2 +#define NITROX_COMP_LEVEL_LOWER_START 3 +#define NITROX_COMP_LEVEL_LOWER_END 4 +#define NITROX_COMP_LEVEL_MEDIUM_START 5 +#define NITROX_COMP_LEVEL_MEDIUM_END 6 +#define NITROX_COMP_LEVEL_BEST_START 7 +#define NITROX_COMP_LEVEL_BEST_END 9 + +struct nitrox_comp_device { + struct rte_compressdev *cdev; + struct nitrox_device *ndev; + struct rte_mempool *xform_pool; +}; + +struct nitrox_device; + +int nitrox_comp_pmd_create(struct nitrox_device *ndev); +int nitrox_comp_pmd_destroy(struct nitrox_device *ndev); + +#endif /* _NITROX_COMP_H_ */ diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.h b/drivers/compress/nitrox/nitrox_comp_reqmgr.h new file mode 100644 index 0000000000..14f35a1e5b --- /dev/null +++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. 
+ */
+
+#ifndef _NITROX_COMP_REQMGR_H_
+#define _NITROX_COMP_REQMGR_H_
+
+enum nitrox_comp_op {
+	NITROX_COMP_OP_DECOMPRESS,
+	NITROX_COMP_OP_COMPRESS,
+};
+
+enum nitrox_comp_algo {
+	NITROX_COMP_ALGO_DEFLATE_DEFAULT,
+	NITROX_COMP_ALGO_DEFLATE_DYNHUFF,
+	NITROX_COMP_ALGO_DEFLATE_FIXEDHUFF,
+	NITROX_COMP_ALGO_LZS,
+};
+
+enum nitrox_comp_level {
+	NITROX_COMP_LEVEL_BEST,
+	NITROX_COMP_LEVEL_MEDIUM,
+	NITROX_COMP_LEVEL_LOWER,
+	NITROX_COMP_LEVEL_LOWEST,
+};
+
+enum nitrox_chksum_type {
+	NITROX_CHKSUM_TYPE_CRC32,
+	NITROX_CHKSUM_TYPE_ADLER32,
+	NITROX_CHKSUM_TYPE_NONE,
+};
+
+struct nitrox_comp_xform {
+	enum nitrox_comp_op op;
+	enum nitrox_comp_algo algo;
+	enum nitrox_comp_level level;
+	enum nitrox_chksum_type chksum_type;
+};
+
+#endif /* _NITROX_COMP_REQMGR_H_ */

From patchwork Fri Mar 1 16:25:49 2024
From: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Subject: [PATCH v4 3/7] common/nitrox: add compress hardware queue management
Date: Fri, 1 Mar 2024 21:55:49 +0530
Message-ID: <20240301162553.30523-4-rnagadheeraj@marvell.com>

Added compress device hardware ring initialization.

Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 drivers/common/nitrox/nitrox_csr.h |  12 +++
 drivers/common/nitrox/nitrox_hal.c | 116 +++++++++++++++++++++++++++++
 drivers/common/nitrox/nitrox_hal.h | 115 ++++++++++++++++++++++++++++
 drivers/common/nitrox/nitrox_qp.c  |  54 ++++++++++--
 drivers/common/nitrox/nitrox_qp.h  |  49 ++++++++++--
 5 files changed, 330 insertions(+), 16 deletions(-)

diff --git a/drivers/common/nitrox/nitrox_csr.h b/drivers/common/nitrox/nitrox_csr.h
index de7a3c6713..97c797c2e2 100644
--- a/drivers/common/nitrox/nitrox_csr.h
+++ b/drivers/common/nitrox/nitrox_csr.h
@@ -25,6 +25,18 @@
 /* AQM Virtual Function Registers */
 #define AQMQ_QSZX(_i)		(0x20008UL + ((_i) * 0x40000UL))
 
+/* ZQM virtual function registers */
+#define ZQMQ_DRBLX(_i)		(0x30000UL + ((_i) * 0x40000UL))
+#define ZQMQ_QSZX(_i)		(0x30008UL + ((_i) * 0x40000UL))
+#define ZQMQ_BADRX(_i)		(0x30010UL + ((_i) * 0x40000UL))
+#define ZQMQ_NXT_CMDX(_i)	(0x30018UL + ((_i) * 0x40000UL))
+#define ZQMQ_CMD_CNTX(_i)	(0x30020UL + ((_i) * 0x40000UL))
+#define ZQMQ_CMP_THRX(_i)	(0x30028UL + ((_i) * 0x40000UL))
+#define ZQMQ_CMP_CNTX(_i)	(0x30030UL + ((_i) * 0x40000UL))
+#define ZQMQ_TIMER_LDX(_i)	(0x30038UL + ((_i) * 0x40000UL))
+#define ZQMQ_ENX(_i)		(0x30048UL + ((_i) * 0x40000UL))
+#define ZQMQ_ACTIVITY_STATX(_i)	(0x30050UL + ((_i) * 0x40000UL))
+
 static inline uint64_t
 nitrox_read_csr(uint8_t *bar_addr, uint64_t offset)
 {
diff --git a/drivers/common/nitrox/nitrox_hal.c b/drivers/common/nitrox/nitrox_hal.c
index 433f3adb20..451549a664 100644
--- a/drivers/common/nitrox/nitrox_hal.c
+++ b/drivers/common/nitrox/nitrox_hal.c
@@ -9,6 +9,7 @@
 
 #include "nitrox_hal.h"
 #include "nitrox_csr.h"
+#include "nitrox_logs.h"
 
 #define MAX_VF_QUEUES	8
 #define MAX_PF_QUEUES	64
@@ -164,6 +165,121 @@
setup_nps_pkt_solicit_output_port(uint8_t *bar_addr, uint16_t port)
 	}
 }
 
+int
+zqmq_input_ring_disable(uint8_t *bar_addr, uint16_t ring)
+{
+	union zqmq_activity_stat zqmq_activity_stat;
+	union zqmq_en zqmq_en;
+	union zqmq_cmp_cnt zqmq_cmp_cnt;
+	uint64_t reg_addr;
+	int max_retries = 5;
+
+	/* clear queue enable */
+	reg_addr = ZQMQ_ENX(ring);
+	zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	zqmq_en.s.queue_enable = 0;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_en.u64);
+	rte_delay_us_block(100);
+
+	/* wait for queue active to clear */
+	reg_addr = ZQMQ_ACTIVITY_STATX(ring);
+	zqmq_activity_stat.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	while (zqmq_activity_stat.s.queue_active && max_retries--) {
+		rte_delay_ms(10);
+		zqmq_activity_stat.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	}
+
+	if (zqmq_activity_stat.s.queue_active) {
+		NITROX_LOG(ERR, "Failed to disable zqmq ring %d\n", ring);
+		return -EBUSY;
+	}
+
+	/* clear commands completed count */
+	reg_addr = ZQMQ_CMP_CNTX(ring);
+	zqmq_cmp_cnt.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_cmp_cnt.u64);
+	rte_delay_us_block(CSR_DELAY);
+	return 0;
+}
+
+int
+setup_zqmq_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
+		      phys_addr_t raddr)
+{
+	union zqmq_drbl zqmq_drbl;
+	union zqmq_qsz zqmq_qsz;
+	union zqmq_en zqmq_en;
+	union zqmq_cmp_thr zqmq_cmp_thr;
+	union zqmq_timer_ld zqmq_timer_ld;
+	uint64_t reg_addr = 0;
+	int max_retries = 5;
+	int err = 0;
+
+	err = zqmq_input_ring_disable(bar_addr, ring);
+	if (err)
+		return err;
+
+	/* clear doorbell count */
+	reg_addr = ZQMQ_DRBLX(ring);
+	zqmq_drbl.u64 = 0;
+	zqmq_drbl.s.dbell_count = 0xFFFFFFFF;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_drbl.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	reg_addr = ZQMQ_NXT_CMDX(ring);
+	nitrox_write_csr(bar_addr, reg_addr, 0);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write queue length */
+	reg_addr = ZQMQ_QSZX(ring);
+	zqmq_qsz.u64 = 0;
+	zqmq_qsz.s.host_queue_size = rsize;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_qsz.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write queue base address */
+	reg_addr = ZQMQ_BADRX(ring);
+	nitrox_write_csr(bar_addr, reg_addr, raddr);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write commands completed threshold */
+	reg_addr = ZQMQ_CMP_THRX(ring);
+	zqmq_cmp_thr.u64 = 0;
+	zqmq_cmp_thr.s.commands_completed_threshold = 0;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_cmp_thr.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* write timer load value */
+	reg_addr = ZQMQ_TIMER_LDX(ring);
+	zqmq_timer_ld.u64 = 0;
+	zqmq_timer_ld.s.timer_load_value = 0;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_timer_ld.u64);
+	rte_delay_us_block(CSR_DELAY);
+
+	/* enable queue */
+	reg_addr = ZQMQ_ENX(ring);
+	zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	zqmq_en.s.queue_enable = 1;
+	nitrox_write_csr(bar_addr, reg_addr, zqmq_en.u64);
+	rte_delay_us_block(100);
+
+	/* wait for queue enable to be set */
+	zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	while (!zqmq_en.s.queue_enable && max_retries--) {
+		rte_delay_ms(10);
+		zqmq_en.u64 = nitrox_read_csr(bar_addr, reg_addr);
+	}
+
+	if (!zqmq_en.s.queue_enable) {
+		NITROX_LOG(ERR, "Failed to enable zqmq ring %d\n", ring);
+		err = -EFAULT;
+	} else {
+		err = 0;
+	}
+
+	return err;
+}
+
 int
 vf_get_vf_config_mode(uint8_t *bar_addr)
 {
diff --git a/drivers/common/nitrox/nitrox_hal.h b/drivers/common/nitrox/nitrox_hal.h
index dcfbd11d85..2367b967e5 100644
--- a/drivers/common/nitrox/nitrox_hal.h
+++ b/drivers/common/nitrox/nitrox_hal.h
@@ -146,6 +146,101 @@ union aqmq_qsz {
 	} s;
 };
 
+union zqmq_activity_stat {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 63;
+		uint64_t queue_active : 1;
+#else
+		uint64_t queue_active : 1;
+		uint64_t raz : 63;
+#endif
+	} s;
+};
+
+union zqmq_en {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 63;
+		uint64_t queue_enable : 1;
+#else
+		uint64_t queue_enable : 1;
+		uint64_t raz : 63;
+#endif
+	} s;
+};
+
+union zqmq_cmp_cnt {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 30;
+		uint64_t resend : 1;
+		uint64_t completion_status : 1;
+		uint64_t commands_completed_count : 32;
+#else
+		uint64_t commands_completed_count : 32;
+		uint64_t completion_status : 1;
+		uint64_t resend : 1;
+		uint64_t raz : 30;
+#endif
+	} s;
+};
+
+union zqmq_drbl {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t dbell_count : 32;
+#else
+		uint64_t dbell_count : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
+union zqmq_qsz {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t host_queue_size : 32;
+#else
+		uint64_t host_queue_size : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
+union zqmq_cmp_thr {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t commands_completed_threshold : 32;
+#else
+		uint64_t commands_completed_threshold : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
+union zqmq_timer_ld {
+	uint64_t u64;
+	struct {
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+		uint64_t raz : 32;
+		uint64_t timer_load_value : 32;
+#else
+		uint64_t timer_load_value : 32;
+		uint64_t raz : 32;
+#endif
+	} s;
+};
+
 enum nitrox_vf_mode {
 	NITROX_MODE_PF = 0x0,
 	NITROX_MODE_VF16 = 0x1,
@@ -154,6 +249,23 @@ enum nitrox_vf_mode {
 	NITROX_MODE_VF128 = 0x4,
 };
 
+static inline int
+inc_zqmq_next_cmd(uint8_t *bar_addr, uint16_t ring)
+{
+	uint64_t reg_addr = 0;
+	uint64_t val;
+
+	reg_addr = ZQMQ_NXT_CMDX(ring);
+	val = nitrox_read_csr(bar_addr, reg_addr);
+	val++;
+	nitrox_write_csr(bar_addr, reg_addr, val);
+	rte_delay_us_block(CSR_DELAY);
+	if (nitrox_read_csr(bar_addr, reg_addr) != val)
+		return -EIO;
+
+	return 0;
+}
+
 int vf_get_vf_config_mode(uint8_t *bar_addr);
 int vf_config_mode_to_nr_queues(enum nitrox_vf_mode vf_mode);
 void setup_nps_pkt_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
@@ -161,5 +273,8 @@ void
setup_nps_pkt_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
 void setup_nps_pkt_solicit_output_port(uint8_t *bar_addr, uint16_t port);
 void nps_pkt_input_ring_disable(uint8_t *bar_addr, uint16_t ring);
 void nps_pkt_solicited_port_disable(uint8_t *bar_addr, uint16_t port);
+int setup_zqmq_input_ring(uint8_t *bar_addr, uint16_t ring, uint32_t rsize,
+			  phys_addr_t raddr);
+int zqmq_input_ring_disable(uint8_t *bar_addr, uint16_t ring);
 
 #endif /* _NITROX_HAL_H_ */
diff --git a/drivers/common/nitrox/nitrox_qp.c b/drivers/common/nitrox/nitrox_qp.c
index 79a26f0024..1665c3c40d 100644
--- a/drivers/common/nitrox/nitrox_qp.c
+++ b/drivers/common/nitrox/nitrox_qp.c
@@ -20,6 +20,7 @@ nitrox_setup_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr,
 	const struct rte_memzone *mz;
 	size_t cmdq_size = qp->count * instr_size;
 	uint64_t offset;
+	int err = 0;
 
 	snprintf(mz_name, sizeof(mz_name), "%s_cmdq_%d", dev_name, qp->qno);
 	mz = rte_memzone_reserve_aligned(mz_name, cmdq_size, socket_id,
@@ -32,14 +33,34 @@ nitrox_setup_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr,
 		return -ENOMEM;
 	}
 
+	switch (qp->type) {
+	case NITROX_QUEUE_SE:
+		offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(qp->qno);
+		qp->cmdq.dbell_csr_addr = NITROX_CSR_ADDR(bar_addr, offset);
+		setup_nps_pkt_input_ring(bar_addr, qp->qno, qp->count,
+					 mz->iova);
+		setup_nps_pkt_solicit_output_port(bar_addr, qp->qno);
+		break;
+	case NITROX_QUEUE_ZIP:
+		offset = ZQMQ_DRBLX(qp->qno);
+		qp->cmdq.dbell_csr_addr = NITROX_CSR_ADDR(bar_addr, offset);
+		err = setup_zqmq_input_ring(bar_addr, qp->qno, qp->count,
+					    mz->iova);
+		break;
+	default:
+		NITROX_LOG(ERR, "Invalid queue type %d\n", qp->type);
+		err = -EINVAL;
+		break;
+	}
+
+	if (err) {
+		rte_memzone_free(mz);
+		return err;
+	}
+
 	qp->cmdq.mz = mz;
-	offset = NPS_PKT_IN_INSTR_BAOFF_DBELLX(qp->qno);
-	qp->cmdq.dbell_csr_addr = NITROX_CSR_ADDR(bar_addr, offset);
 	qp->cmdq.ring = mz->addr;
 	qp->cmdq.instr_size = instr_size;
-	setup_nps_pkt_input_ring(bar_addr, qp->qno, qp->count, mz->iova);
-	setup_nps_pkt_solicit_output_port(bar_addr, qp->qno);
-
 	return 0;
 }
 
@@ -62,8 +83,23 @@ nitrox_setup_ridq(struct nitrox_qp *qp, int socket_id)
 static int
 nitrox_release_cmdq(struct nitrox_qp *qp, uint8_t *bar_addr)
 {
-	nps_pkt_solicited_port_disable(bar_addr, qp->qno);
-	nps_pkt_input_ring_disable(bar_addr, qp->qno);
+	int err = 0;
+
+	switch (qp->type) {
+	case NITROX_QUEUE_SE:
+		nps_pkt_solicited_port_disable(bar_addr, qp->qno);
+		nps_pkt_input_ring_disable(bar_addr, qp->qno);
+		break;
+	case NITROX_QUEUE_ZIP:
+		err = zqmq_input_ring_disable(bar_addr, qp->qno);
+		break;
+	default:
+		err = -EINVAL;
+	}
+
+	if (err)
+		return err;
+
 	return rte_memzone_free(qp->cmdq.mz);
 }
 
@@ -83,9 +119,11 @@ nitrox_qp_setup(struct nitrox_qp *qp, uint8_t *bar_addr, const char *dev_name,
 		return -EINVAL;
 	}
 
+	qp->bar_addr = bar_addr;
 	qp->count = count;
 	qp->head = qp->tail = 0;
-	rte_atomic16_init(&qp->pending_count);
+	rte_atomic_store_explicit(&qp->pending_count, 0,
+				  rte_memory_order_relaxed);
 	err = nitrox_setup_cmdq(qp, bar_addr, dev_name, instr_size, socket_id);
 	if (err)
 		return err;
diff --git a/drivers/common/nitrox/nitrox_qp.h b/drivers/common/nitrox/nitrox_qp.h
index 23dffd1268..c328b88926 100644
--- a/drivers/common/nitrox/nitrox_qp.h
+++ b/drivers/common/nitrox/nitrox_qp.h
@@ -8,9 +8,16 @@
 #include
 #include
+#include "nitrox_hal.h"
 
 struct nitrox_softreq;
 
+enum nitrox_queue_type {
+	NITROX_QUEUE_SE,
+	NITROX_QUEUE_AE,
+	NITROX_QUEUE_ZIP,
+};
+
 struct command_queue {
 	const struct rte_memzone *mz;
 	uint8_t *dbell_csr_addr;
@@ -30,6 +37,8 @@ struct nitrox_qp_stats {
 };
 
 struct nitrox_qp {
+	enum nitrox_queue_type type;
+	uint8_t *bar_addr;
 	struct command_queue cmdq;
 	struct rid *ridq;
 	uint32_t count;
@@ -38,14 +47,16 @@ struct nitrox_qp {
 	struct rte_mempool *sr_mp;
 	struct nitrox_qp_stats stats;
 	uint16_t qno;
-	rte_atomic16_t pending_count;
+	RTE_ATOMIC(uint16_t) pending_count;
 };
 
 static inline uint16_t
 nitrox_qp_free_count(struct nitrox_qp *qp)
 {
-	uint16_t pending_count = rte_atomic16_read(&qp->pending_count);
+	uint16_t pending_count;
 
+	pending_count = rte_atomic_load_explicit(&qp->pending_count,
+						 rte_memory_order_relaxed);
 	RTE_ASSERT(qp->count >= pending_count);
 	return (qp->count - pending_count);
 }
@@ -53,13 +64,15 @@ nitrox_qp_free_count(struct nitrox_qp *qp)
 static inline bool
 nitrox_qp_is_empty(struct nitrox_qp *qp)
 {
-	return (rte_atomic16_read(&qp->pending_count) == 0);
+	return (rte_atomic_load_explicit(&qp->pending_count,
+					 rte_memory_order_relaxed) == 0);
 }
 
 static inline uint16_t
 nitrox_qp_used_count(struct nitrox_qp *qp)
 {
-	return rte_atomic16_read(&qp->pending_count);
+	return rte_atomic_load_explicit(&qp->pending_count,
+					rte_memory_order_relaxed);
 }
 
 static inline struct nitrox_softreq *
@@ -67,7 +80,7 @@ nitrox_qp_get_softreq(struct nitrox_qp *qp)
 {
 	uint32_t tail = qp->tail % qp->count;
 
-	rte_smp_rmb();
+	rte_atomic_thread_fence(rte_memory_order_acquire);
 	return qp->ridq[tail].sr;
 }
 
@@ -92,15 +105,35 @@ nitrox_qp_enqueue(struct nitrox_qp *qp, void *instr, struct nitrox_softreq *sr)
 	memcpy(&qp->cmdq.ring[head * qp->cmdq.instr_size], instr,
 	       qp->cmdq.instr_size);
 	qp->ridq[head].sr = sr;
-	rte_smp_wmb();
-	rte_atomic16_inc(&qp->pending_count);
+	rte_atomic_thread_fence(rte_memory_order_release);
+	rte_atomic_fetch_add_explicit(&qp->pending_count, 1,
+				      rte_memory_order_relaxed);
+}
+
+static inline int
+nitrox_qp_enqueue_sr(struct nitrox_qp *qp, struct nitrox_softreq *sr)
+{
+	uint32_t head = qp->head % qp->count;
+	int err;
+
+	err = inc_zqmq_next_cmd(qp->bar_addr, qp->qno);
+	if (unlikely(err))
+		return err;
+
+	qp->head++;
+	qp->ridq[head].sr = sr;
+	rte_atomic_thread_fence(rte_memory_order_release);
+	rte_atomic_fetch_add_explicit(&qp->pending_count, 1,
+				      rte_memory_order_relaxed);
+	return 0;
 }
 
 static inline void
 nitrox_qp_dequeue(struct nitrox_qp *qp)
 {
 	qp->tail++;
-	rte_atomic16_dec(&qp->pending_count);
+	rte_atomic_fetch_sub_explicit(&qp->pending_count, 1,
+				      rte_memory_order_relaxed);
 }
 
 __rte_internal

From patchwork Fri Mar 1 16:25:50 2024
From: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Subject: [PATCH v4 4/7] crypto/nitrox: set queue type during queue pair setup
Date: Fri, 1 Mar 2024 21:55:50 +0530
Message-ID: <20240301162553.30523-5-rnagadheeraj@marvell.com>

Set queue type as SE to initialize symmetric hardware queue.
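The type set here is what the common command-queue code dispatches on when it programs the hardware ring: SE queues get the NPS packet ring and solicit port, ZIP queues get the ZQMQ ring. A minimal standalone sketch of that dispatch, reusing the ZQMQ_DRBLX() doorbell macro from nitrox_csr.h; the helper name dbell_offset() and the zero placeholder for non-ZIP types are illustrative only, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* From nitrox_csr.h: ZIP ring doorbell CSR, one 0x40000-byte stride per ring */
#define ZQMQ_DRBLX(_i) (0x30000UL + ((_i) * 0x40000UL))

enum nitrox_queue_type {
	NITROX_QUEUE_SE,
	NITROX_QUEUE_AE,
	NITROX_QUEUE_ZIP,
};

/* Illustrative helper: map (queue type, ring) to a doorbell CSR offset,
 * mirroring how nitrox_setup_cmdq() switches on qp->type. The SE/AE
 * offset macros are not reproduced here, so those cases return 0 as a
 * placeholder. */
static uint64_t
dbell_offset(enum nitrox_queue_type type, uint16_t ring)
{
	switch (type) {
	case NITROX_QUEUE_ZIP:
		return ZQMQ_DRBLX(ring);
	default:
		return 0; /* SE/AE doorbell offsets omitted in this sketch */
	}
}
```

Because the dispatch happens inside the shared nitrox_qp_setup() path, each PMD only has to set qp->type (SE here, ZIP in the compress PMD) before calling it.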
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 drivers/crypto/nitrox/nitrox_sym.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index 1244317438..03652d3ade 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -198,6 +198,7 @@ nitrox_sym_dev_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
 		return -ENOMEM;
 	}
 
+	qp->type = NITROX_QUEUE_SE;
 	qp->qno = qp_id;
 	err = nitrox_qp_setup(qp, ndev->bar_addr, cdev->data->name,
 			      qp_conf->nb_descriptors, NPS_PKT_IN_INSTR_SIZE,

From patchwork Fri Mar 1 16:25:51 2024

From: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Subject: [PATCH v4 5/7] compress/nitrox: add software queue management
Date: Fri, 1 Mar 2024 21:55:51 +0530
Message-ID: <20240301162553.30523-6-rnagadheeraj@marvell.com>

Added software queue management code
corresponding to queue pair setup and release functions.

Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 drivers/compress/nitrox/nitrox_comp.c | 115 +++++++++++++++++++++++---
 drivers/compress/nitrox/nitrox_comp.h |   1 +
 2 files changed, 105 insertions(+), 11 deletions(-)

diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c
index e97a686fbf..299cb8e783 100644
--- a/drivers/compress/nitrox/nitrox_comp.c
+++ b/drivers/compress/nitrox/nitrox_comp.c
@@ -5,11 +5,13 @@
 #include
 #include
 #include
+#include
 
 #include "nitrox_comp.h"
 #include "nitrox_device.h"
 #include "nitrox_logs.h"
 #include "nitrox_comp_reqmgr.h"
+#include "nitrox_qp.h"
 
 static const char nitrox_comp_drv_name[] = RTE_STR(COMPRESSDEV_NAME_NITROX_PMD);
 static const struct rte_driver nitrox_rte_comp_drv = {
@@ -17,6 +19,9 @@ static const struct rte_driver nitrox_rte_comp_drv = {
 	.alias = nitrox_comp_drv_name
 };
 
+static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev,
+					  uint16_t qp_id);
+
 static const struct rte_compressdev_capabilities
 				nitrox_comp_pmd_capabilities[] = {
 	{	.algo = RTE_COMP_ALGO_DEFLATE,
@@ -84,8 +89,15 @@ static void nitrox_comp_dev_stop(struct rte_compressdev *dev)
 
 static int nitrox_comp_dev_close(struct rte_compressdev *dev)
 {
+	int i, ret;
 	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
 
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = nitrox_comp_queue_pair_release(dev, i);
+		if (ret)
+			return ret;
+	}
+
 	rte_mempool_free(comp_dev->xform_pool);
 	comp_dev->xform_pool = NULL;
 	return 0;
@@ -94,13 +106,33 @@ static int nitrox_comp_dev_close(struct rte_compressdev *dev)
 static void nitrox_comp_stats_get(struct rte_compressdev *dev,
 				  struct rte_compressdev_stats *stats)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(stats);
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct nitrox_qp *qp = dev->data->queue_pairs[qp_id];
+
+		if (!qp)
+			continue;
+
+		stats->enqueued_count += qp->stats.enqueued_count;
+		stats->dequeued_count += qp->stats.dequeued_count;
+		stats->enqueue_err_count += qp->stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->stats.dequeue_err_count;
+	}
 }
 
 static void nitrox_comp_stats_reset(struct rte_compressdev *dev)
 {
-	RTE_SET_USED(dev);
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct nitrox_qp *qp = dev->data->queue_pairs[qp_id];
+
+		if (!qp)
+			continue;
+
+		memset(&qp->stats, 0, sizeof(qp->stats));
+	}
 }
 
 static void nitrox_comp_dev_info_get(struct rte_compressdev *dev,
@@ -121,19 +153,80 @@ static int nitrox_comp_queue_pair_setup(struct rte_compressdev *dev,
 				uint16_t qp_id,
 				uint32_t max_inflight_ops, int socket_id)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(qp_id);
-	RTE_SET_USED(max_inflight_ops);
-	RTE_SET_USED(socket_id);
-	return -1;
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+	struct nitrox_device *ndev = comp_dev->ndev;
+	struct nitrox_qp *qp = NULL;
+	int err;
+
+	NITROX_LOG(DEBUG, "queue %d\n", qp_id);
+	if (qp_id >= ndev->nr_queues) {
+		NITROX_LOG(ERR, "queue %u invalid, max queues supported %d\n",
+			   qp_id, ndev->nr_queues);
+		return -EINVAL;
+	}
+
+	if (dev->data->queue_pairs[qp_id]) {
+		err = nitrox_comp_queue_pair_release(dev, qp_id);
+		if (err)
+			return err;
+	}
+
+	qp = rte_zmalloc_socket("nitrox PMD qp", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (!qp) {
+		NITROX_LOG(ERR, "Failed to allocate nitrox qp\n");
+		return -ENOMEM;
+	}
+
+	qp->type = NITROX_QUEUE_ZIP;
+	qp->qno = qp_id;
+	err = nitrox_qp_setup(qp, ndev->bar_addr, dev->data->name,
+			      max_inflight_ops, ZIP_INSTR_SIZE,
+			      socket_id);
+	if (unlikely(err))
+		goto qp_setup_err;
+
+	dev->data->queue_pairs[qp_id] = qp;
+	NITROX_LOG(DEBUG, "queue %d setup done\n", qp_id);
+	return 0;
+
+qp_setup_err:
+	rte_free(qp);
+	return err;
 }
 
 static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev,
 					  uint16_t qp_id)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(qp_id);
-	return 0;
+	struct nitrox_comp_device *comp_dev = dev->data->dev_private;
+	struct nitrox_device *ndev = comp_dev->ndev;
+	struct nitrox_qp *qp;
+	int err;
+
+	NITROX_LOG(DEBUG, "queue %d\n", qp_id);
+	if (qp_id >= ndev->nr_queues) {
+		NITROX_LOG(ERR, "queue %u invalid, max queues supported %d\n",
+			   qp_id, ndev->nr_queues);
+		return -EINVAL;
+	}
+
+	qp = dev->data->queue_pairs[qp_id];
+	if (!qp) {
+		NITROX_LOG(DEBUG, "queue %u already freed\n", qp_id);
+		return 0;
+	}
+
+	if (!nitrox_qp_is_empty(qp)) {
+		NITROX_LOG(ERR, "queue %d not empty\n", qp_id);
+		return -EAGAIN;
+	}
+
+	dev->data->queue_pairs[qp_id] = NULL;
+	err = nitrox_qp_release(qp, ndev->bar_addr);
+	rte_free(qp);
+	NITROX_LOG(DEBUG, "queue %d release done\n", qp_id);
+	return err;
 }
 
 static int nitrox_comp_private_xform_create(struct rte_compressdev *dev,
diff --git a/drivers/compress/nitrox/nitrox_comp.h b/drivers/compress/nitrox/nitrox_comp.h
index 90e1931b05..e49debaf6b 100644
--- a/drivers/compress/nitrox/nitrox_comp.h
+++ b/drivers/compress/nitrox/nitrox_comp.h
@@ -18,6 +18,7 @@
 #define NITROX_COMP_LEVEL_MEDIUM_END 6
 #define NITROX_COMP_LEVEL_BEST_START 7
 #define NITROX_COMP_LEVEL_BEST_END 9
+#define ZIP_INSTR_SIZE 64
 
 struct nitrox_comp_device {
 	struct rte_compressdev *cdev;

From patchwork Fri Mar 1 16:25:52 2024
; Fri, 1 Mar 2024 17:26:21 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 4219mtiX013854; Fri, 1 Mar 2024 08:26:21 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding:content-type; s= pfpt0220; bh=wJXZEJWRLWIBUtoMEQvFFWA+oDQA05ii+qV+YfHkTPM=; b=IKb T4sKM676s0q9Dq1zHzhaw2ZGu7kLsVSEqTBDk4Uo7ZaX4B0K3kMPGsI5Er3iV5dc 21UsCmcvzUA51bQ68WGqc4oaHcK0FcSG0RLs0zzTtTIl5ZtsQbkXk7rgmBvw/qn5 hoQArde3TUiJdzD+9cjEgfc/CnOfG/ao0skT8twUjt85YPSaWTuJMe4ZuxaJFw8O WCTnr0eHwH7UOp9MR8tR+bjUAFE9K9VlCqo6B3r8Cyo12MYEcTT3/u6qYudR3J7p sA55Jl072Xcbx+B2/5QtvTcUDciBmxVYYPldky+F8SJSutAma/L5yvJZ90610Rj3 5c/cJG+Zgx9/ozZN0ow== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3wkcq593ug-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 01 Mar 2024 08:26:20 -0800 (PST) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1258.12; Fri, 1 Mar 2024 08:26:19 -0800 Received: from hyd1399.caveonetworks.com.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1258.12 via Frontend Transport; Fri, 1 Mar 2024 08:26:17 -0800 From: Nagadheeraj Rottela To: , , CC: , Nagadheeraj Rottela Subject: [PATCH v4 6/7] compress/nitrox: support stateless request Date: Fri, 1 Mar 2024 21:55:52 +0530 Message-ID: <20240301162553.30523-7-rnagadheeraj@marvell.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20240301162553.30523-1-rnagadheeraj@marvell.com> References: <20240301162553.30523-1-rnagadheeraj@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: boCgUsBIJpE4_j7o33OXrAyj0OxCcYT2 X-Proofpoint-ORIG-GUID: boCgUsBIJpE4_j7o33OXrAyj0OxCcYT2 
Implement enqueue and dequeue burst operations for stateless request
support.

Signed-off-by: Nagadheeraj Rottela
---
 drivers/compress/nitrox/meson.build          |   1 +
 drivers/compress/nitrox/nitrox_comp.c        |  91 ++-
 drivers/compress/nitrox/nitrox_comp_reqmgr.c | 792 +++++++++++++++++++
 drivers/compress/nitrox/nitrox_comp_reqmgr.h |  10 +
 4 files changed, 885 insertions(+), 9 deletions(-)
 create mode 100644 drivers/compress/nitrox/nitrox_comp_reqmgr.c

diff --git a/drivers/compress/nitrox/meson.build b/drivers/compress/nitrox/meson.build
index f137303689..2c35aba60b 100644
--- a/drivers/compress/nitrox/meson.build
+++ b/drivers/compress/nitrox/meson.build
@@ -10,6 +10,7 @@ deps += ['common_nitrox', 'bus_pci', 'compressdev']

 sources += files(
         'nitrox_comp.c',
+        'nitrox_comp_reqmgr.c',
 )

 includes += include_directories('../../common/nitrox')
diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c
index 299cb8e783..0ea5ed43ed 100644
--- a/drivers/compress/nitrox/nitrox_comp.c
+++ b/drivers/compress/nitrox/nitrox_comp.c
@@ -187,10 +187,17 @@ static int nitrox_comp_queue_pair_setup(struct rte_compressdev *dev,
 	if (unlikely(err))
 		goto qp_setup_err;

+	qp->sr_mp = nitrox_comp_req_pool_create(dev, qp->count, qp_id,
+						socket_id);
+	if (unlikely(!qp->sr_mp))
+		goto req_pool_err;
+
 	dev->data->queue_pairs[qp_id] = qp;
 	NITROX_LOG(DEBUG, "queue %d setup done\n", qp_id);
 	return 0;

+req_pool_err:
+	nitrox_qp_release(qp, ndev->bar_addr);
 qp_setup_err:
 	rte_free(qp);
 	return err;
@@ -224,6 +231,7 @@ static int nitrox_comp_queue_pair_release(struct rte_compressdev *dev,
dev->data->queue_pairs[qp_id] = NULL; err = nitrox_qp_release(qp, ndev->bar_addr); + nitrox_comp_req_pool_free(qp->sr_mp); rte_free(qp); NITROX_LOG(DEBUG, "queue %d release done\n", qp_id); return err; @@ -349,24 +357,89 @@ static int nitrox_comp_private_xform_free(struct rte_compressdev *dev, return 0; } -static uint16_t nitrox_comp_dev_enq_burst(void *qp, +static int nitrox_enq_single_op(struct nitrox_qp *qp, struct rte_comp_op *op) +{ + struct nitrox_softreq *sr; + int err; + + if (unlikely(rte_mempool_get(qp->sr_mp, (void **)&sr))) + return -ENOMEM; + + err = nitrox_process_comp_req(op, sr); + if (unlikely(err)) { + rte_mempool_put(qp->sr_mp, sr); + return err; + } + + nitrox_qp_enqueue(qp, nitrox_comp_instr_addr(sr), sr); + return 0; +} + +static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair, struct rte_comp_op **ops, uint16_t nb_ops) { - RTE_SET_USED(qp); - RTE_SET_USED(ops); - RTE_SET_USED(nb_ops); + struct nitrox_qp *qp = queue_pair; + uint16_t free_slots = 0; + uint16_t cnt = 0; + bool err = false; + + free_slots = nitrox_qp_free_count(qp); + if (nb_ops > free_slots) + nb_ops = free_slots; + + for (cnt = 0; cnt < nb_ops; cnt++) { + if (unlikely(nitrox_enq_single_op(qp, ops[cnt]))) { + err = true; + break; + } + } + + nitrox_ring_dbell(qp, cnt); + qp->stats.enqueued_count += cnt; + if (unlikely(err)) + qp->stats.enqueue_err_count++; + + return cnt; +} + +static int nitrox_deq_single_op(struct nitrox_qp *qp, + struct rte_comp_op **op_ptr) +{ + struct nitrox_softreq *sr; + int err; + + sr = nitrox_qp_get_softreq(qp); + err = nitrox_check_comp_req(sr, op_ptr); + if (err == -EAGAIN) + return err; + + nitrox_qp_dequeue(qp); + rte_mempool_put(qp->sr_mp, sr); + if (err == 0) + qp->stats.dequeued_count++; + else + qp->stats.dequeue_err_count++; + return 0; } -static uint16_t nitrox_comp_dev_deq_burst(void *qp, +static uint16_t nitrox_comp_dev_deq_burst(void *queue_pair, struct rte_comp_op **ops, uint16_t nb_ops) { - RTE_SET_USED(qp); - RTE_SET_USED(ops); - 
RTE_SET_USED(nb_ops); - return 0; + struct nitrox_qp *qp = queue_pair; + uint16_t filled_slots = nitrox_qp_used_count(qp); + int cnt = 0; + + if (nb_ops > filled_slots) + nb_ops = filled_slots; + + for (cnt = 0; cnt < nb_ops; cnt++) + if (nitrox_deq_single_op(qp, &ops[cnt])) + break; + + return cnt; } static struct rte_compressdev_ops nitrox_compressdev_ops = { diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.c b/drivers/compress/nitrox/nitrox_comp_reqmgr.c new file mode 100644 index 0000000000..5ad1a4439a --- /dev/null +++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.c @@ -0,0 +1,792 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. + */ + +#include +#include +#include + +#include "nitrox_comp_reqmgr.h" +#include "nitrox_logs.h" +#include "rte_comp.h" + +#define NITROX_ZIP_SGL_COUNT 16 +#define NITROX_ZIP_MAX_ZPTRS 2048 +#define NITROX_ZIP_MAX_DATASIZE ((1 << 24) - 1) +#define NITROX_ZIP_MAX_ONFSIZE 1024 +#define CMD_TIMEOUT 2 + +union nitrox_zip_instr_word0 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 8; + uint64_t tol : 24; + uint64_t raz1 : 5; + uint64_t exn : 3; + uint64_t raz2 : 1; + uint64_t exbits : 7; + uint64_t raz3 : 3; + uint64_t ca : 1; + uint64_t sf : 1; + uint64_t ss : 2; + uint64_t cc : 2; + uint64_t ef : 1; + uint64_t bf : 1; + uint64_t co : 1; + uint64_t raz4 : 1; + uint64_t ds : 1; + uint64_t dg : 1; + uint64_t hg : 1; +#else + uint64_t hg : 1; + uint64_t dg : 1; + uint64_t ds : 1; + uint64_t raz4 : 1; + uint64_t co : 1; + uint64_t bf : 1; + uint64_t ef : 1; + uint64_t cc : 2; + uint64_t ss : 2; + uint64_t sf : 1; + uint64_t ca : 1; + uint64_t raz3 : 3; + uint64_t exbits : 7; + uint64_t raz2 : 1; + uint64_t exn : 3; + uint64_t raz1 : 5; + uint64_t tol : 24; + uint64_t raz0 : 8; +#endif + + }; +}; + +union nitrox_zip_instr_word1 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t hl : 16; + uint64_t raz0 : 16; + uint64_t adlercrc32 : 32; +#else + 
uint64_t adlercrc32 : 32; + uint64_t raz0 : 16; + uint64_t hl : 16; +#endif + }; +}; + +union nitrox_zip_instr_word2 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 20; + uint64_t cptr : 44; +#else + uint64_t cptr : 44; + uint64_t raz0 : 20; +#endif + }; +}; + +union nitrox_zip_instr_word3 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 4; + uint64_t hlen : 16; + uint64_t hptr : 44; +#else + uint64_t hptr : 44; + uint64_t hlen : 16; + uint64_t raz0 : 4; +#endif + }; +}; + +union nitrox_zip_instr_word4 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 4; + uint64_t ilen : 16; + uint64_t iptr : 44; +#else + uint64_t iptr : 44; + uint64_t ilen : 16; + uint64_t raz0 : 4; +#endif + }; +}; + +union nitrox_zip_instr_word5 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 4; + uint64_t olen : 16; + uint64_t optr : 44; +#else + uint64_t optr : 44; + uint64_t olen : 16; + uint64_t raz0 : 4; +#endif + }; +}; + +union nitrox_zip_instr_word6 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 20; + uint64_t rptr : 44; +#else + uint64_t rptr : 44; + uint64_t raz0 : 20; +#endif + }; +}; + +union nitrox_zip_instr_word7 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t grp : 3; + uint64_t raz0 : 41; + uint64_t addr_msb: 20; +#else + uint64_t addr_msb: 20; + uint64_t raz0 : 41; + uint64_t grp : 3; +#endif + }; +}; + +struct nitrox_zip_instr { + union nitrox_zip_instr_word0 w0; + union nitrox_zip_instr_word1 w1; + union nitrox_zip_instr_word2 w2; + union nitrox_zip_instr_word3 w3; + union nitrox_zip_instr_word4 w4; + union nitrox_zip_instr_word5 w5; + union nitrox_zip_instr_word6 w6; + union nitrox_zip_instr_word7 w7; +}; + +union nitrox_zip_result_word0 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t crc32 : 32; + uint64_t adler32: 32; +#else + 
uint64_t adler32: 32; + uint64_t crc32 : 32; +#endif + }; +}; + +union nitrox_zip_result_word1 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t tbyteswritten : 32; + uint64_t tbytesread : 32; +#else + uint64_t tbytesread : 32; + uint64_t tbyteswritten : 32; +#endif + }; +}; + +union nitrox_zip_result_word2 { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t tbits : 32; + uint64_t raz0 : 5; + uint64_t exn : 3; + uint64_t raz1 : 1; + uint64_t exbits : 7; + uint64_t raz2 : 7; + uint64_t ef : 1; + uint64_t compcode: 8; +#else + uint64_t compcode: 8; + uint64_t ef : 1; + uint64_t raz2 : 7; + uint64_t exbits : 7; + uint64_t raz1 : 1; + uint64_t exn : 3; + uint64_t raz0 : 5; + uint64_t tbits : 32; +#endif + }; +}; + +struct nitrox_zip_result { + union nitrox_zip_result_word0 w0; + union nitrox_zip_result_word1 w1; + union nitrox_zip_result_word2 w2; +}; + +union nitrox_zip_zptr { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t raz0 : 3; + uint64_t le : 1; + uint64_t length : 16; + uint64_t addr : 44; +#else + uint64_t addr : 44; + uint64_t length : 16; + uint64_t le : 1; + uint64_t raz0 : 3; +#endif + } s; +}; + +struct nitrox_zip_iova_addr { + union { + uint64_t u64; + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t addr_msb: 20; + uint64_t addr : 44; +#else + uint64_t addr : 44; + uint64_t addr_msb: 20; +#endif + } zda; + + struct { +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint64_t addr_msb: 20; + uint64_t addr : 41; + uint64_t align_8bytes: 3; +#else + uint64_t align_8bytes: 3; + uint64_t addr : 41; + uint64_t addr_msb: 20; +#endif + } z8a; + }; +}; + +enum nitrox_zip_comp_code { + NITROX_CC_NOTDONE = 0, + NITROX_CC_SUCCESS = 1, + NITROX_CC_DTRUNC = 2, + NITROX_CC_STOP = 3, + NITROX_CC_ITRUNK = 4, + NITROX_CC_RBLOCK = 5, + NITROX_CC_NLEN = 6, + NITROX_CC_BADCODE = 7, + NITROX_CC_BADCODE2 = 8, + NITROX_CC_ZERO_LEN = 9, + NITROX_CC_PARITY = 10, + NITROX_CC_FATAL = 11, + 
NITROX_CC_TIMEOUT = 12, + NITROX_CC_NPCI_ERR = 13, +}; + +struct nitrox_sgtable { + union nitrox_zip_zptr *sgl; + uint64_t addr_msb; + uint32_t total_bytes; + uint16_t nb_sgls; + uint16_t filled_sgls; +}; + +struct nitrox_softreq { + struct nitrox_zip_instr instr; + struct nitrox_zip_result zip_res __rte_aligned(8); + uint8_t decomp_threshold[NITROX_ZIP_MAX_ONFSIZE]; + struct rte_comp_op *op; + struct nitrox_sgtable src; + struct nitrox_sgtable dst; + struct nitrox_comp_xform xform; + uint64_t timeout; +}; + +static int create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl, + struct rte_mbuf *mbuf, uint32_t off, + uint32_t datalen, uint8_t extra_segs, + int socket_id) +{ + struct rte_mbuf *m; + union nitrox_zip_zptr *sgl; + struct nitrox_zip_iova_addr zip_addr; + uint16_t nb_segs; + uint16_t i; + uint32_t mlen; + + if (unlikely(datalen > NITROX_ZIP_MAX_DATASIZE)) { + NITROX_LOG(ERR, "Unsupported datalen %d, max supported %d\n", + datalen, NITROX_ZIP_MAX_DATASIZE); + return -ENOTSUP; + } + + nb_segs = mbuf->nb_segs + extra_segs; + for (m = mbuf; m && off > rte_pktmbuf_data_len(m); m = m->next) { + off -= rte_pktmbuf_data_len(m); + nb_segs--; + } + + if (unlikely(nb_segs > NITROX_ZIP_MAX_ZPTRS)) { + NITROX_LOG(ERR, "Mbuf has more segments %d than supported\n", + nb_segs); + return -ENOTSUP; + } + + if (unlikely(nb_segs > sgtbl->nb_sgls)) { + union nitrox_zip_zptr *sgl; + + NITROX_LOG(INFO, "Mbuf has more segs %d than allocated %d\n", + nb_segs, sgtbl->nb_sgls); + sgl = rte_realloc_socket(sgtbl->sgl, + sizeof(*sgtbl->sgl) * nb_segs, + 8, socket_id); + if (unlikely(!sgl)) { + NITROX_LOG(ERR, "Failed to expand sglist memory\n"); + return -ENOMEM; + } + + sgtbl->sgl = sgl; + sgtbl->nb_sgls = nb_segs; + } + + sgtbl->filled_sgls = 0; + sgtbl->total_bytes = 0; + sgl = sgtbl->sgl; + if (!m) + return 0; + + mlen = rte_pktmbuf_data_len(m) - off; + if (datalen <= mlen) + mlen = datalen; + + i = 0; + zip_addr.u64 = rte_pktmbuf_iova_offset(m, off); + sgl[i].s.addr = 
zip_addr.zda.addr; + sgl[i].s.length = mlen; + sgl[i].s.le = 0; + sgtbl->total_bytes += mlen; + sgtbl->addr_msb = zip_addr.zda.addr_msb; + datalen -= mlen; + i++; + for (m = m->next; m && datalen; m = m->next) { + mlen = rte_pktmbuf_data_len(m) < datalen ? + rte_pktmbuf_data_len(m) : datalen; + zip_addr.u64 = rte_pktmbuf_iova(m); + if (unlikely(zip_addr.zda.addr_msb != sgtbl->addr_msb)) { + NITROX_LOG(ERR, "zip_ptrs have different msb addr\n"); + return -ENOTSUP; + } + + sgl[i].s.addr = zip_addr.zda.addr; + sgl[i].s.length = mlen; + sgl[i].s.le = 0; + sgtbl->total_bytes += mlen; + datalen -= mlen; + i++; + } + + sgtbl->filled_sgls = i; + return 0; +} + +static int softreq_init(struct nitrox_softreq *sr) +{ + struct rte_mempool *mp; + int err; + + mp = rte_mempool_from_obj(sr); + if (unlikely(mp == NULL)) + return -EINVAL; + + err = create_sglist_from_mbuf(&sr->src, sr->op->m_src, + sr->op->src.offset, + sr->op->src.length, 0, mp->socket_id); + if (unlikely(err)) + return err; + + err = create_sglist_from_mbuf(&sr->dst, sr->op->m_dst, + sr->op->dst.offset, + rte_pktmbuf_pkt_len(sr->op->m_dst) - sr->op->dst.offset, + (sr->xform.op == NITROX_COMP_OP_DECOMPRESS) ? 
1 : 0, + mp->socket_id); + if (unlikely(err)) + return err; + + if (sr->xform.op == NITROX_COMP_OP_DECOMPRESS) { + struct nitrox_zip_iova_addr zip_addr; + int i; + + zip_addr.u64 = rte_mempool_virt2iova(sr) + + offsetof(struct nitrox_softreq, decomp_threshold); + i = sr->dst.filled_sgls; + sr->dst.sgl[i].s.addr = zip_addr.zda.addr; + sr->dst.sgl[i].s.length = NITROX_ZIP_MAX_ONFSIZE; + sr->dst.sgl[i].s.le = 0; + sr->dst.total_bytes += NITROX_ZIP_MAX_ONFSIZE; + sr->dst.filled_sgls++; + } + + return 0; +} + +static void nitrox_zip_instr_to_b64(struct nitrox_softreq *sr) +{ + struct nitrox_zip_instr *instr = &sr->instr; + int i; + + for (i = 0; instr->w0.dg && (i < instr->w4.ilen); i++) + sr->src.sgl[i].u64 = rte_cpu_to_be_64(sr->src.sgl[i].u64); + + for (i = 0; instr->w0.ds && (i < instr->w5.olen); i++) + sr->dst.sgl[i].u64 = rte_cpu_to_be_64(sr->dst.sgl[i].u64); + + instr->w0.u64 = rte_cpu_to_be_64(instr->w0.u64); + instr->w1.u64 = rte_cpu_to_be_64(instr->w1.u64); + instr->w2.u64 = rte_cpu_to_be_64(instr->w2.u64); + instr->w3.u64 = rte_cpu_to_be_64(instr->w3.u64); + instr->w4.u64 = rte_cpu_to_be_64(instr->w4.u64); + instr->w5.u64 = rte_cpu_to_be_64(instr->w5.u64); + instr->w6.u64 = rte_cpu_to_be_64(instr->w6.u64); + instr->w7.u64 = rte_cpu_to_be_64(instr->w7.u64); +} + +static int process_zip_stateless(struct nitrox_softreq *sr) +{ + struct nitrox_zip_instr *instr; + struct nitrox_comp_xform *xform; + struct nitrox_zip_iova_addr zip_addr; + uint64_t iptr_msb, optr_msb, rptr_msb; + int err; + + xform = sr->op->private_xform; + if (unlikely(xform == NULL)) { + NITROX_LOG(ERR, "Invalid stateless comp op\n"); + return -EINVAL; + } + + if (unlikely(xform->op == NITROX_COMP_OP_COMPRESS && + sr->op->flush_flag != RTE_COMP_FLUSH_FULL && + sr->op->flush_flag != RTE_COMP_FLUSH_FINAL)) { + NITROX_LOG(ERR, "Invalid flush flag %d in stateless op\n", + sr->op->flush_flag); + return -EINVAL; + } + + sr->xform = *xform; + err = softreq_init(sr); + if (unlikely(err)) + return err; + 
+ instr = &sr->instr; + memset(instr, 0, sizeof(*instr)); + /* word 0 */ + instr->w0.tol = sr->dst.total_bytes; + instr->w0.exn = 0; + instr->w0.exbits = 0; + instr->w0.ca = 0; + if (xform->op == NITROX_COMP_OP_DECOMPRESS || + sr->op->flush_flag == RTE_COMP_FLUSH_FULL) + instr->w0.sf = 1; + else + instr->w0.sf = 0; + + instr->w0.ss = xform->level; + instr->w0.cc = xform->algo; + if (xform->op == NITROX_COMP_OP_COMPRESS && + sr->op->flush_flag == RTE_COMP_FLUSH_FINAL) + instr->w0.ef = 1; + else + instr->w0.ef = 0; + + instr->w0.bf = 1; + instr->w0.co = xform->op; + if (sr->dst.filled_sgls > 1) + instr->w0.ds = 1; + else + instr->w0.ds = 0; + + if (sr->src.filled_sgls > 1) + instr->w0.dg = 1; + else + instr->w0.dg = 0; + + instr->w0.hg = 0; + + /* word 1 */ + instr->w1.hl = 0; + if (sr->op->input_chksum != 0) + instr->w1.adlercrc32 = sr->op->input_chksum; + else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32) + instr->w1.adlercrc32 = 1; + else if (xform->chksum_type == NITROX_CHKSUM_TYPE_CRC32) + instr->w1.adlercrc32 = 0; + + /* word 2 */ + instr->w2.cptr = 0; + + /* word 3 */ + instr->w3.hlen = 0; + instr->w3.hptr = 0; + + /* word 4 */ + if (sr->src.filled_sgls == 1) { + instr->w4.ilen = sr->src.sgl[0].s.length; + instr->w4.iptr = sr->src.sgl[0].s.addr; + iptr_msb = sr->src.addr_msb; + } else { + zip_addr.u64 = rte_malloc_virt2iova(sr->src.sgl); + instr->w4.ilen = sr->src.filled_sgls; + instr->w4.iptr = zip_addr.zda.addr; + iptr_msb = zip_addr.zda.addr_msb; + } + + /* word 5 */ + if (sr->dst.filled_sgls == 1) { + instr->w5.olen = sr->dst.sgl[0].s.length; + instr->w5.optr = sr->dst.sgl[0].s.addr; + optr_msb = sr->dst.addr_msb; + } else { + zip_addr.u64 = rte_malloc_virt2iova(sr->dst.sgl); + instr->w5.olen = sr->dst.filled_sgls; + instr->w5.optr = zip_addr.zda.addr; + optr_msb = zip_addr.zda.addr_msb; + } + + /* word 6 */ + memset(&sr->zip_res, 0, sizeof(sr->zip_res)); + zip_addr.u64 = rte_mempool_virt2iova(sr) + + offsetof(struct nitrox_softreq, zip_res); + 
instr->w6.rptr = zip_addr.zda.addr; + rptr_msb = zip_addr.zda.addr_msb; + + if (iptr_msb != optr_msb || iptr_msb != rptr_msb) { + NITROX_LOG(ERR, "addr_msb is not same for all addresses\n"); + return -ENOTSUP; + } + + /* word 7 */ + instr->w7.addr_msb = iptr_msb; + instr->w7.grp = 0; + + nitrox_zip_instr_to_b64(sr); + return 0; +} + +static int process_zip_request(struct nitrox_softreq *sr) +{ + int err; + + switch (sr->op->op_type) { + case RTE_COMP_OP_STATELESS: + err = process_zip_stateless(sr); + break; + default: + err = -EINVAL; + break; + } + + return err; +} + +int +nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr) +{ + int err; + + sr->op = op; + err = process_zip_request(sr); + if (unlikely(err)) + goto err_exit; + + sr->timeout = rte_get_timer_cycles() + CMD_TIMEOUT * rte_get_timer_hz(); + return 0; +err_exit: + if (err == -ENOMEM) + sr->op->status = RTE_COMP_OP_STATUS_ERROR; + else + sr->op->status = RTE_COMP_OP_STATUS_INVALID_ARGS; + + return err; +} + +static struct nitrox_zip_result zip_result_to_cpu64(struct nitrox_zip_result *r) +{ + struct nitrox_zip_result out_res; + + out_res.w2.u64 = rte_be_to_cpu_64(r->w2.u64); + out_res.w1.u64 = rte_be_to_cpu_64(r->w1.u64); + out_res.w0.u64 = rte_be_to_cpu_64(r->w0.u64); + return out_res; +} + +int +nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op) +{ + struct nitrox_zip_result zip_res; + int output_unused_bytes; + int err = 0; + + zip_res = zip_result_to_cpu64(&sr->zip_res); + if (zip_res.w2.compcode == NITROX_CC_NOTDONE) { + if (rte_get_timer_cycles() >= sr->timeout) { + NITROX_LOG(ERR, "Op timedout\n"); + sr->op->status = RTE_COMP_OP_STATUS_ERROR; + err = -ETIMEDOUT; + goto exit; + } else { + return -EAGAIN; + } + } + + if (unlikely(zip_res.w2.compcode != NITROX_CC_SUCCESS)) { + struct rte_comp_op *op = sr->op; + + NITROX_LOG(ERR, "Op dequeue error 0x%x\n", + zip_res.w2.compcode); + if (zip_res.w2.compcode == NITROX_CC_STOP || + zip_res.w2.compcode == 
NITROX_CC_DTRUNC) + op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED; + else + op->status = RTE_COMP_OP_STATUS_ERROR; + + op->consumed = 0; + op->produced = 0; + err = -EFAULT; + goto exit; + } + + output_unused_bytes = sr->dst.total_bytes - zip_res.w1.tbyteswritten; + if (unlikely(sr->xform.op == NITROX_COMP_OP_DECOMPRESS && + output_unused_bytes < NITROX_ZIP_MAX_ONFSIZE)) { + NITROX_LOG(ERR, "TOL %d, Total bytes written %d\n", + sr->dst.total_bytes, zip_res.w1.tbyteswritten); + sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED; + sr->op->consumed = 0; + sr->op->produced = sr->dst.total_bytes - NITROX_ZIP_MAX_ONFSIZE; + err = -EIO; + goto exit; + } + + if (sr->xform.op == NITROX_COMP_OP_COMPRESS && + sr->op->flush_flag == RTE_COMP_FLUSH_FINAL && + zip_res.w2.exn) { + uint32_t datalen = zip_res.w1.tbyteswritten; + uint32_t off = sr->op->dst.offset; + struct rte_mbuf *m = sr->op->m_dst; + uint32_t mlen; + uint8_t *last_byte; + + for (; m && off > rte_pktmbuf_data_len(m); m = m->next) + off -= rte_pktmbuf_data_len(m); + + mlen = rte_pktmbuf_data_len(m) - off; + for (; m && (datalen > mlen); m = m->next) + datalen -= mlen; + + last_byte = rte_pktmbuf_mtod_offset(m, uint8_t *, datalen - 1); + *last_byte = zip_res.w2.exbits & 0xFF; + } + + sr->op->consumed = zip_res.w1.tbytesread; + sr->op->produced = zip_res.w1.tbyteswritten; + if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_CRC32) + sr->op->output_chksum = zip_res.w0.crc32; + else if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_ADLER32) + sr->op->output_chksum = zip_res.w0.adler32; + + sr->op->status = RTE_COMP_OP_STATUS_SUCCESS; + err = 0; +exit: + *op = sr->op; + return err; +} + +void * +nitrox_comp_instr_addr(struct nitrox_softreq *sr) +{ + return &sr->instr; +} + +static void req_pool_obj_free(struct rte_mempool *mp, void *opaque, void *obj, + unsigned int obj_idx) +{ + struct nitrox_softreq *sr; + + RTE_SET_USED(mp); + RTE_SET_USED(opaque); + RTE_SET_USED(obj_idx); + sr = obj; + 
rte_free(sr->src.sgl); + sr->src.sgl = NULL; + rte_free(sr->dst.sgl); + sr->dst.sgl = NULL; +} + +void +nitrox_comp_req_pool_free(struct rte_mempool *mp) +{ + rte_mempool_obj_iter(mp, req_pool_obj_free, NULL); + rte_mempool_free(mp); +} + +static void req_pool_obj_init(struct rte_mempool *mp, void *arg, void *obj, + unsigned int obj_idx) +{ + struct nitrox_softreq *sr; + int *err = arg; + + RTE_SET_USED(mp); + RTE_SET_USED(obj_idx); + sr = obj; + sr->src.sgl = rte_zmalloc_socket(NULL, + sizeof(*sr->src.sgl) * NITROX_ZIP_SGL_COUNT, + 8, mp->socket_id); + sr->dst.sgl = rte_zmalloc_socket(NULL, + sizeof(*sr->dst.sgl) * NITROX_ZIP_SGL_COUNT, + 8, mp->socket_id); + if (sr->src.sgl == NULL || sr->dst.sgl == NULL) { + NITROX_LOG(ERR, "Failed to allocate zip_sgl memory\n"); + *err = -ENOMEM; + } + + sr->src.nb_sgls = NITROX_ZIP_SGL_COUNT; + sr->src.filled_sgls = 0; + sr->dst.nb_sgls = NITROX_ZIP_SGL_COUNT; + sr->dst.filled_sgls = 0; +} + +struct rte_mempool * +nitrox_comp_req_pool_create(struct rte_compressdev *dev, uint32_t nobjs, + uint16_t qp_id, int socket_id) +{ + char softreq_pool_name[RTE_RING_NAMESIZE]; + struct rte_mempool *mp; + int err = 0; + + snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_sr_%d", + dev->data->name, qp_id); + mp = rte_mempool_create(softreq_pool_name, + RTE_ALIGN_MUL_CEIL(nobjs, 64), + sizeof(struct nitrox_softreq), + 64, 0, NULL, NULL, req_pool_obj_init, &err, + socket_id, 0); + if (unlikely(!mp)) + NITROX_LOG(ERR, "Failed to create req pool, qid %d, err %d\n", + qp_id, rte_errno); + + if (unlikely(err)) { + nitrox_comp_req_pool_free(mp); + return NULL; + } + + return mp; +} diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.h b/drivers/compress/nitrox/nitrox_comp_reqmgr.h index 14f35a1e5b..07c65f0d5e 100644 --- a/drivers/compress/nitrox/nitrox_comp_reqmgr.h +++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.h @@ -5,6 +5,8 @@ #ifndef _NITROX_COMP_REQMGR_H_ #define _NITROX_COMP_REQMGR_H_ +struct nitrox_softreq; + enum nitrox_comp_op { 
 	NITROX_COMP_OP_DECOMPRESS,
 	NITROX_COMP_OP_COMPRESS,
@@ -37,4 +39,12 @@ struct nitrox_comp_xform {
 	enum nitrox_chksum_type chksum_type;
 };

+int nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr);
+int nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op);
+void *nitrox_comp_instr_addr(struct nitrox_softreq *sr);
+struct rte_mempool *nitrox_comp_req_pool_create(struct rte_compressdev *cdev,
+						uint32_t nobjs, uint16_t qp_id,
+						int socket_id);
+void nitrox_comp_req_pool_free(struct rte_mempool *mp);
+
 #endif /* _NITROX_COMP_REQMGR_H_ */

From patchwork Fri Mar 1 16:25:53 2024
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 137689
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela
Subject: [PATCH v4 7/7] compress/nitrox: support stateful request
Date: Fri, 1 Mar 2024 21:55:53 +0530
Message-ID: <20240301162553.30523-8-rnagadheeraj@marvell.com>
In-Reply-To: <20240301162553.30523-1-rnagadheeraj@marvell.com>
References: <20240301162553.30523-1-rnagadheeraj@marvell.com>

Implement enqueue and dequeue burst operations for stateful request
support.
Signed-off-by: Nagadheeraj Rottela --- drivers/compress/nitrox/nitrox_comp.c | 97 +++- drivers/compress/nitrox/nitrox_comp.h | 1 + drivers/compress/nitrox/nitrox_comp_reqmgr.c | 550 ++++++++++++++++--- drivers/compress/nitrox/nitrox_comp_reqmgr.h | 8 + 4 files changed, 576 insertions(+), 80 deletions(-) diff --git a/drivers/compress/nitrox/nitrox_comp.c b/drivers/compress/nitrox/nitrox_comp.c index 0ea5ed43ed..97d2c4a0e8 100644 --- a/drivers/compress/nitrox/nitrox_comp.c +++ b/drivers/compress/nitrox/nitrox_comp.c @@ -32,7 +32,9 @@ static const struct rte_compressdev_capabilities RTE_COMP_FF_SHAREABLE_PRIV_XFORM | RTE_COMP_FF_OOP_SGL_IN_SGL_OUT | RTE_COMP_FF_OOP_SGL_IN_LB_OUT | - RTE_COMP_FF_OOP_LB_IN_SGL_OUT, + RTE_COMP_FF_OOP_LB_IN_SGL_OUT | + RTE_COMP_FF_STATEFUL_COMPRESSION | + RTE_COMP_FF_STATEFUL_DECOMPRESSION, .window_size = { .min = NITROX_COMP_WINDOW_SIZE_MIN, .max = NITROX_COMP_WINDOW_SIZE_MAX, @@ -334,6 +336,13 @@ static int nitrox_comp_private_xform_create(struct rte_compressdev *dev, goto err_exit; } + nxform->context = NULL; + nxform->history_window = NULL; + nxform->window_size = 0; + nxform->hlen = 0; + nxform->exn = 0; + nxform->exbits = 0; + nxform->bf = true; return 0; err_exit: memset(nxform, 0, sizeof(*nxform)); @@ -357,6 +366,74 @@ static int nitrox_comp_private_xform_free(struct rte_compressdev *dev, return 0; } +static int nitrox_comp_stream_free(struct rte_compressdev *dev, void *stream) +{ + struct nitrox_comp_xform *nxform = stream; + + if (unlikely(nxform == NULL)) + return -EINVAL; + + rte_free(nxform->history_window); + nxform->history_window = NULL; + rte_free(nxform->context); + nxform->context = NULL; + return nitrox_comp_private_xform_free(dev, stream); +} + +static int nitrox_comp_stream_create(struct rte_compressdev *dev, + const struct rte_comp_xform *xform, void **stream) +{ + int err; + struct nitrox_comp_xform *nxform; + struct nitrox_comp_device *comp_dev = dev->data->dev_private; + + err = 
nitrox_comp_private_xform_create(dev, xform, stream);
+	if (unlikely(err))
+		return err;
+
+	nxform = *stream;
+	if (xform->type == RTE_COMP_COMPRESS) {
+		uint8_t window_size = xform->compress.window_size;
+
+		if (unlikely(window_size < NITROX_COMP_WINDOW_SIZE_MIN ||
+			     window_size > NITROX_COMP_WINDOW_SIZE_MAX)) {
+			NITROX_LOG(ERR, "Invalid window size %d\n",
+				   window_size);
+			return -EINVAL;
+		}
+
+		if (window_size == NITROX_COMP_WINDOW_SIZE_MAX)
+			nxform->window_size = NITROX_CONSTANTS_MAX_SEARCH_DEPTH;
+		else
+			nxform->window_size = RTE_BIT32(window_size);
+	} else {
+		nxform->window_size = NITROX_DEFAULT_DEFLATE_SEARCH_DEPTH;
+	}
+
+	nxform->history_window = rte_zmalloc_socket(NULL, nxform->window_size,
+						    8, comp_dev->xform_pool->socket_id);
+	if (unlikely(nxform->history_window == NULL)) {
+		err = -ENOMEM;
+		goto err_exit;
+	}
+
+	if (xform->type == RTE_COMP_COMPRESS)
+		return 0;
+
+	nxform->context = rte_zmalloc_socket(NULL,
+					     NITROX_DECOMP_CTX_SIZE, 8,
+					     comp_dev->xform_pool->socket_id);
+	if (unlikely(nxform->context == NULL)) {
+		err = -ENOMEM;
+		goto err_exit;
+	}
+
+	return 0;
+err_exit:
+	nitrox_comp_stream_free(dev, *stream);
+	return err;
+}
+
 static int nitrox_enq_single_op(struct nitrox_qp *qp, struct rte_comp_op *op)
 {
 	struct nitrox_softreq *sr;
@@ -371,8 +448,12 @@ static int nitrox_enq_single_op(struct nitrox_qp *qp, struct rte_comp_op *op)
 		return err;
 	}
 
-	nitrox_qp_enqueue(qp, nitrox_comp_instr_addr(sr), sr);
-	return 0;
+	if (op->status == RTE_COMP_OP_STATUS_SUCCESS)
+		err = nitrox_qp_enqueue_sr(qp, sr);
+	else
+		nitrox_qp_enqueue(qp, nitrox_comp_instr_addr(sr), sr);
+
+	return err;
 }
 
 static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair,
@@ -382,6 +463,7 @@ static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair,
 	struct nitrox_qp *qp = queue_pair;
 	uint16_t free_slots = 0;
 	uint16_t cnt = 0;
+	uint16_t dbcnt = 0;
 	bool err = false;
 
 	free_slots = nitrox_qp_free_count(qp);
@@ -393,9 +475,12 @@ static uint16_t nitrox_comp_dev_enq_burst(void *queue_pair,
 			err = true;
 			break;
 		}
+
+		if (ops[cnt]->status != RTE_COMP_OP_STATUS_SUCCESS)
+			dbcnt++;
 	}
 
-	nitrox_ring_dbell(qp, cnt);
+	nitrox_ring_dbell(qp, dbcnt);
 	qp->stats.enqueued_count += cnt;
 	if (unlikely(err))
 		qp->stats.enqueue_err_count++;
@@ -458,8 +543,8 @@ static struct rte_compressdev_ops nitrox_compressdev_ops = {
 	.private_xform_create = nitrox_comp_private_xform_create,
 	.private_xform_free = nitrox_comp_private_xform_free,
 
-	.stream_create = NULL,
-	.stream_free = NULL
+	.stream_create = nitrox_comp_stream_create,
+	.stream_free = nitrox_comp_stream_free,
 };
 
 int
diff --git a/drivers/compress/nitrox/nitrox_comp.h b/drivers/compress/nitrox/nitrox_comp.h
index e49debaf6b..83e5902076 100644
--- a/drivers/compress/nitrox/nitrox_comp.h
+++ b/drivers/compress/nitrox/nitrox_comp.h
@@ -8,6 +8,7 @@
 #define COMPRESSDEV_NAME_NITROX_PMD compress_nitrox
 #define NITROX_DECOMP_CTX_SIZE 2048
 #define NITROX_CONSTANTS_MAX_SEARCH_DEPTH 31744
+#define NITROX_DEFAULT_DEFLATE_SEARCH_DEPTH 32768
 #define NITROX_COMP_WINDOW_SIZE_MIN 1
 #define NITROX_COMP_WINDOW_SIZE_MAX 15
 #define NITROX_COMP_LEVEL_LOWEST_START 1
diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.c b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
index 5ad1a4439a..0a25672d6e 100644
--- a/drivers/compress/nitrox/nitrox_comp_reqmgr.c
+++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.c
@@ -5,11 +5,13 @@
 #include
 #include
 #include
+#include
 
 #include "nitrox_comp_reqmgr.h"
 #include "nitrox_logs.h"
 #include "rte_comp.h"
 
+#define NITROX_INSTR_BUFFER_DEBUG 0
 #define NITROX_ZIP_SGL_COUNT 16
 #define NITROX_ZIP_MAX_ZPTRS 2048
 #define NITROX_ZIP_MAX_DATASIZE ((1 << 24) - 1)
@@ -307,10 +309,217 @@ struct nitrox_softreq {
 	struct rte_comp_op *op;
 	struct nitrox_sgtable src;
 	struct nitrox_sgtable dst;
-	struct nitrox_comp_xform xform;
 	uint64_t timeout;
 };
 
+#if NITROX_INSTR_BUFFER_DEBUG
+static void nitrox_dump_databuf(const char *name, struct rte_mbuf *m,
+				uint32_t off, uint32_t datalen)
+{
+	uint32_t mlen;
+
+	if (!rte_log_can_log(nitrox_logtype, RTE_LOG_DEBUG))
+		return;
+
+	for (; m && off > rte_pktmbuf_data_len(m); m = m->next)
+		off -= rte_pktmbuf_data_len(m);
+
+	mlen = rte_pktmbuf_data_len(m) - off;
+	if (datalen <= mlen)
+		mlen = datalen;
+
+	rte_hexdump(rte_log_get_stream(), name,
+		    rte_pktmbuf_mtod_offset(m, char *, off), mlen);
+	for (m = m->next; m && datalen; m = m->next) {
+		mlen = rte_pktmbuf_data_len(m) < datalen ?
+			rte_pktmbuf_data_len(m) : datalen;
+		rte_hexdump(rte_log_get_stream(), name,
+			    rte_pktmbuf_mtod(m, char *), mlen);
+	}
+
+	NITROX_LOG(DEBUG, "\n");
+}
+
+static void nitrox_dump_zip_instr(struct nitrox_zip_instr *instr,
+				  union nitrox_zip_zptr *hptr_arr,
+				  union nitrox_zip_zptr *iptr_arr,
+				  union nitrox_zip_zptr *optr_arr)
+{
+	uint64_t value;
+	int i = 0;
+
+	NITROX_LOG(DEBUG, "\nZIP instruction..(%p)\n", instr);
+	NITROX_LOG(DEBUG, "\tWORD0 = 0x%016"PRIx64"\n", instr->w0.u64);
+	NITROX_LOG(DEBUG, "\t\tTOL = %d\n", instr->w0.tol);
+	NITROX_LOG(DEBUG, "\t\tEXNUM = %d\n", instr->w0.exn);
+	NITROX_LOG(DEBUG, "\t\tEXBITS = %x\n", instr->w0.exbits);
+	NITROX_LOG(DEBUG, "\t\tCA = %d\n", instr->w0.ca);
+	NITROX_LOG(DEBUG, "\t\tSF = %d\n", instr->w0.sf);
+	NITROX_LOG(DEBUG, "\t\tSS = %d\n", instr->w0.ss);
+	NITROX_LOG(DEBUG, "\t\tCC = %d\n", instr->w0.cc);
+	NITROX_LOG(DEBUG, "\t\tEF = %d\n", instr->w0.ef);
+	NITROX_LOG(DEBUG, "\t\tBF = %d\n", instr->w0.bf);
+	NITROX_LOG(DEBUG, "\t\tCO = %d\n", instr->w0.co);
+	NITROX_LOG(DEBUG, "\t\tDS = %d\n", instr->w0.ds);
+	NITROX_LOG(DEBUG, "\t\tDG = %d\n", instr->w0.dg);
+	NITROX_LOG(DEBUG, "\t\tHG = %d\n", instr->w0.hg);
+	NITROX_LOG(DEBUG, "\n");
+
+	NITROX_LOG(DEBUG, "\tWORD1 = 0x%016"PRIx64"\n", instr->w1.u64);
+	NITROX_LOG(DEBUG, "\t\tHL = %d\n", instr->w1.hl);
+	NITROX_LOG(DEBUG, "\t\tADLERCRC32 = 0x%08x\n", instr->w1.adlercrc32);
+	NITROX_LOG(DEBUG, "\n");
+
+	value = instr->w2.cptr;
+	NITROX_LOG(DEBUG, "\tWORD2 = 0x%016"PRIx64"\n", instr->w2.u64);
+	NITROX_LOG(DEBUG, "\t\tCPTR = 0x%11"PRIx64"\n", value);
+	NITROX_LOG(DEBUG, "\n");
+
+	value = instr->w3.hptr;
+	NITROX_LOG(DEBUG, "\tWORD3 = 0x%016"PRIx64"\n", instr->w3.u64);
+	NITROX_LOG(DEBUG, "\t\tHLEN = %d\n", instr->w3.hlen);
+	NITROX_LOG(DEBUG, "\t\tHPTR = 0x%11"PRIx64"\n", value);
+
+	if (instr->w0.hg && hptr_arr) {
+		for (i = 0; i < instr->w3.hlen; i++) {
+			value = hptr_arr[i].s.addr;
+			NITROX_LOG(DEBUG, "\t\t\tZPTR[%d] : Length = %d Addr = 0x%11"PRIx64"\n",
+				   i, hptr_arr[i].s.length, value);
+		}
+	}
+
+	NITROX_LOG(DEBUG, "\n");
+
+	value = instr->w4.iptr;
+	NITROX_LOG(DEBUG, "\tWORD4 = 0x%016"PRIx64"\n", instr->w4.u64);
+	NITROX_LOG(DEBUG, "\t\tILEN = %d\n", instr->w4.ilen);
+	NITROX_LOG(DEBUG, "\t\tIPTR = 0x%11"PRIx64"\n", value);
+	if (instr->w0.dg && iptr_arr) {
+		for (i = 0; i < instr->w4.ilen; i++) {
+			value = iptr_arr[i].s.addr;
+			NITROX_LOG(DEBUG, "\t\t\tZPTR[%d] : Length = %d Addr = 0x%11"PRIx64"\n",
+				   i, iptr_arr[i].s.length, value);
+		}
+	}
+
+	NITROX_LOG(DEBUG, "\n");
+
+	value = instr->w5.optr;
+	NITROX_LOG(DEBUG, "\tWORD5 = 0x%016"PRIx64"\n", instr->w5.u64);
+	NITROX_LOG(DEBUG, "\t\t OLEN = %d\n", instr->w5.olen);
+	NITROX_LOG(DEBUG, "\t\t OPTR = 0x%11"PRIx64"\n", value);
+	if (instr->w0.ds && optr_arr) {
+		for (i = 0; i < instr->w5.olen; i++) {
+			value = optr_arr[i].s.addr;
+			NITROX_LOG(DEBUG, "\t\t\tZPTR[%d] : Length = %d Addr = 0x%11"PRIx64"\n",
+				   i, optr_arr[i].s.length, value);
+		}
+	}
+
+	NITROX_LOG(DEBUG, "\n");
+
+	value = instr->w6.rptr;
+	NITROX_LOG(DEBUG, "\tWORD6 = 0x%016"PRIx64"\n", instr->w6.u64);
+	NITROX_LOG(DEBUG, "\t\tRPTR = 0x%11"PRIx64"\n", value);
+	NITROX_LOG(DEBUG, "\n");
+
+	NITROX_LOG(DEBUG, "\tWORD7 = 0x%016"PRIx64"\n", instr->w7.u64);
+	NITROX_LOG(DEBUG, "\t\tGRP = %x\n", instr->w7.grp);
+	NITROX_LOG(DEBUG, "\t\tADDR_MSB = 0x%5x\n", instr->w7.addr_msb);
+	NITROX_LOG(DEBUG, "\n");
+}
+
+static void nitrox_dump_zip_result(struct nitrox_zip_instr *instr,
+				   struct nitrox_zip_result *result)
+{
+	NITROX_LOG(DEBUG, "ZIP result..(instr %p)\n", instr);
+	NITROX_LOG(DEBUG, "\tWORD0 = 0x%016"PRIx64"\n", result->w0.u64);
+	NITROX_LOG(DEBUG, "\t\tCRC32 = 0x%8x\n", result->w0.crc32);
+	NITROX_LOG(DEBUG, "\t\tADLER32 = 0x%8x\n", result->w0.adler32);
+	NITROX_LOG(DEBUG, "\n");
+
+	NITROX_LOG(DEBUG, "\tWORD1 = 0x%016"PRIx64"\n", result->w1.u64);
+	NITROX_LOG(DEBUG, "\t\tTBYTESWRITTEN = %u\n", result->w1.tbyteswritten);
+	NITROX_LOG(DEBUG, "\t\tTBYTESREAD = %u\n", result->w1.tbytesread);
+	NITROX_LOG(DEBUG, "\n");
+
+	NITROX_LOG(DEBUG, "\tWORD2 = 0x%016"PRIx64"\n", result->w2.u64);
+	NITROX_LOG(DEBUG, "\t\tTBITS = %u\n", result->w2.tbits);
+	NITROX_LOG(DEBUG, "\t\tEXN = %d\n", result->w2.exn);
+	NITROX_LOG(DEBUG, "\t\tEBITS = %x\n", result->w2.exbits);
+	NITROX_LOG(DEBUG, "\t\tEF = %d\n", result->w2.ef);
+	NITROX_LOG(DEBUG, "\t\tCOMPCODE = 0x%2x\n", result->w2.compcode);
+	NITROX_LOG(DEBUG, "\n");
+}
+#else
+#define nitrox_dump_databuf(name, m, off, datalen)
+#define nitrox_dump_zip_instr(instr, hptr_arr, iptr_arr, optr_arr)
+#define nitrox_dump_zip_result(instr, result)
+#endif
+
+static int handle_zero_length_compression(struct nitrox_softreq *sr,
+					  struct nitrox_comp_xform *xform)
+{
+	union {
+		uint32_t num;
+		uint8_t bytes[4];
+	} fblk;
+	uint32_t dstlen, rlen;
+	struct rte_mbuf *m;
+	uint32_t off;
+	uint32_t mlen;
+	uint32_t i = 0;
+	uint8_t *ptr;
+
+	fblk.num = xform->exn ? (xform->exbits & 0x7F) : 0;
+	fblk.num |= (0x3 << xform->exn);
+	memset(&sr->zip_res, 0, sizeof(sr->zip_res));
+	sr->zip_res.w1.tbytesread = xform->hlen;
+	sr->zip_res.w1.tbyteswritten = 2;
+	sr->zip_res.w2.ef = 1;
+	if (xform->exn == 7)
+		sr->zip_res.w1.tbyteswritten++;
+
+	rlen = sr->zip_res.w1.tbyteswritten;
+	dstlen = rte_pktmbuf_pkt_len(sr->op->m_dst) - sr->op->dst.offset;
+	if (unlikely(dstlen < rlen))
+		return -EIO;
+
+	off = sr->op->dst.offset;
+	for (m = sr->op->m_dst; m && off > rte_pktmbuf_data_len(m); m = m->next)
+		off -= rte_pktmbuf_data_len(m);
+
+	if (unlikely(!m))
+		return -EIO;
+
+	mlen = rte_pktmbuf_data_len(m) - off;
+	if (rlen <= mlen)
+		mlen = rlen;
+
+	ptr = rte_pktmbuf_mtod_offset(m, uint8_t *, off);
+	memcpy(ptr, fblk.bytes, mlen);
+	i += mlen;
+	rlen -= mlen;
+	for (m = m->next; m && rlen; m = m->next) {
+		mlen = rte_pktmbuf_data_len(m) < rlen ?
+			rte_pktmbuf_data_len(m) : rlen;
+		ptr = rte_pktmbuf_mtod(m, uint8_t *);
+		memcpy(ptr, &fblk.bytes[i], mlen);
+		i += mlen;
+		rlen -= mlen;
+	}
+
+	if (unlikely(rlen != 0))
+		return -EIO;
+
+	sr->zip_res.w2.compcode = NITROX_CC_SUCCESS;
+	sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
+	sr->zip_res.w0.u64 = rte_cpu_to_be_64(sr->zip_res.w0.u64);
+	sr->zip_res.w1.u64 = rte_cpu_to_be_64(sr->zip_res.w1.u64);
+	sr->zip_res.w2.u64 = rte_cpu_to_be_64(sr->zip_res.w2.u64);
+	return 0;
+}
+
 static int create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl,
 				   struct rte_mbuf *mbuf, uint32_t off,
 				   uint32_t datalen, uint8_t extra_segs,
@@ -398,10 +607,12 @@ static int create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl,
 	return 0;
 }
 
-static int softreq_init(struct nitrox_softreq *sr)
+static int softreq_init(struct nitrox_softreq *sr,
+			struct nitrox_comp_xform *xform)
 {
 	struct rte_mempool *mp;
 	int err;
+	bool need_decomp_threshold;
 
 	mp = rte_mempool_from_obj(sr);
 	if (unlikely(mp == NULL))
@@ -413,15 +624,17 @@ static int softreq_init(struct nitrox_softreq *sr)
 	if (unlikely(err))
 		return err;
 
+	need_decomp_threshold = (sr->op->op_type == RTE_COMP_OP_STATELESS &&
+				 xform->op == NITROX_COMP_OP_DECOMPRESS);
 	err = create_sglist_from_mbuf(&sr->dst, sr->op->m_dst,
 				      sr->op->dst.offset,
 				      rte_pktmbuf_pkt_len(sr->op->m_dst) - sr->op->dst.offset,
-				      (sr->xform.op == NITROX_COMP_OP_DECOMPRESS) ? 1 : 0,
+				      need_decomp_threshold ? 1 : 0,
 				      mp->socket_id);
 	if (unlikely(err))
 		return err;
 
-	if (sr->xform.op == NITROX_COMP_OP_DECOMPRESS) {
+	if (need_decomp_threshold) {
 		struct nitrox_zip_iova_addr zip_addr;
 		int i;
 
@@ -459,12 +672,12 @@ static void nitrox_zip_instr_to_b64(struct nitrox_softreq *sr)
 	instr->w7.u64 = rte_cpu_to_be_64(instr->w7.u64);
 }
 
-static int process_zip_stateless(struct nitrox_softreq *sr)
+static int process_zip_request(struct nitrox_softreq *sr)
 {
 	struct nitrox_zip_instr *instr;
 	struct nitrox_comp_xform *xform;
 	struct nitrox_zip_iova_addr zip_addr;
-	uint64_t iptr_msb, optr_msb, rptr_msb;
+	uint64_t iptr_msb, optr_msb, rptr_msb, cptr_msb, hptr_msb;
 	int err;
 
 	xform = sr->op->private_xform;
@@ -473,7 +686,14 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 		return -EINVAL;
 	}
 
-	if (unlikely(xform->op == NITROX_COMP_OP_COMPRESS &&
+	if (unlikely(sr->op->op_type == RTE_COMP_OP_STATEFUL &&
+		     xform->op == NITROX_COMP_OP_COMPRESS &&
+		     sr->op->flush_flag == RTE_COMP_FLUSH_FINAL &&
+		     sr->op->src.length == 0))
+		return handle_zero_length_compression(sr, xform);
+
+	if (unlikely(sr->op->op_type == RTE_COMP_OP_STATELESS &&
+		     xform->op == NITROX_COMP_OP_COMPRESS &&
 		     sr->op->flush_flag != RTE_COMP_FLUSH_FULL &&
 		     sr->op->flush_flag != RTE_COMP_FLUSH_FINAL)) {
 		NITROX_LOG(ERR, "Invalid flush flag %d in stateless op\n",
@@ -481,8 +701,7 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 		return -EINVAL;
 	}
 
-	sr->xform = *xform;
-	err = softreq_init(sr);
+	err = softreq_init(sr, xform);
 	if (unlikely(err))
 		return err;
 
@@ -490,10 +709,11 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	memset(instr, 0, sizeof(*instr));
 	/* word 0 */
 	instr->w0.tol = sr->dst.total_bytes;
-	instr->w0.exn = 0;
-	instr->w0.exbits = 0;
+	instr->w0.exn = xform->exn;
+	instr->w0.exbits = xform->exbits;
 	instr->w0.ca = 0;
 	if (xform->op == NITROX_COMP_OP_DECOMPRESS ||
+	    sr->op->flush_flag == RTE_COMP_FLUSH_SYNC ||
 	    sr->op->flush_flag == RTE_COMP_FLUSH_FULL)
 		instr->w0.sf = 1;
 	else
@@ -501,13 +721,12 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	instr->w0.ss = xform->level;
 	instr->w0.cc = xform->algo;
-	if (xform->op == NITROX_COMP_OP_COMPRESS &&
-	    sr->op->flush_flag == RTE_COMP_FLUSH_FINAL)
+	if (sr->op->flush_flag == RTE_COMP_FLUSH_FINAL)
 		instr->w0.ef = 1;
 	else
 		instr->w0.ef = 0;
 
-	instr->w0.bf = 1;
+	instr->w0.bf = xform->bf;
 	instr->w0.co = xform->op;
 	if (sr->dst.filled_sgls > 1)
 		instr->w0.ds = 1;
@@ -522,8 +741,11 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	instr->w0.hg = 0;
 
 	/* word 1 */
-	instr->w1.hl = 0;
-	if (sr->op->input_chksum != 0)
+	instr->w1.hl = xform->hlen;
+	if (sr->op->op_type == RTE_COMP_OP_STATEFUL && !xform->bf)
+		instr->w1.adlercrc32 = xform->chksum;
+	else if (sr->op->op_type == RTE_COMP_OP_STATELESS &&
+		 sr->op->input_chksum != 0)
 		instr->w1.adlercrc32 = sr->op->input_chksum;
 	else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
 		instr->w1.adlercrc32 = 1;
@@ -531,11 +753,23 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 		instr->w1.adlercrc32 = 0;
 
 	/* word 2 */
-	instr->w2.cptr = 0;
+	if (xform->context)
+		zip_addr.u64 = rte_malloc_virt2iova(xform->context);
+	else
+		zip_addr.u64 = 0;
+
+	instr->w2.cptr = zip_addr.zda.addr;
+	cptr_msb = zip_addr.zda.addr_msb;
 
 	/* word 3 */
-	instr->w3.hlen = 0;
-	instr->w3.hptr = 0;
+	instr->w3.hlen = xform->hlen;
+	if (xform->history_window)
+		zip_addr.u64 = rte_malloc_virt2iova(xform->history_window);
+	else
+		zip_addr.u64 = 0;
+
+	instr->w3.hptr = zip_addr.zda.addr;
+	hptr_msb = zip_addr.zda.addr_msb;
 
 	/* word 4 */
 	if (sr->src.filled_sgls == 1) {
@@ -568,7 +802,9 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	instr->w6.rptr = zip_addr.zda.addr;
 	rptr_msb = zip_addr.zda.addr_msb;
 
-	if (iptr_msb != optr_msb || iptr_msb != rptr_msb) {
+	if (unlikely(iptr_msb != optr_msb || iptr_msb != rptr_msb ||
+		     (xform->history_window && (iptr_msb != hptr_msb)) ||
+		     (xform->context && (iptr_msb != cptr_msb)))) {
 		NITROX_LOG(ERR, "addr_msb is not same for all addresses\n");
 		return -ENOTSUP;
 	}
@@ -577,32 +813,20 @@ static int process_zip_stateless(struct nitrox_softreq *sr)
 	instr->w7.addr_msb = iptr_msb;
 	instr->w7.grp = 0;
 
+	nitrox_dump_zip_instr(instr, NULL, sr->src.sgl, sr->dst.sgl);
+	nitrox_dump_databuf("IN", sr->op->m_src, sr->op->src.offset,
+			    sr->op->src.length);
 	nitrox_zip_instr_to_b64(sr);
 	return 0;
 }
 
-static int process_zip_request(struct nitrox_softreq *sr)
-{
-	int err;
-
-	switch (sr->op->op_type) {
-	case RTE_COMP_OP_STATELESS:
-		err = process_zip_stateless(sr);
-		break;
-	default:
-		err = -EINVAL;
-		break;
-	}
-
-	return err;
-}
-
 int nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr)
 {
 	int err;
 
 	sr->op = op;
+	sr->op->status = RTE_COMP_OP_STATUS_NOT_PROCESSED;
 	err = process_zip_request(sr);
 	if (unlikely(err))
 		goto err_exit;
@@ -628,55 +852,239 @@ static struct nitrox_zip_result zip_result_to_cpu64(struct nitrox_zip_result *r)
 	return out_res;
 }
 
-int
-nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op)
+static int post_process_zip_stateless(struct nitrox_softreq *sr,
+				      struct nitrox_comp_xform *xform,
+				      struct nitrox_zip_result *zip_res)
 {
-	struct nitrox_zip_result zip_res;
 	int output_unused_bytes;
-	int err = 0;
-
-	zip_res = zip_result_to_cpu64(&sr->zip_res);
-	if (zip_res.w2.compcode == NITROX_CC_NOTDONE) {
-		if (rte_get_timer_cycles() >= sr->timeout) {
-			NITROX_LOG(ERR, "Op timedout\n");
-			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
-			err = -ETIMEDOUT;
-			goto exit;
-		} else {
-			return -EAGAIN;
-		}
-	}
 
-	if (unlikely(zip_res.w2.compcode != NITROX_CC_SUCCESS)) {
+	if (unlikely(zip_res->w2.compcode != NITROX_CC_SUCCESS)) {
 		struct rte_comp_op *op = sr->op;
 
-		NITROX_LOG(ERR, "Op dequeue error 0x%x\n",
-			   zip_res.w2.compcode);
-		if (zip_res.w2.compcode == NITROX_CC_STOP ||
-		    zip_res.w2.compcode == NITROX_CC_DTRUNC)
+		NITROX_LOG(ERR, "Dequeue error 0x%x\n",
+			   zip_res->w2.compcode);
+		if (zip_res->w2.compcode == NITROX_CC_STOP ||
+		    zip_res->w2.compcode == NITROX_CC_DTRUNC)
 			op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 		else
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 
 		op->consumed = 0;
		op->produced = 0;
-		err = -EFAULT;
-		goto exit;
+		return -EFAULT;
 	}
 
-	output_unused_bytes = sr->dst.total_bytes - zip_res.w1.tbyteswritten;
-	if (unlikely(sr->xform.op == NITROX_COMP_OP_DECOMPRESS &&
+	output_unused_bytes = sr->dst.total_bytes - zip_res->w1.tbyteswritten;
+	if (unlikely(xform->op == NITROX_COMP_OP_DECOMPRESS &&
		     output_unused_bytes < NITROX_ZIP_MAX_ONFSIZE)) {
 		NITROX_LOG(ERR, "TOL %d, Total bytes written %d\n",
-			   sr->dst.total_bytes, zip_res.w1.tbyteswritten);
+			   sr->dst.total_bytes, zip_res->w1.tbyteswritten);
 		sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 		sr->op->consumed = 0;
 		sr->op->produced = sr->dst.total_bytes - NITROX_ZIP_MAX_ONFSIZE;
-		err = -EIO;
-		goto exit;
+		return -EIO;
+	}
+
+	if (xform->chksum_type == NITROX_CHKSUM_TYPE_CRC32)
+		sr->op->output_chksum = zip_res->w0.crc32;
+	else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
+		sr->op->output_chksum = zip_res->w0.adler32;
+
+	sr->op->consumed = RTE_MIN(sr->op->src.length,
+				   (uint32_t)zip_res->w1.tbytesread);
+	sr->op->produced = zip_res->w1.tbyteswritten;
+	sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
+	return 0;
+}
+
+static int update_history(struct rte_mbuf *mbuf, uint32_t off, uint16_t datalen,
+			  uint8_t *dst)
+{
+	struct rte_mbuf *m;
+	uint32_t mlen;
+	uint16_t copied = 0;
+
+	for (m = mbuf; m && off > rte_pktmbuf_data_len(m); m = m->next)
+		off -= rte_pktmbuf_data_len(m);
+
+	if (unlikely(!m)) {
+		NITROX_LOG(ERR, "Failed to update history. Invalid mbuf\n");
+		return -EINVAL;
+	}
+
+	mlen = rte_pktmbuf_data_len(m) - off;
+	if (datalen <= mlen)
+		mlen = datalen;
+
+	memcpy(&dst[copied], rte_pktmbuf_mtod_offset(m, char *, off), mlen);
+	copied += mlen;
+	datalen -= mlen;
+	for (m = m->next; m && datalen; m = m->next) {
+		mlen = rte_pktmbuf_data_len(m) < datalen ?
+			rte_pktmbuf_data_len(m) : datalen;
+		memcpy(&dst[copied], rte_pktmbuf_mtod(m, char *), mlen);
+		copied += mlen;
+		datalen -= mlen;
+	}
+
+	if (unlikely(datalen != 0)) {
+		NITROX_LOG(ERR, "Failed to update history. Invalid datalen\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void reset_nitrox_xform(struct nitrox_comp_xform *xform)
+{
+	xform->hlen = 0;
+	xform->exn = 0;
+	xform->exbits = 0;
+	xform->bf = true;
+}
+
+static int post_process_zip_stateful(struct nitrox_softreq *sr,
+				     struct nitrox_comp_xform *xform,
+				     struct nitrox_zip_result *zip_res)
+{
+	uint32_t bytesread = 0;
+	uint32_t chksum = 0;
+
+	if (unlikely(zip_res->w2.compcode == NITROX_CC_DTRUNC)) {
+		sr->op->consumed = 0;
+		sr->op->produced = 0;
+		xform->hlen = 0;
+		sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE;
+		NITROX_LOG(ERR, "Dequeue compress DTRUNC error\n");
+		return 0;
+	} else if (unlikely(zip_res->w2.compcode == NITROX_CC_STOP)) {
+		sr->op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE;
+		NITROX_LOG(NOTICE, "Dequeue decompress dynamic STOP\n");
+	} else if (zip_res->w2.compcode == NITROX_CC_SUCCESS) {
+		sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
+	} else {
+		xform->hlen = 0;
+		xform->exn = 0;
+		xform->exbits = 0;
+		xform->bf = true;
+		sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+		NITROX_LOG(ERR, "Dequeue error 0x%x\n",
+			   zip_res->w2.compcode);
+		return -EFAULT;
+	}
+
+	if (xform->op == NITROX_COMP_OP_COMPRESS) {
+		if (zip_res->w1.tbytesread < xform->hlen) {
+			NITROX_LOG(ERR, "Invalid bytesread\n");
+			reset_nitrox_xform(xform);
+			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+			return -EFAULT;
+		}
+
+		bytesread = zip_res->w1.tbytesread - xform->hlen;
+	} else {
+		bytesread = RTE_MIN(sr->op->src.length,
+				    (uint32_t)zip_res->w1.tbytesread);
+	}
+
+	if ((xform->op == NITROX_COMP_OP_COMPRESS &&
+	     (sr->op->flush_flag == RTE_COMP_FLUSH_NONE ||
+	      sr->op->flush_flag == RTE_COMP_FLUSH_SYNC)) ||
+	    (xform->op == NITROX_COMP_OP_DECOMPRESS && !zip_res->w2.ef)) {
+		struct rte_mbuf *mbuf;
+		uint32_t pktlen, m_off;
+		int err;
+
+		if (xform->op == NITROX_COMP_OP_COMPRESS) {
+			mbuf = sr->op->m_src;
+			pktlen = bytesread;
+			m_off = sr->op->src.offset;
+		} else {
+			mbuf = sr->op->m_dst;
+			pktlen = zip_res->w1.tbyteswritten;
+			m_off = sr->op->dst.offset;
+		}
+
+		if (pktlen >= xform->window_size) {
+			m_off += pktlen - xform->window_size;
+			err = update_history(mbuf, m_off, xform->window_size,
+					     xform->history_window);
+			xform->hlen = xform->window_size;
+		} else if ((xform->hlen + pktlen) <= xform->window_size) {
+			err = update_history(mbuf, m_off, pktlen,
+					     &xform->history_window[xform->hlen]);
+			xform->hlen += pktlen;
+		} else {
+			uint16_t shift_off, shift_len;
+
+			shift_off = pktlen + xform->hlen - xform->window_size;
+			shift_len = xform->hlen - shift_off;
+			memmove(xform->history_window,
+				&xform->history_window[shift_off],
+				shift_len);
+			err = update_history(mbuf, m_off, pktlen,
+					     &xform->history_window[shift_len]);
+			xform->hlen = xform->window_size;
+
+		}
+
+		if (unlikely(err)) {
+			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+			return err;
+		}
+
+		if (xform->op == NITROX_COMP_OP_COMPRESS) {
+			xform->exn = zip_res->w2.exn;
+			xform->exbits = zip_res->w2.exbits;
+		}
+
+		xform->bf = false;
+	} else {
+		reset_nitrox_xform(xform);
 	}
 
-	if (sr->xform.op == NITROX_COMP_OP_COMPRESS &&
+	if (xform->chksum_type == NITROX_CHKSUM_TYPE_CRC32)
+		chksum = zip_res->w0.crc32;
+	else if (xform->chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
+		chksum = zip_res->w0.adler32;
+
+	if (xform->bf)
+		sr->op->output_chksum = chksum;
+	else
+		xform->chksum = chksum;
+
+	sr->op->consumed = bytesread;
+	sr->op->produced = zip_res->w1.tbyteswritten;
+	return 0;
+}
+
+int
+nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op)
+{
+	struct nitrox_zip_result zip_res;
+	struct nitrox_comp_xform *xform;
+	int err = 0;
+
+	zip_res = zip_result_to_cpu64(&sr->zip_res);
+	if (zip_res.w2.compcode == NITROX_CC_NOTDONE) {
+		if (rte_get_timer_cycles() >= sr->timeout) {
+			NITROX_LOG(ERR, "Op timedout\n");
+			sr->op->status = RTE_COMP_OP_STATUS_ERROR;
+			err = -ETIMEDOUT;
+			goto exit;
+		} else {
+			return -EAGAIN;
+		}
+	}
+
+	xform = sr->op->private_xform;
+	if (sr->op->op_type == RTE_COMP_OP_STATELESS)
+		err = post_process_zip_stateless(sr, xform, &zip_res);
+	else
+		err = post_process_zip_stateful(sr, xform, &zip_res);
+
+	if (sr->op->status == RTE_COMP_OP_STATUS_SUCCESS &&
+	    xform->op == NITROX_COMP_OP_COMPRESS &&
 	    sr->op->flush_flag == RTE_COMP_FLUSH_FINAL &&
 	    zip_res.w2.exn) {
 		uint32_t datalen = zip_res.w1.tbyteswritten;
@@ -696,17 +1104,11 @@ nitrox_check_comp_req(struct nitrox_softreq *sr, struct rte_comp_op **op)
 		*last_byte = zip_res.w2.exbits & 0xFF;
 	}
 
-	sr->op->consumed = zip_res.w1.tbytesread;
-	sr->op->produced = zip_res.w1.tbyteswritten;
-	if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_CRC32)
-		sr->op->output_chksum = zip_res.w0.crc32;
-	else if (sr->xform.chksum_type == NITROX_CHKSUM_TYPE_ADLER32)
-		sr->op->output_chksum = zip_res.w0.adler32;
-
-	sr->op->status = RTE_COMP_OP_STATUS_SUCCESS;
-	err = 0;
 exit:
 	*op = sr->op;
+	nitrox_dump_zip_result(&sr->instr, &zip_res);
+	nitrox_dump_databuf("OUT after", sr->op->m_dst, sr->op->dst.offset,
+			    sr->op->produced);
 	return err;
 }
 
diff --git a/drivers/compress/nitrox/nitrox_comp_reqmgr.h b/drivers/compress/nitrox/nitrox_comp_reqmgr.h
index 07c65f0d5e..f0cd1eb8fd 100644
--- a/drivers/compress/nitrox/nitrox_comp_reqmgr.h
+++ b/drivers/compress/nitrox/nitrox_comp_reqmgr.h
@@ -37,6 +37,14 @@ struct nitrox_comp_xform {
 	enum nitrox_comp_algo algo;
 	enum nitrox_comp_level level;
 	enum nitrox_chksum_type chksum_type;
+	uint8_t *context;
+	uint8_t *history_window;
+	uint32_t chksum;
+	uint16_t window_size;
+	uint16_t hlen;
+	uint8_t exn;
+	uint8_t exbits;
+	bool bf;
 };
 
 int nitrox_process_comp_req(struct rte_comp_op *op, struct nitrox_softreq *sr);
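A note on the stateful path in `post_process_zip_stateful()` above: after each op it keeps the last `window_size` bytes of the stream in `history_window` via three cases (new data fills the window, new data still fits, or old history is shifted out first). The same bookkeeping can be sketched standalone; the names `hist_t`/`window_update` and the tiny 8-byte window are illustrative only, not part of the driver:

```c
/* Illustrative sketch of the sliding history-window update used by the
 * stateful path; WINDOW_SIZE is shrunk to 8 for readability (the driver
 * uses up to 32768 bytes of deflate history). */
#include <stdint.h>
#include <string.h>

#define WINDOW_SIZE 8

typedef struct {
	uint8_t win[WINDOW_SIZE];
	uint16_t hlen;	/* number of valid history bytes */
} hist_t;

/* Append pktlen bytes of new stream data, keeping only the most recent
 * WINDOW_SIZE bytes, mirroring the three cases in the patch. */
static void window_update(hist_t *h, const uint8_t *data, uint32_t pktlen)
{
	if (pktlen >= WINDOW_SIZE) {
		/* New data alone fills the window: keep only its tail. */
		memcpy(h->win, data + pktlen - WINDOW_SIZE, WINDOW_SIZE);
		h->hlen = WINDOW_SIZE;
	} else if (h->hlen + pktlen <= WINDOW_SIZE) {
		/* Still room: append after the existing history. */
		memcpy(&h->win[h->hlen], data, pktlen);
		h->hlen += pktlen;
	} else {
		/* Shift out the oldest bytes, then append the new data. */
		uint16_t shift_off = pktlen + h->hlen - WINDOW_SIZE;
		uint16_t shift_len = h->hlen - shift_off;

		memmove(h->win, &h->win[shift_off], shift_len);
		memcpy(&h->win[shift_len], data, pktlen);
		h->hlen = WINDOW_SIZE;
	}
}
```

In the driver the destination buffer is chained mbufs rather than a flat array, so the append is done segment by segment through `update_history()`, but the window arithmetic is the same.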