[v22,06/13] compress/zsda: add zsda compressdev driver skeleton
Commit Message
Add zsda compressdev driver interface skeleton
Signed-off-by: Hanxiao Li <li.hanxiao@zte.com.cn>
---
MAINTAINERS | 3 +
doc/guides/compressdevs/features/zsda.ini | 6 +
doc/guides/compressdevs/index.rst | 1 +
doc/guides/compressdevs/zsda.rst | 178 ++++++++++++++++++++++
drivers/common/zsda/meson.build | 12 +-
drivers/common/zsda/zsda_device.h | 29 +++-
drivers/common/zsda/zsda_qp.c | 30 +++-
drivers/common/zsda/zsda_qp.h | 16 +-
drivers/common/zsda/zsda_qp_common.h | 7 +
drivers/compress/zsda/zsda_comp_pmd.c | 128 ++++++++++++++++
drivers/compress/zsda/zsda_comp_pmd.h | 20 +++
11 files changed, 418 insertions(+), 12 deletions(-)
create mode 100644 doc/guides/compressdevs/features/zsda.ini
create mode 100644 doc/guides/compressdevs/zsda.rst
create mode 100644 drivers/compress/zsda/zsda_comp_pmd.c
create mode 100644 drivers/compress/zsda/zsda_comp_pmd.h
--
2.27.0
Comments
Hi Hanxiao,
Please see comments inline.
> Add zsda compressdev driver interface skeleton
>
> Signed-off-by: Hanxiao Li <li.hanxiao@zte.com.cn>
> ---
> MAINTAINERS | 3 +
> doc/guides/compressdevs/features/zsda.ini | 6 +
> doc/guides/compressdevs/index.rst | 1 +
> doc/guides/compressdevs/zsda.rst | 178 ++++++++++++++++++++++
> drivers/common/zsda/meson.build | 12 +-
> drivers/common/zsda/zsda_device.h | 29 +++-
> drivers/common/zsda/zsda_qp.c | 30 +++-
> drivers/common/zsda/zsda_qp.h | 16 +-
> drivers/common/zsda/zsda_qp_common.h | 7 +
> drivers/compress/zsda/zsda_comp_pmd.c | 128 ++++++++++++++++
> drivers/compress/zsda/zsda_comp_pmd.h | 20 +++
> 11 files changed, 418 insertions(+), 12 deletions(-)
> create mode 100644 doc/guides/compressdevs/features/zsda.ini
> create mode 100644 doc/guides/compressdevs/zsda.rst
> create mode 100644 drivers/compress/zsda/zsda_comp_pmd.c
> create mode 100644 drivers/compress/zsda/zsda_comp_pmd.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 0318d7357c..dc3fa2097a 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1271,6 +1271,9 @@ F: doc/guides/compressdevs/features/zlib.ini
> ZTE Storage Data Accelerator(ZSDA)
> M: Hanxiao Li <li.hanxiao@zte.com.cn>
> F: drivers/common/zsda/
> +F: drivers/compress/zsda/
> +F: doc/guides/compressdevs/zsda.rst
> +F: doc/guides/compressdevs/features/zsda.ini
>
> DMAdev Drivers
> --------------
> diff --git a/doc/guides/compressdevs/features/zsda.ini b/doc/guides/compressdevs/features/zsda.ini
> new file mode 100644
> index 0000000000..5cc9a3b1a6
> --- /dev/null
> +++ b/doc/guides/compressdevs/features/zsda.ini
> @@ -0,0 +1,6 @@
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +; Supported features of 'ZSDA' compression driver.
> +;
> +[Features]
> diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst
> index 87ed4f72a4..bab226ffbc 100644
> --- a/doc/guides/compressdevs/index.rst
> +++ b/doc/guides/compressdevs/index.rst
> @@ -17,3 +17,4 @@ Compression Device Drivers
> qat_comp
> uadk
> zlib
> + zsda
> diff --git a/doc/guides/compressdevs/zsda.rst b/doc/guides/compressdevs/zsda.rst
> new file mode 100644
> index 0000000000..c02423d650
> --- /dev/null
> +++ b/doc/guides/compressdevs/zsda.rst
> @@ -0,0 +1,178 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2024 ZTE Corporation.
> +
> +ZTE Storage Data Accelerator (ZSDA) Poll Mode Driver
> +=======================================================
> +
> +The ZSDA compression PMD provides poll mode compression & decompression driver
> +support for the following hardware accelerator devices:
> +
> +* ``ZTE Processing accelerators 1cf2``
> +
> +
> +Features
> +--------
> +
> +
> +Installation
> +------------
> +
> +The ZSDA compression PMD is built by default with a standard DPDK build.
> +
> +It depends on a ZSDA kernel driver, see :ref:`building_zsda`.
I do not see details of the ZSDA kernel driver anywhere.
> +
> +
> +.. _building_zsda:
> +
> +Building PMDs on ZSDA
> +---------------------
> +
> +A ZSDA device can host multiple acceleration services:
> +
> +* data compression
> +
> +These services are provided to DPDK applications via PMDs which register to
> +implement the compressdev APIs. The PMDs use common ZSDA driver code
> +which manages the ZSDA PCI device.
> +
> +
> +Configuring and Building the DPDK ZSDA PMDs
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Further information on configuring, building and installing DPDK is described
> +:doc:`here <../linux_gsg/build_dpdk>`.
> +
> +.. _building_zsda_config:
> +
> +Build Configuration
> +~~~~~~~~~~~~~~~~~~~
> +This is the build configuration option affecting ZSDA, and its default value:
> +
> +.. code-block:: console
> +
> + RTE_PMD_ZSDA_MAX_PCI_DEVICES=256
> +
> +
> +Device and driver naming
> +~~~~~~~~~~~~~~~~~~~~~~~~
This is not a sub-section of the build section, so it should use --- instead of ~~~.
Please check the HTML output of the file.
> +
> +* The zsda compressdev driver name is "compress_zsda".
> + The rte_compressdev_devices_get() returns the devices exposed by this driver.
> +
> +* Each zsda compression device has a unique name, in format
> + <pci bdf>, e.g. "0000:cc:00.3_zsda".
> + This name can be passed to rte_compressdev_get_dev_id() to get the device_id.
> +
> +
> +Enable VFs
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The underline has extra ~~~ characters, and I believe it should be --- here as well.
> +
> +Instructions for installation are below, but first an explanation of the
> +relationships between the PF/VF devices and the PMDs visible to
> +DPDK applications.
> +
> +Each ZSDA PF device exposes a number of VF devices. Each VF device can
> +enable one compressdev PMD.
> +
> +These ZSDA PMDs share the same underlying device and pci-mgmt code, but are
> +enumerated independently on their respective APIs and appear as independent
> +devices to applications.
> +
> +.. Note::
> +
> + Each VF can only be used by one DPDK process. It is not possible to share
> + the same VF across multiple processes, even if these processes are using
> + different acceleration services.
> + Conversely one DPDK process can use one or more ZSDA VFs and can expose
> + compressdev instances on each of those VFs.
> +
> +
> +The examples below are based on the 1cf2 device; if you have a different device,
> +use the corresponding values for that device.
> +
> +In BIOS ensure that SRIOV is enabled and either:
> +
> +* Disable VT-d or
> +* Enable VT-d and set ``"intel_iommu=on iommu=pt"`` in the grub file.
> +
> +You need to expose the Virtual Functions (VFs) using the sysfs file system.
> +
> +First find the BDFs (Bus-Device-Function) of the physical functions (PFs) of
> +your device, e.g.::
> +
> + lspci -d:8050
> +
> +You should see output similar to::
> +
> +
> + cc:00.4 Processing accelerators: Device 1cf2:8050 (rev 01)
> + ce:00.3 Processing accelerators: Device 1cf2:8050 (rev 01)
> + d0:00.3 Processing accelerators: Device 1cf2:8050 (rev 01)
> + d2:00.3 Processing accelerators: Device 1cf2:8050 (rev 01)
> +
> +Enable the VFs for each PF by echoing the number of VFs per PF to the pci driver::
> +
> + echo 31 > /sys/bus/pci/devices/0000:cc:00.4/sriov_numvfs
> + echo 31 > /sys/bus/pci/devices/0000:ce:00.3/sriov_numvfs
> + echo 31 > /sys/bus/pci/devices/0000:d0:00.3/sriov_numvfs
> + echo 31 > /sys/bus/pci/devices/0000:d2:00.3/sriov_numvfs
> +
> +Check that the VFs are available for use. For example ``lspci -d:8051`` should
> +list 124 VF devices available.
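As a side note for the doc, the per-PF echo commands above could be scripted. A minimal sketch, shown in dry-run form since the real commands need root and ZSDA hardware (`DRYRUN`, `enable_vfs` and the hard-coded BDF list are illustrative, not part of the patch):

```shell
#!/bin/sh
# Sketch: enable 31 VFs on every ZSDA PF (PCI device 1cf2:8050).
# DRYRUN=1 (the default here) only prints each command; on a real
# system unset it and run as root.
DRYRUN=${DRYRUN:-1}
NUMVFS=31

enable_vfs() {
    bdf="$1"
    cmd="echo ${NUMVFS} > /sys/bus/pci/devices/${bdf}/sriov_numvfs"
    if [ "${DRYRUN}" = "1" ]; then
        echo "would run: ${cmd}"
    else
        sh -c "${cmd}"
    fi
}

# On real hardware the PF list would come from:  lspci -Dd :8050
for bdf in 0000:cc:00.4 0000:ce:00.3 0000:d0:00.3 0000:d2:00.3; do
    enable_vfs "${bdf}"
done
```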
> +
> +To complete the installation follow the instructions in
> +`Binding the available VFs to the vfio-pci driver`_.
> +
> +.. Note::
> +
> + If you see the following warning in ``/var/log/messages`` it can be ignored:
> + ``IOMMU should be enabled for SR-IOV to work correctly``.
> +
> +
> +Binding the available VFs to the vfio-pci driver
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Note:
> +
> +* Please note that due to security issues, the usage of the older DPDK igb_uio
> + driver is not recommended. This document shows how to use the more secure
> + vfio-pci driver.
> +
> +Unbind the VFs from the stock driver so they can be bound to the vfio-pci driver.
> +
> +
> +Bind to the vfio-pci driver
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Load the vfio-pci driver, bind the VF PCI Device id to it using the
> +``dpdk-devbind.py`` script then use the ``--status`` option
> +to confirm the VF devices are now in use by vfio-pci kernel driver,
> +e.g. for the 1cf2 device::
> +
> + cd to the top-level DPDK directory
> + modprobe vfio-pci
> + usertools/dpdk-devbind.py -b vfio-pci 0000:cc:01.4
> + usertools/dpdk-devbind.py --status
> +
> +Use ``modprobe vfio-pci disable_denylist=1`` from kernel 5.9 onwards.
> +See note in the section `Binding the available VFs to the vfio-pci driver`_
> +above.
> +
> +
> +Testing
> +~~~~~~~
Check the heading level here as well.
> +
> +ZSDA compression PMD can be tested by running the test application::
> +
> + cd ./<build_dir>/app/test
> + ./dpdk-test -l1 -n1 -a <your zsda bdf>
> + RTE>>compressdev_autotest
> +
> +
> +Debugging
> +~~~~~~~~~
Check the heading level here as well.
> +
> +ZSDA logging feature can be enabled using the log-level option (where 8=maximum
> +log level) on the process cmdline, e.g. using any of the following::
> +
> + --log-level="gen,8"
> diff --git a/drivers/common/zsda/meson.build b/drivers/common/zsda/meson.build
> index 4c910d7e7d..6ee2a68f4b 100644
> --- a/drivers/common/zsda/meson.build
> +++ b/drivers/common/zsda/meson.build
> @@ -7,9 +7,19 @@ if is_windows
> subdir_done()
> endif
>
> -deps += ['bus_pci', 'mbuf']
> +deps += ['bus_pci', 'mbuf', 'compressdev']
> sources += files(
> 'zsda_device.c',
> 'zsda_logs.c',
> 'zsda_qp.c',
> )
> +
> +zsda_compress = true
> +zsda_compress_path = 'compress/zsda'
> +zsda_compress_relpath = '../../' + zsda_compress_path
> +includes += include_directories(zsda_compress_relpath)
> +if zsda_compress
> + foreach f: ['zsda_comp_pmd.c']
> + sources += files(join_paths(zsda_compress_relpath, f))
> + endforeach
> +endif
> diff --git a/drivers/common/zsda/zsda_device.h b/drivers/common/zsda/zsda_device.h
> index 0c9f332ca2..564d68ac6a 100644
> --- a/drivers/common/zsda/zsda_device.h
> +++ b/drivers/common/zsda/zsda_device.h
> @@ -7,10 +7,10 @@
>
> #include <rte_memzone.h>
> #include "bus_pci_driver.h"
> +#include "zsda_qp_common.h"
>
> #define MAX_QPS_ON_FUNCTION 128
> #define ZSDA_DEV_NAME_MAX_LEN 64
> -#define ZSDA_MAX_SERVICES (0)
> #define ZSDA_MAX_DEV RTE_PMD_ZSDA_MAX_PCI_DEVICES
>
> struct zsda_device_info {
> @@ -18,7 +18,11 @@ struct zsda_device_info {
> /**< mz to store the: struct zsda_pci_device , so it can be
> * shared across processes
> */
> -
> + struct rte_device comp_rte_dev;
> + /**< This represents the compression subset of this pci device.
> + * Register with this rather than with the one in
> + * pci_dev so that its driver can have a compression-specific name
> + */
> struct rte_pci_device *pci_dev;
> };
>
> @@ -37,6 +41,23 @@ struct zsda_qp_hw {
> struct zsda_qp_hw_data data[MAX_QPS_ON_FUNCTION];
> };
>
> +/** private data structure for a ZSDA compression device.
> + * This ZSDA device is a device offering only a compression service,
> + * there can be one of these on each zsda_pci_device (VF).
> + */
> +struct zsda_comp_dev_private {
> + struct zsda_pci_device *zsda_pci_dev;
> + /**< The zsda pci device hosting the service */
> + struct rte_compressdev *compressdev;
> + /**< The pointer to this compression device structure */
> + const struct rte_compressdev_capabilities *zsda_dev_capabilities;
> + /* ZSDA device compression capabilities */
> + struct rte_mempool *xformpool;
> + /**< The device's pool for zsda_comp_xforms */
> + const struct rte_memzone *capa_mz;
> + /* Shared memzone for storing capabilities */
> +};
This struct is private and specific to the compression device.
Can we move it to drivers/compress/zsda?
Similarly, other compression-specific definitions can be moved to drivers/compress/zsda as well.
> +
> struct zsda_pci_device {
> /* Data used by all services */
> char name[ZSDA_DEV_NAME_MAX_LEN];
> @@ -46,6 +67,10 @@ struct zsda_pci_device {
>
> struct rte_pci_device *pci_dev;
>
> + /* Data relating to compression service */
> + struct zsda_comp_dev_private *comp_dev;
> + /**< link back to compressdev private data */
> +
> struct zsda_qp_hw zsda_hw_qps[ZSDA_MAX_SERVICES];
> uint16_t zsda_qp_hw_num[ZSDA_MAX_SERVICES];
> };
> diff --git a/drivers/common/zsda/zsda_qp.c b/drivers/common/zsda/zsda_qp.c
> index 0bb0f598b7..7e000d5b3f 100644
> --- a/drivers/common/zsda/zsda_qp.c
> +++ b/drivers/common/zsda/zsda_qp.c
> @@ -3,15 +3,12 @@
> */
>
> #include <stdint.h>
> -
> -#include <rte_malloc.h>
Better to remove unused headers in the original patch where they were introduced.
> +#include <rte_mempool.h>
>
> #include "zsda_logs.h"
> -#include "zsda_device.h"
> #include "zsda_qp.h"
> #include "zsda_qp_common.h"
>
> -
> #define MAGIC_SEND 0xab
> #define MAGIC_RECV 0xcd
> #define ADMIN_VER 1
> @@ -400,7 +397,8 @@ zsda_get_queue_cfg_by_id(const struct zsda_pci_device *zsda_pci_dev,
> }
>
> static struct ring_size zsda_qp_hw_ring_size[ZSDA_MAX_SERVICES] = {
> -
> + [ZSDA_SERVICE_ENCOMPRESSION] = {32, 16},
ENCOMPRESSION?
Can this be COMPRESSION?
> + [ZSDA_SERVICE_DECOMPRESSION] = {32, 16},
> };
>
> static int
> @@ -468,6 +466,26 @@ zsda_unmask_flr(const struct zsda_pci_device *zsda_pci_dev)
> return ZSDA_SUCCESS;
> }
>
> +static uint16_t
> +zsda_qps_per_service(const struct zsda_pci_device *zsda_pci_dev,
> + const enum zsda_service_type service)
> +{
> + uint16_t qp_hw_num = 0;
> +
> + if (service < ZSDA_SERVICE_INVALID)
> + qp_hw_num = zsda_pci_dev->zsda_qp_hw_num[service];
> + return qp_hw_num;
> +}
> +
> +struct zsda_num_qps zsda_nb_qps;
> +static void
> +zsda_get_nb_qps(const struct zsda_pci_device *zsda_pci_dev)
> +{
> + zsda_nb_qps.encomp =
> + zsda_qps_per_service(zsda_pci_dev, ZSDA_SERVICE_ENCOMPRESSION);
> + zsda_nb_qps.decomp =
> + zsda_qps_per_service(zsda_pci_dev, ZSDA_SERVICE_DECOMPRESSION);
> +}
>
> int
> zsda_queue_init(struct zsda_pci_device *zsda_pci_dev)
> @@ -501,5 +519,7 @@ zsda_queue_init(struct zsda_pci_device *zsda_pci_dev)
> return ret;
> }
>
> + zsda_get_nb_qps(zsda_pci_dev);
> +
> return ret;
> }
> diff --git a/drivers/common/zsda/zsda_qp.h b/drivers/common/zsda/zsda_qp.h
> index c3fc284239..0c8f36061a 100644
> --- a/drivers/common/zsda/zsda_qp.h
> +++ b/drivers/common/zsda/zsda_qp.h
> @@ -5,6 +5,8 @@
> #ifndef _ZSDA_QP_H_
> #define _ZSDA_QP_H_
>
> +#include "zsda_device.h"
> +
> #define ZSDA_ADMIN_Q_START 0x100
> #define ZSDA_ADMIN_Q_STOP 0x100
> #define ZSDA_ADMIN_Q_STOP_RESP 0x104
> @@ -72,15 +74,21 @@ enum zsda_admin_msg_id {
> ZSDA_ADMIN_INT_TEST
> };
>
> -enum zsda_service_type {
> - ZSDA_SERVICE_INVALID,
> -};
> -
Remove the enum from the original patch.
Please do not add unnecessary lines in one patch and delete them in a subsequent patch.
> struct ring_size {
> uint16_t tx_msg_size;
> uint16_t rx_msg_size;
> };
>
> +struct zsda_num_qps {
> + uint16_t encomp;
> + uint16_t decomp;
> + uint16_t encrypt;
> + uint16_t decrypt;
> + uint16_t hash;
> +};
> +
> +extern struct zsda_num_qps zsda_nb_qps;
> +
> int zsda_queue_start(const struct rte_pci_device *pci_dev);
> int zsda_queue_stop(const struct rte_pci_device *pci_dev);
>
> diff --git a/drivers/common/zsda/zsda_qp_common.h b/drivers/common/zsda/zsda_qp_common.h
> index 75271d7823..722fd730b2 100644
> --- a/drivers/common/zsda/zsda_qp_common.h
> +++ b/drivers/common/zsda/zsda_qp_common.h
> @@ -17,6 +17,13 @@
> #define ZSDA_SUCCESS 0
> #define ZSDA_FAILED (-1)
>
> +enum zsda_service_type {
> + ZSDA_SERVICE_ENCOMPRESSION = 0,
> + ZSDA_SERVICE_DECOMPRESSION = 1,
> + ZSDA_SERVICE_INVALID,
> +};
> +#define ZSDA_MAX_SERVICES (2)
> +
> #define ZSDA_CSR_READ32(addr) rte_read32((addr))
> #define ZSDA_CSR_WRITE32(addr, value) rte_write32((value), (addr))
> #define ZSDA_CSR_READ16(addr) rte_read16((addr))
> diff --git a/drivers/compress/zsda/zsda_comp_pmd.c b/drivers/compress/zsda/zsda_comp_pmd.c
> new file mode 100644
> index 0000000000..d1c33f448c
> --- /dev/null
> +++ b/drivers/compress/zsda/zsda_comp_pmd.c
> @@ -0,0 +1,128 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 ZTE Corporation
> + */
> +
> +#include <rte_malloc.h>
> +
> +#include "zsda_logs.h"
> +#include "zsda_qp_common.h"
> +#include "zsda_comp_pmd.h"
> +
> +static struct rte_compressdev_ops compress_zsda_ops = {
> +
> + .dev_configure = NULL,
> + .dev_start = NULL,
> + .dev_stop = NULL,
> + .dev_close = NULL,
> + .dev_infos_get = NULL,
> +
> + .stats_get = NULL,
> + .stats_reset = NULL,
> + .queue_pair_setup = NULL,
> + .queue_pair_release = NULL,
> +
> + .private_xform_create = NULL,
> + .private_xform_free = NULL
> +};
> +
> +/* An rte_driver is needed in the registration of the device with compressdev.
> + * The actual zsda pci's rte_driver can't be used as its name represents
> + * the whole pci device with all services. Think of this as a holder for a name
> + * for the compression part of the pci device.
> + */
> +static const char zsda_comp_drv_name[] = RTE_STR(COMPRESSDEV_NAME_ZSDA_PMD);
> +static const struct rte_driver compdev_zsda_driver = {
> + .name = zsda_comp_drv_name, .alias = zsda_comp_drv_name};
> +
> +int
> +zsda_comp_dev_create(struct zsda_pci_device *zsda_pci_dev)
> +{
> + struct zsda_device_info *dev_info =
> + &zsda_devs[zsda_pci_dev->zsda_dev_id];
> +
> + struct rte_compressdev_pmd_init_params init_params = {
> + .name = "",
> + .socket_id = (int)rte_socket_id(),
> + };
> +
> + char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
> + char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
> + struct rte_compressdev *compressdev;
> + struct zsda_comp_dev_private *comp_dev;
> + const struct rte_compressdev_capabilities *capabilities;
> + uint16_t capa_size = sizeof(struct rte_compressdev_capabilities);
> +
> + snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
> + zsda_pci_dev->name, "comp");
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> + return 0;
> +
> + dev_info->comp_rte_dev.driver = &compdev_zsda_driver;
> + dev_info->comp_rte_dev.numa_node = dev_info->pci_dev->device.numa_node;
> + dev_info->comp_rte_dev.devargs = NULL;
> +
> + compressdev = rte_compressdev_pmd_create(
> + name, &(dev_info->comp_rte_dev),
> + sizeof(struct zsda_comp_dev_private), &init_params);
> +
> + if (compressdev == NULL)
> + return -ENODEV;
> +
> + compressdev->dev_ops = &compress_zsda_ops;
> +
> + compressdev->enqueue_burst = NULL;
> + compressdev->dequeue_burst = NULL;
> +
> + compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
> +
> + snprintf(capa_memz_name, RTE_COMPRESSDEV_NAME_MAX_LEN,
> + "ZSDA_COMP_CAPA");
> +
> + comp_dev = compressdev->data->dev_private;
> + comp_dev->zsda_pci_dev = zsda_pci_dev;
> + comp_dev->compressdev = compressdev;
> +
> + capabilities = NULL;
> +
> + comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
> + if (comp_dev->capa_mz == NULL) {
> + comp_dev->capa_mz = rte_memzone_reserve(
> + capa_memz_name, capa_size, rte_socket_id(), 0);
> + }
> + if (comp_dev->capa_mz == NULL) {
> + ZSDA_LOG(DEBUG, "Failed! comp_dev->capa_mz is NULL");
> + memset(&dev_info->comp_rte_dev, 0,
> + sizeof(dev_info->comp_rte_dev));
> + rte_compressdev_pmd_destroy(compressdev);
> + return -EFAULT;
> + }
> +
> + memcpy(comp_dev->capa_mz->addr, capabilities, capa_size);
> + comp_dev->zsda_dev_capabilities = comp_dev->capa_mz->addr;
> +
> + zsda_pci_dev->comp_dev = comp_dev;
> +
> + return ZSDA_SUCCESS;
> +}
> +
> +int
> +zsda_comp_dev_destroy(struct zsda_pci_device *zsda_pci_dev)
> +{
> + struct zsda_comp_dev_private *comp_dev;
> +
> + if (zsda_pci_dev == NULL)
> + return -ENODEV;
> +
> + comp_dev = zsda_pci_dev->comp_dev;
> + if (comp_dev == NULL)
> + return ZSDA_SUCCESS;
> +
> + if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> + rte_memzone_free(zsda_pci_dev->comp_dev->capa_mz);
> +
> + rte_compressdev_pmd_destroy(comp_dev->compressdev);
> + zsda_pci_dev->comp_dev = NULL;
> +
> + return ZSDA_SUCCESS;
> +}
> diff --git a/drivers/compress/zsda/zsda_comp_pmd.h b/drivers/compress/zsda/zsda_comp_pmd.h
> new file mode 100644
> index 0000000000..c6ef57af8e
> --- /dev/null
> +++ b/drivers/compress/zsda/zsda_comp_pmd.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 ZTE Corporation
> + */
> +
> +#ifndef _ZSDA_COMP_PMD_H_
> +#define _ZSDA_COMP_PMD_H_
> +
> +#include <rte_compressdev_pmd.h>
> +
> +#include "zsda_qp.h"
> +#include "zsda_device.h"
> +
> +/**< ZSDA Compression PMD driver name */
> +#define COMPRESSDEV_NAME_ZSDA_PMD compress_zsda
> +
> +int zsda_comp_dev_create(struct zsda_pci_device *zsda_pci_dev);
> +
> +int zsda_comp_dev_destroy(struct zsda_pci_device *zsda_pci_dev);
> +
> +#endif /* _ZSDA_COMP_PMD_H_ */
> --
> 2.27.0