From patchwork Fri Sep 24 10:53:52 2021
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 24 Sep 2021 18:53:52 +0800
Message-ID: <20210924105357.15386-2-fengchengwen@huawei.com>
In-Reply-To: <20210924105357.15386-1-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v23 1/6] dmadev: introduce DMA device library

The 'dmadevice' is a generic type of DMA device. This patch introduces
the 'dmadevice' device allocation APIs.
The infrastructure is prepared to welcome drivers in drivers/dma/ Signed-off-by: Chengwen Feng Acked-by: Bruce Richardson Acked-by: Morten Brørup Acked-by: Jerin Jacob Reviewed-by: Kevin Laatz Reviewed-by: Conor Walsh --- MAINTAINERS | 5 + config/rte_config.h | 3 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf.in | 1 + doc/guides/dmadevs/index.rst | 12 ++ doc/guides/index.rst | 1 + doc/guides/prog_guide/dmadev.rst | 64 ++++++ doc/guides/prog_guide/img/dmadev.svg | 283 +++++++++++++++++++++++++ doc/guides/prog_guide/index.rst | 1 + doc/guides/rel_notes/release_21_11.rst | 4 + drivers/dma/meson.build | 4 + drivers/meson.build | 1 + lib/dmadev/meson.build | 7 + lib/dmadev/rte_dmadev.c | 263 +++++++++++++++++++++++ lib/dmadev/rte_dmadev.h | 134 ++++++++++++ lib/dmadev/rte_dmadev_core.h | 51 +++++ lib/dmadev/rte_dmadev_pmd.h | 60 ++++++ lib/dmadev/version.map | 20 ++ lib/meson.build | 1 + 19 files changed, 916 insertions(+) create mode 100644 doc/guides/dmadevs/index.rst create mode 100644 doc/guides/prog_guide/dmadev.rst create mode 100644 doc/guides/prog_guide/img/dmadev.svg create mode 100644 drivers/dma/meson.build create mode 100644 lib/dmadev/meson.build create mode 100644 lib/dmadev/rte_dmadev.c create mode 100644 lib/dmadev/rte_dmadev.h create mode 100644 lib/dmadev/rte_dmadev_core.h create mode 100644 lib/dmadev/rte_dmadev_pmd.h create mode 100644 lib/dmadev/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 77a549a5e8..a5b11ac70b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -454,6 +454,11 @@ F: app/test-regex/ F: doc/guides/prog_guide/regexdev.rst F: doc/guides/regexdevs/features/default.ini +DMA device API - EXPERIMENTAL +M: Chengwen Feng +F: lib/dmadev/ +F: doc/guides/prog_guide/dmadev.rst + Eventdev API M: Jerin Jacob T: git://dpdk.org/next/dpdk-next-eventdev diff --git a/config/rte_config.h b/config/rte_config.h index 590903c07d..6e397a62ab 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -70,6 +70,9 @@ /* regexdev defines */ #define RTE_MAX_REGEXDEV_DEVS 32 +/* dmadev defines */ +#define RTE_DMADEV_DEFAULT_MAX_DEVS 64 + /* eventdev defines */ #define RTE_EVENT_MAX_DEVS 16 #define RTE_EVENT_MAX_QUEUES_PER_DEV 255 diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 1992107a03..2939050431 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -21,6 +21,7 @@ The public API headers are grouped by topics: [compressdev] (@ref rte_compressdev.h), [compress] (@ref rte_comp.h), [regexdev] (@ref rte_regexdev.h), + [dmadev] (@ref rte_dmadev.h), [eventdev] (@ref rte_eventdev.h), [event_eth_rx_adapter] (@ref rte_event_eth_rx_adapter.h), [event_eth_tx_adapter] (@ref rte_event_eth_tx_adapter.h), diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index 325a0195c6..109ec1f682 100644 --- a/doc/api/doxy-api.conf.in +++ b/doc/api/doxy-api.conf.in @@ -35,6 +35,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \ @TOPDIR@/lib/compressdev \ @TOPDIR@/lib/cryptodev \ @TOPDIR@/lib/distributor \ + @TOPDIR@/lib/dmadev \ @TOPDIR@/lib/efd \ @TOPDIR@/lib/ethdev \ @TOPDIR@/lib/eventdev \ diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst new file mode 100644 index 0000000000..0bce29d766 --- /dev/null +++ b/doc/guides/dmadevs/index.rst @@ -0,0 +1,12 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2021 HiSilicon Limited + +DMA Device Drivers +================== + +The following are a list of DMA device drivers, which can be used from +an application through DMA API. + +.. 
toctree:: + :maxdepth: 2 + :numbered: diff --git a/doc/guides/index.rst b/doc/guides/index.rst index 857f0363d3..919825992e 100644 --- a/doc/guides/index.rst +++ b/doc/guides/index.rst @@ -21,6 +21,7 @@ DPDK documentation compressdevs/index vdpadevs/index regexdevs/index + dmadevs/index eventdevs/index rawdevs/index mempool/index diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst new file mode 100644 index 0000000000..822282213c --- /dev/null +++ b/doc/guides/prog_guide/dmadev.rst @@ -0,0 +1,64 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2021 HiSilicon Limited + +DMA Device Library +================== + +The DMA library provides a DMA device framework for management and provisioning +of hardware and software DMA poll mode drivers, defining generic APIs which +support a number of different DMA operations. + + +Design Principles +----------------- + +The DMA library follows the same basic principles as those used in DPDK's +Ethernet Device framework and the RegEx framework. The DMA framework provides +a generic DMA device framework which supports both physical (hardware) +and virtual (software) DMA devices as well as a generic DMA API which allows +DMA devices to be managed and configured and supports DMA operations to be +provisioned on DMA poll mode driver. + +.. _figure_dmadev: + +.. figure:: img/dmadev.* + +The above figure shows the model on which the DMA framework is built on: + + * The DMA controller could have multiple hardware DMA channels (aka. hardware + DMA queues), each hardware DMA channel should be represented by a dmadev. + * The dmadev could create multiple virtual DMA channels, each virtual DMA + channel represents a different transfer context. The DMA operation request + must be submitted to the virtual DMA channel. e.g. Application could create + virtual DMA channel 0 for memory-to-memory transfer scenario, and create + virtual DMA channel 1 for memory-to-device transfer scenario. + + +Device Management +----------------- + +Device Creation +~~~~~~~~~~~~~~~ + +Physical DMA controllers are discovered during the PCI probe/enumeration of the +EAL function which is executed at DPDK initialization, this is based on their +PCI BDF (bus/bridge, device, function). Specific physical DMA controllers, like +other physical devices in DPDK can be listed using the EAL command line options. + +The dmadevs are dynamically allocated by using the API +``rte_dma_pmd_allocate`` based on the number of hardware DMA channels. After the +dmadev initialized successfully, the driver needs to switch the dmadev state to +``RTE_DMA_DEV_READY``. + + +Device Identification +~~~~~~~~~~~~~~~~~~~~~ + +Each DMA device, whether physical or virtual is uniquely designated by two +identifiers: + +- A unique device index used to designate the DMA device in all functions + exported by the DMA API. + +- A device name used to designate the DMA device in console messages, for + administration or debugging purposes. 
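
As an illustration of these two identifiers, below is a minimal sketch of resolving a dmadev by name and validating the returned index, using only the allocation-stage APIs this patch introduces. The device name ``dma0`` and the helper name are hypothetical:

.. code-block:: c

   #include <errno.h>
   #include <stdio.h>

   #include <rte_dmadev.h>

   /* Resolve a dmadev by its unique name and sanity-check the index. */
   static int
   find_dmadev_by_name(void)
   {
           int dev_id = rte_dma_get_dev_id("dma0"); /* hypothetical name */

           if (dev_id < 0)
                   return dev_id; /* no device registered under that name */
           if (!rte_dma_is_valid(dev_id))
                   return -EINVAL;
           printf("dmadev %d of %u available devices\n",
                  dev_id, (unsigned)rte_dma_count_avail());
           return dev_id;
   }
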
diff --git a/doc/guides/prog_guide/img/dmadev.svg b/doc/guides/prog_guide/img/dmadev.svg new file mode 100644 index 0000000000..157d7eb7dc --- /dev/null +++ b/doc/guides/prog_guide/img/dmadev.svg @@ -0,0 +1,283 @@ + + + + + + + + + + + + + + virtual DMA channel + + virtual DMA channel + + virtual DMA channel + + + dmadev + + hardware DMA channel + + hardware DMA channel + + hardware DMA controller + + dmadev + + + + + + + + + diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 2dce507f46..89af28dacb 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -27,6 +27,7 @@ Programmer's Guide cryptodev_lib compressdev regexdev + dmadev rte_security rawdev link_bonding_poll_mode_drv_lib diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 19356ac53c..74639f1e81 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -106,6 +106,10 @@ New Features Added command-line options to specify total number of processes and current process ID. Each process owns subset of Rx and Tx queues. +* **Introduced dmadev library with:** + + * Device allocation APIs. + Removed Items ------------- diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build new file mode 100644 index 0000000000..a24c56d8ff --- /dev/null +++ b/drivers/dma/meson.build @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2021 HiSilicon Limited + +drivers = [] diff --git a/drivers/meson.build b/drivers/meson.build index 3d08540581..b7d680868a 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -18,6 +18,7 @@ subdirs = [ 'vdpa', # depends on common, bus and mempool. 'event', # depends on common, bus, mempool and net. 'baseband', # depends on common and bus. + 'dma', # depends on common and bus. ] if meson.is_cross_build() diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build new file mode 100644 index 0000000000..d2fc85e8c7 --- /dev/null +++ b/lib/dmadev/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2021 HiSilicon Limited. + +sources = files('rte_dmadev.c') +headers = files('rte_dmadev.h') +indirect_headers += files('rte_dmadev_core.h') +driver_sdk_headers += files('rte_dmadev_pmd.h') diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c new file mode 100644 index 0000000000..96af3f0772 --- /dev/null +++ b/lib/dmadev/rte_dmadev.c @@ -0,0 +1,263 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + * Copyright(c) 2021 Intel Corporation + */ + +#include + +#include +#include +#include +#include +#include +#include + +#include "rte_dmadev.h" +#include "rte_dmadev_pmd.h" + +struct rte_dma_dev *rte_dma_devices; +static int16_t dma_devices_max; + +RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO); +#define RTE_DMA_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, rte_dma_logtype, "%s(): " fmt "\n", \ + __func__, ##args) + +/* Macros to check for valid device id */ +#define RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ + if (!rte_dma_is_valid(dev_id)) { \ + RTE_DMA_LOG(ERR, "Invalid dev_id=%d", dev_id); \ + return retval; \ + } \ +} while (0) + +int +rte_dma_dev_max(size_t dev_max) +{ + /* This function may be called before rte_eal_init(), so no rte library + * function can be called in this function. 
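+	 * e.g. a hypothetical application that needs 128 dmadevs can call
+	 * rte_dma_dev_max(128) before rte_eal_init() to raise the limit.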
+ */ + if (dev_max == 0 || dev_max > INT16_MAX) + return -EINVAL; + + if (dma_devices_max > 0) + return -EINVAL; + + dma_devices_max = dev_max; + + return 0; +} + +static int +dma_check_name(const char *name) +{ + size_t name_len; + + if (name == NULL) { + RTE_DMA_LOG(ERR, "Name can't be NULL"); + return -EINVAL; + } + + name_len = strnlen(name, RTE_DEV_NAME_MAX_LEN); + if (name_len == 0) { + RTE_DMA_LOG(ERR, "Zero length DMA device name"); + return -EINVAL; + } + if (name_len >= RTE_DEV_NAME_MAX_LEN) { + RTE_DMA_LOG(ERR, "DMA device name is too long"); + return -EINVAL; + } + + return 0; +} + +static int16_t +dma_find_free_dev(void) +{ + int16_t i; + + if (rte_dma_devices == NULL) + return -1; + + for (i = 0; i < dma_devices_max; i++) { + if (rte_dma_devices[i].dev_name[0] == '\0') + return i; + } + + return -1; +} + +static struct rte_dma_dev* +dma_find(const char *name) +{ + int16_t i; + + if (rte_dma_devices == NULL) + return NULL; + + for (i = 0; i < dma_devices_max; i++) { + if ((rte_dma_devices[i].state != RTE_DMA_DEV_UNUSED) && + (!strcmp(name, rte_dma_devices[i].dev_name))) + return &rte_dma_devices[i]; + } + + return NULL; +} + +static int +dma_process_data_prepare(void) +{ + size_t size; + void *ptr; + + if (rte_dma_devices != NULL) + return 0; + + /* The return value of malloc may not be aligned to the cache line. + * Therefore, extra memory is applied for realignment. + * note: We do not call posix_memalign/aligned_alloc because it is + * version dependent on libc. + */ + size = dma_devices_max * sizeof(struct rte_dma_dev) + + RTE_CACHE_LINE_SIZE; + ptr = malloc(size); + if (ptr == NULL) + return -ENOMEM; + memset(ptr, 0, size); + + rte_dma_devices = RTE_PTR_ALIGN(ptr, RTE_CACHE_LINE_SIZE); + + return 0; +} + +static int +dma_data_prepare(void) +{ + if (dma_devices_max == 0) + dma_devices_max = RTE_DMADEV_DEFAULT_MAX_DEVS; + return dma_process_data_prepare(); +} + +static struct rte_dma_dev * +dma_allocate(const char *name, int numa_node, size_t private_data_size) +{ + struct rte_dma_dev *dev; + void *dev_private; + int16_t dev_id; + int ret; + + ret = dma_data_prepare(); + if (ret < 0) { + RTE_DMA_LOG(ERR, "Cannot initialize dmadevs data"); + return NULL; + } + + dev = dma_find(name); + if (dev != NULL) { + RTE_DMA_LOG(ERR, "DMA device already allocated"); + return NULL; + } + + dev_private = rte_zmalloc_socket(name, private_data_size, + RTE_CACHE_LINE_SIZE, numa_node); + if (dev_private == NULL) { + RTE_DMA_LOG(ERR, "Cannot allocate private data"); + return NULL; + } + + dev_id = dma_find_free_dev(); + if (dev_id < 0) { + RTE_DMA_LOG(ERR, "Reached maximum number of DMA devices"); + rte_free(dev_private); + return NULL; + } + + dev = &rte_dma_devices[dev_id]; + dev->dev_private = dev_private; + rte_strscpy(dev->dev_name, name, sizeof(dev->dev_name)); + dev->dev_id = dev_id; + dev->numa_node = numa_node; + dev->dev_private = dev_private; + + return dev; +} + +static void +dma_release(struct rte_dma_dev *dev) +{ + rte_free(dev->dev_private); + memset(dev, 0, sizeof(struct rte_dma_dev)); +} + +struct rte_dma_dev * +rte_dma_pmd_allocate(const char *name, int numa_node, size_t private_data_size) +{ + struct rte_dma_dev *dev; + + if (dma_check_name(name) != 0 || private_data_size == 0) + return NULL; + + dev = dma_allocate(name, numa_node, private_data_size); + if (dev == NULL) + return NULL; + + dev->state = RTE_DMA_DEV_REGISTERED; + + return dev; +} + +int +rte_dma_pmd_release(const char *name) +{ + struct rte_dma_dev *dev; + + if (dma_check_name(name) != 0) + return -EINVAL; + + 
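+	/* Look up the slot registered under this name; unused slots never match. */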
dev = dma_find(name); + if (dev == NULL) + return -EINVAL; + + dma_release(dev); + return 0; +} + +int +rte_dma_get_dev_id(const char *name) +{ + struct rte_dma_dev *dev; + + if (dma_check_name(name) != 0) + return -EINVAL; + + dev = dma_find(name); + if (dev == NULL) + return -EINVAL; + + return dev->dev_id; +} + +bool +rte_dma_is_valid(int16_t dev_id) +{ + return (dev_id >= 0) && (dev_id < dma_devices_max) && + rte_dma_devices != NULL && + rte_dma_devices[dev_id].state != RTE_DMA_DEV_UNUSED; +} + +uint16_t +rte_dma_count_avail(void) +{ + uint16_t count = 0; + uint16_t i; + + if (rte_dma_devices == NULL) + return count; + + for (i = 0; i < dma_devices_max; i++) { + if (rte_dma_devices[i].state != RTE_DMA_DEV_UNUSED) + count++; + } + + return count; +} diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h new file mode 100644 index 0000000000..17dc0d1226 --- /dev/null +++ b/lib/dmadev/rte_dmadev.h @@ -0,0 +1,134 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + * Copyright(c) 2021 Intel Corporation + * Copyright(c) 2021 Marvell International Ltd + * Copyright(c) 2021 SmartShare Systems + */ + +#ifndef RTE_DMADEV_H +#define RTE_DMADEV_H + +/** + * @file rte_dmadev.h + * + * DMA (Direct Memory Access) device API. + * + * The DMA framework is built on the following model: + * + * --------------- --------------- --------------- + * | virtual DMA | | virtual DMA | | virtual DMA | + * | channel | | channel | | channel | + * --------------- --------------- --------------- + * | | | + * ------------------ | + * | | + * ------------ ------------ + * | dmadev | | dmadev | + * ------------ ------------ + * | | + * ------------------ ------------------ + * | HW DMA channel | | HW DMA channel | + * ------------------ ------------------ + * | | + * -------------------------------- + * | + * --------------------- + * | HW DMA Controller | + * --------------------- + * + * The DMA controller could have multiple HW-DMA-channels (aka. HW-DMA-queues), + * each HW-DMA-channel should be represented by a dmadev. + * + * The dmadev could create multiple virtual DMA channels, each virtual DMA + * channel represents a different transfer context. The DMA operation request + * must be submitted to the virtual DMA channel. e.g. Application could create + * virtual DMA channel 0 for memory-to-memory transfer scenario, and create + * virtual DMA channel 1 for memory-to-device transfer scenario. + * + * The dmadev are dynamically allocated by rte_dma_pmd_allocate() during the + * PCI/SoC device probing phase performed at EAL initialization time. And could + * be released by rte_dma_pmd_release() during the PCI/SoC device removing + * phase. + * + * This framework uses 'int16_t dev_id' as the device identifier of a dmadev, + * and 'uint16_t vchan' as the virtual DMA channel identifier in one dmadev. + * + */ + +#include + +#include +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Configure the maximum number of dmadevs. + * @note This function can be invoked before the primary process rte_eal_init() + * to change the maximum number of dmadevs. + * + * @param dev_max + * maximum number of dmadevs. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_dev_max(size_t dev_max); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. 
+ * + * Get the device identifier for the named DMA device. + * + * @param name + * DMA device name. + * + * @return + * Returns DMA device identifier on success. + * - <0: Failure to find named DMA device. + */ +__rte_experimental +int rte_dma_get_dev_id(const char *name); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * @param dev_id + * DMA device index. + * + * @return + * - If the device index is valid (true) or not (false). + */ +__rte_experimental +bool rte_dma_is_valid(int16_t dev_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Get the total number of DMA devices that have been successfully + * initialised. + * + * @return + * The total number of usable DMA devices. + */ +__rte_experimental +uint16_t rte_dma_count_avail(void); + +#include "rte_dmadev_core.h" + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_DMADEV_H */ diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h new file mode 100644 index 0000000000..5ed96853b2 --- /dev/null +++ b/lib/dmadev/rte_dmadev_core.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + * Copyright(c) 2021 Intel Corporation + */ + +#ifndef RTE_DMADEV_CORE_H +#define RTE_DMADEV_CORE_H + +/** + * @file + * + * DMA Device internal header. + * + * This header contains internal data types, that are used by the DMA devices + * in order to expose their ops to the class. + * + * Applications should not use these API directly. + * + */ + +/** + * Possible states of a DMA device. + * + * @see struct rte_dmadev::state + */ +enum rte_dma_dev_state { + RTE_DMA_DEV_UNUSED = 0, /**< Device is unused. */ + /** Device is registered, but not ready to be used. */ + RTE_DMA_DEV_REGISTERED, + /** Device is ready for use. This is set by the PMD. */ + RTE_DMA_DEV_READY, +}; + +/** + * @internal + * The generic data structure associated with each DMA device. + */ +struct rte_dma_dev { + char dev_name[RTE_DEV_NAME_MAX_LEN]; /**< Unique identifier name */ + int16_t dev_id; /**< Device [external] identifier. */ + int16_t numa_node; /**< Local NUMA memory ID. -1 if unknown. */ + void *dev_private; /**< PMD-specific private data. */ + /** Device info which supplied during device initialization. */ + struct rte_device *device; + enum rte_dma_dev_state state; /**< Flag indicating the device state. */ + uint64_t reserved[2]; /**< Reserved for future fields. */ +} __rte_cache_aligned; + +extern struct rte_dma_dev *rte_dma_devices; + +#endif /* RTE_DMADEV_CORE_H */ diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h new file mode 100644 index 0000000000..02281c74fd --- /dev/null +++ b/lib/dmadev/rte_dmadev_pmd.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + */ + +#ifndef RTE_DMADEV_PMD_H +#define RTE_DMADEV_PMD_H + +/** + * @file + * + * DMA Device PMD APIs + * + * Driver facing APIs for a DMA device. These are not to be called directly by + * any application. + */ + +#include "rte_dmadev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @internal + * Allocates a new dmadev slot for an DMA device and returns the pointer + * to that slot for the driver to use. + * + * @param name + * DMA device name. + * @param numa_node + * Driver's private data's numa node. + * @param private_data_size + * Driver's private data size. + * + * @return + * A pointer to the DMA device slot case of success, + * NULL otherwise. 
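+ *
+ * @note Typically called from a PMD's probe callback during EAL device
+ * enumeration, as described in the dmadev programmer's guide.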
+ */
+__rte_internal
+struct rte_dma_dev *rte_dma_pmd_allocate(const char *name, int numa_node,
+				size_t private_data_size);
+
+/**
+ * @internal
+ * Release the specified dmadev.
+ *
+ * @param name
+ *   DMA device name.
+ *
+ * @return
+ *   - 0 on success, negative on error.
+ */
+__rte_internal
+int rte_dma_pmd_release(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_DMADEV_PMD_H */
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
new file mode 100644
index 0000000000..56ea0332cb
--- /dev/null
+++ b/lib/dmadev/version.map
@@ -0,0 +1,20 @@
+EXPERIMENTAL {
+	global:
+
+	rte_dma_count_avail;
+	rte_dma_dev_max;
+	rte_dma_get_dev_id;
+	rte_dma_is_valid;
+
+	local: *;
+};
+
+INTERNAL {
+	global:
+
+	rte_dma_devices;
+	rte_dma_pmd_allocate;
+	rte_dma_pmd_release;
+
+	local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 1673ca4323..3dd920f5c5 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -45,6 +45,7 @@ libraries = [
         'pdump',
         'rawdev',
         'regexdev',
+        'dmadev',
         'rib',
         'reorder',
         'sched',

From patchwork Fri Sep 24 10:53:53 2021
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 24 Sep 2021 18:53:53 +0800
Message-ID: <20210924105357.15386-3-fengchengwen@huawei.com>
In-Reply-To: <20210924105357.15386-1-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v23 2/6] dmadev: add control plane function support

This patch adds control plane functions for dmadev.
Signed-off-by: Chengwen Feng Acked-by: Bruce Richardson Acked-by: Morten Brørup Reviewed-by: Kevin Laatz Reviewed-by: Conor Walsh --- doc/guides/prog_guide/dmadev.rst | 41 +++ doc/guides/rel_notes/release_21_11.rst | 1 + lib/dmadev/rte_dmadev.c | 359 ++++++++++++++++++ lib/dmadev/rte_dmadev.h | 480 +++++++++++++++++++++++++ lib/dmadev/rte_dmadev_core.h | 62 +++- lib/dmadev/version.map | 9 + 6 files changed, 951 insertions(+), 1 deletion(-) diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst index 822282213c..c2b0b0420b 100644 --- a/doc/guides/prog_guide/dmadev.rst +++ b/doc/guides/prog_guide/dmadev.rst @@ -62,3 +62,44 @@ identifiers: - A device name used to designate the DMA device in console messages, for administration or debugging purposes. + + +Device Configuration +~~~~~~~~~~~~~~~~~~~~ + +The rte_dma_configure API is used to configure a DMA device. + +.. code-block:: c + + int rte_dma_configure(int16_t dev_id, + const struct rte_dma_conf *dev_conf); + +The ``rte_dma_conf`` structure is used to pass the configuration parameters +for the DMA device for example the number of virtual DMA channels to set up, +indication of whether to enable silent mode. + + +Configuration of Virtual DMA Channels +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The rte_dma_vchan_setup API is used to configure a virtual DMA channel. + +.. code-block:: c + + int rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan, + const struct rte_dma_vchan_conf *conf); + +The ``rte_dma_vchan_conf`` structure is used to pass the configuration +parameters for the virtual DMA channel for example transfer direction, number of +descriptor for the virtual DMA channel, source device access port parameter, +destination device access port parameter. + + +Device Features and Capabilities +-------------------------------- + +DMA devices may support different feature sets. The ``rte_dma_info_get`` API +can be used to get the device info and supported features. + +Silent mode is a special device capability which does not require the +application to invoke dequeue APIs. diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 74639f1e81..0aceaa8837 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -109,6 +109,7 @@ New Features * **Introduced dmadev library with:** * Device allocation APIs. + * Control plane APIs. 
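
To make the configuration sequence described in the programmer's guide above concrete, here is a minimal bring-up sketch using the control-plane APIs and structures this patch adds. It assumes a valid ``dev_id`` of a mem-to-mem capable device; ``NB_DESC`` and the helper name are arbitrary illustrative choices, and error handling simply propagates the failing status:

.. code-block:: c

   #include <errno.h>

   #include <rte_common.h>
   #include <rte_dmadev.h>

   #define NB_DESC 1024 /* illustrative; must fall within [min_desc, max_desc] */

   /* Query capabilities, configure one mem-to-mem vchan, start the device. */
   static int
   dma_setup_one_vchan(int16_t dev_id)
   {
           struct rte_dma_info info;
           struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
           struct rte_dma_vchan_conf qconf = {
                   .direction = RTE_DMA_DIR_MEM_TO_MEM,
                   /* src_port/dst_port stay RTE_DMA_PORT_NONE for mem2mem */
           };
           int ret;

           ret = rte_dma_info_get(dev_id, &info);
           if (ret != 0)
                   return ret;
           if (!(info.dev_capa & RTE_DMA_CAPA_MEM_TO_MEM))
                   return -ENOTSUP;
           /* Clamp the requested ring size to the device's reported maximum. */
           qconf.nb_desc = RTE_MIN(NB_DESC, info.max_desc);

           ret = rte_dma_configure(dev_id, &dev_conf);
           if (ret != 0)
                   return ret;
           ret = rte_dma_vchan_setup(dev_id, 0, &qconf);
           if (ret != 0)
                   return ret;
           return rte_dma_start(dev_id);
   }
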
Removed Items diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 96af3f0772..e0134b9eec 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -218,6 +218,9 @@ rte_dma_pmd_release(const char *name) if (dev == NULL) return -EINVAL; + if (dev->state == RTE_DMA_DEV_READY) + return rte_dma_close(dev->dev_id); + dma_release(dev); return 0; } @@ -261,3 +264,359 @@ rte_dma_count_avail(void) return count; } + +int +rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info) +{ + const struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + int ret; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (dev_info == NULL) + return -EINVAL; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_info_get, -ENOTSUP); + memset(dev_info, 0, sizeof(struct rte_dma_info)); + ret = (*dev->dev_ops->dev_info_get)(dev, dev_info, + sizeof(struct rte_dma_info)); + if (ret != 0) + return ret; + + dev_info->numa_node = dev->device->numa_node; + dev_info->nb_vchans = dev->dev_conf.nb_vchans; + + return 0; +} + +int +rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + struct rte_dma_info dev_info; + int ret; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (dev_conf == NULL) + return -EINVAL; + + if (dev->dev_started != 0) { + RTE_DMA_LOG(ERR, + "Device %d must be stopped to allow configuration", + dev_id); + return -EBUSY; + } + + ret = rte_dma_info_get(dev_id, &dev_info); + if (ret != 0) { + RTE_DMA_LOG(ERR, "Device %d get device info fail", dev_id); + return -EINVAL; + } + if (dev_conf->nb_vchans == 0) { + RTE_DMA_LOG(ERR, + "Device %d configure zero vchans", dev_id); + return -EINVAL; + } + if (dev_conf->nb_vchans > dev_info.max_vchans) { + RTE_DMA_LOG(ERR, + "Device %d configure too many vchans", dev_id); + return -EINVAL; + } + if (dev_conf->enable_silent && + !(dev_info.dev_capa & RTE_DMA_CAPA_SILENT)) { + RTE_DMA_LOG(ERR, "Device %d don't support silent", dev_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP); + ret = (*dev->dev_ops->dev_configure)(dev, dev_conf, + sizeof(struct rte_dma_conf)); + if (ret == 0) + memcpy(&dev->dev_conf, dev_conf, sizeof(struct rte_dma_conf)); + + return ret; +} + +int +rte_dma_start(int16_t dev_id) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + int ret; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + if (dev->dev_conf.nb_vchans == 0) { + RTE_DMA_LOG(ERR, "Device %d must be configured first", dev_id); + return -EINVAL; + } + + if (dev->dev_started != 0) { + RTE_DMA_LOG(WARNING, "Device %d already started", dev_id); + return 0; + } + + if (dev->dev_ops->dev_start == NULL) + goto mark_started; + + ret = (*dev->dev_ops->dev_start)(dev); + if (ret != 0) + return ret; + +mark_started: + dev->dev_started = 1; + return 0; +} + +int +rte_dma_stop(int16_t dev_id) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + int ret; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + if (dev->dev_started == 0) { + RTE_DMA_LOG(WARNING, "Device %d already stopped", dev_id); + return 0; + } + + if (dev->dev_ops->dev_stop == NULL) + goto mark_stopped; + + ret = (*dev->dev_ops->dev_stop)(dev); + if (ret != 0) + return ret; + +mark_stopped: + dev->dev_started = 0; + return 0; +} + +int +rte_dma_close(int16_t dev_id) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + int ret; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + /* Device must be stopped before it can be closed */ + if 
(dev->dev_started == 1) { + RTE_DMA_LOG(ERR, + "Device %d must be stopped before closing", dev_id); + return -EBUSY; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP); + ret = (*dev->dev_ops->dev_close)(dev); + if (ret == 0) + dma_release(dev); + + return ret; +} + +int +rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan, + const struct rte_dma_vchan_conf *conf) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + struct rte_dma_info dev_info; + bool src_is_dev, dst_is_dev; + int ret; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (conf == NULL) + return -EINVAL; + + if (dev->dev_started != 0) { + RTE_DMA_LOG(ERR, + "Device %d must be stopped to allow configuration", + dev_id); + return -EBUSY; + } + + ret = rte_dma_info_get(dev_id, &dev_info); + if (ret != 0) { + RTE_DMA_LOG(ERR, "Device %d get device info fail", dev_id); + return -EINVAL; + } + if (dev->dev_conf.nb_vchans == 0) { + RTE_DMA_LOG(ERR, "Device %d must be configured first", dev_id); + return -EINVAL; + } + if (vchan >= dev_info.nb_vchans) { + RTE_DMA_LOG(ERR, "Device %d vchan out range!", dev_id); + return -EINVAL; + } + if (conf->direction != RTE_DMA_DIR_MEM_TO_MEM && + conf->direction != RTE_DMA_DIR_MEM_TO_DEV && + conf->direction != RTE_DMA_DIR_DEV_TO_MEM && + conf->direction != RTE_DMA_DIR_DEV_TO_DEV) { + RTE_DMA_LOG(ERR, "Device %d direction invalid!", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_MEM_TO_MEM && + !(dev_info.dev_capa & RTE_DMA_CAPA_MEM_TO_MEM)) { + RTE_DMA_LOG(ERR, + "Device %d don't support mem2mem transfer", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_MEM_TO_DEV && + !(dev_info.dev_capa & RTE_DMA_CAPA_MEM_TO_DEV)) { + RTE_DMA_LOG(ERR, + "Device %d don't support mem2dev transfer", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_DEV_TO_MEM && + !(dev_info.dev_capa & RTE_DMA_CAPA_DEV_TO_MEM)) { + RTE_DMA_LOG(ERR, + "Device %d don't support dev2mem transfer", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_DEV_TO_DEV && + !(dev_info.dev_capa & RTE_DMA_CAPA_DEV_TO_DEV)) { + RTE_DMA_LOG(ERR, + "Device %d don't support dev2dev transfer", dev_id); + return -EINVAL; + } + if (conf->nb_desc < dev_info.min_desc || + conf->nb_desc > dev_info.max_desc) { + RTE_DMA_LOG(ERR, + "Device %d number of descriptors invalid", dev_id); + return -EINVAL; + } + src_is_dev = conf->direction == RTE_DMA_DIR_DEV_TO_MEM || + conf->direction == RTE_DMA_DIR_DEV_TO_DEV; + if ((conf->src_port.port_type == RTE_DMA_PORT_NONE && src_is_dev) || + (conf->src_port.port_type != RTE_DMA_PORT_NONE && !src_is_dev)) { + RTE_DMA_LOG(ERR, "Device %d source port type invalid", dev_id); + return -EINVAL; + } + dst_is_dev = conf->direction == RTE_DMA_DIR_MEM_TO_DEV || + conf->direction == RTE_DMA_DIR_DEV_TO_DEV; + if ((conf->dst_port.port_type == RTE_DMA_PORT_NONE && dst_is_dev) || + (conf->dst_port.port_type != RTE_DMA_PORT_NONE && !dst_is_dev)) { + RTE_DMA_LOG(ERR, + "Device %d destination port type invalid", dev_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_setup, -ENOTSUP); + return (*dev->dev_ops->vchan_setup)(dev, vchan, conf, + sizeof(struct rte_dma_vchan_conf)); +} + +int +rte_dma_stats_get(int16_t dev_id, uint16_t vchan, struct rte_dma_stats *stats) +{ + const struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (stats == NULL) + return -EINVAL; + if (vchan >= dev->dev_conf.nb_vchans && + vchan != RTE_DMA_ALL_VCHAN) { + RTE_DMA_LOG(ERR, 
+ "Device %d vchan %u out of range", dev_id, vchan); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP); + memset(stats, 0, sizeof(struct rte_dma_stats)); + return (*dev->dev_ops->stats_get)(dev, vchan, stats, + sizeof(struct rte_dma_stats)); +} + +int +rte_dma_stats_reset(int16_t dev_id, uint16_t vchan) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (vchan >= dev->dev_conf.nb_vchans && + vchan != RTE_DMA_ALL_VCHAN) { + RTE_DMA_LOG(ERR, + "Device %d vchan %u out of range", dev_id, vchan); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_reset, -ENOTSUP); + return (*dev->dev_ops->stats_reset)(dev, vchan); +} + +static const char * +dma_capability_name(uint64_t capability) +{ + static const struct { + uint64_t capability; + const char *name; + } capa_names[] = { + { RTE_DMA_CAPA_MEM_TO_MEM, "mem2mem" }, + { RTE_DMA_CAPA_MEM_TO_DEV, "mem2dev" }, + { RTE_DMA_CAPA_DEV_TO_MEM, "dev2mem" }, + { RTE_DMA_CAPA_DEV_TO_DEV, "dev2dev" }, + { RTE_DMA_CAPA_SVA, "sva" }, + { RTE_DMA_CAPA_SILENT, "silent" }, + { RTE_DMA_CAPA_OPS_COPY, "copy" }, + { RTE_DMA_CAPA_OPS_COPY_SG, "copy_sg" }, + { RTE_DMA_CAPA_OPS_FILL, "fill" }, + }; + + const char *name = "unknown"; + uint32_t i; + + for (i = 0; i < RTE_DIM(capa_names); i++) { + if (capability == capa_names[i].capability) { + name = capa_names[i].name; + break; + } + } + + return name; +} + +static void +dma_dump_capability(FILE *f, uint64_t dev_capa) +{ + uint64_t capa; + + (void)fprintf(f, " dev_capa: 0x%" PRIx64 " -", dev_capa); + while (dev_capa > 0) { + capa = 1ull << __builtin_ctzll(dev_capa); + (void)fprintf(f, " %s", dma_capability_name(capa)); + dev_capa &= ~capa; + } + (void)fprintf(f, "\n"); +} + +int +rte_dma_dump(int16_t dev_id, FILE *f) +{ + const struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + struct rte_dma_info dev_info; + int ret; + + RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (f == NULL) + return -EINVAL; + + ret = rte_dma_info_get(dev_id, &dev_info); + if (ret != 0) { + RTE_DMA_LOG(ERR, "Device %d get device info fail", dev_id); + return -EINVAL; + } + + (void)fprintf(f, "DMA Dev %d, '%s' [%s]\n", + dev->dev_id, + dev->dev_name, + dev->dev_started ? "started" : "stopped"); + dma_dump_capability(f, dev_info.dev_capa); + (void)fprintf(f, " max_vchans_supported: %u\n", dev_info.max_vchans); + (void)fprintf(f, " nb_vchans_configured: %u\n", dev_info.nb_vchans); + (void)fprintf(f, " silent_mode: %s\n", + dev->dev_conf.enable_silent ? "on" : "off"); + + if (dev->dev_ops->dev_dump != NULL) + return (*dev->dev_ops->dev_dump)(dev, f); + + return 0; +} diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 17dc0d1226..5114c37446 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -53,6 +53,28 @@ * This framework uses 'int16_t dev_id' as the device identifier of a dmadev, * and 'uint16_t vchan' as the virtual DMA channel identifier in one dmadev. * + * The functions exported by the dmadev API to setup a device designated by its + * device identifier must be invoked in the following order: + * - rte_dma_configure() + * - rte_dma_vchan_setup() + * - rte_dma_start() + * + * If the application wants to change the configuration (i.e. invoke + * rte_dma_configure() or rte_dma_vchan_setup()), it must invoke + * rte_dma_stop() first to stop the device and then do the reconfiguration + * before invoking rte_dma_start() again. 
The dataplane functions should not + * be invoked when the device is stopped. + * + * Finally, an application can close a dmadev by invoking the rte_dma_close() + * function. + * + * About MT-safe, all the functions of the dmadev API exported by a PMD are + * lock-free functions which assume to not be invoked in parallel on different + * logical cores to work on the same target dmadev object. + * @note Different virtual DMA channels on the same dmadev *DO NOT* support + * parallel invocation because these virtual DMA channels share the same + * HW-DMA-channel. + * */ #include @@ -125,6 +147,464 @@ bool rte_dma_is_valid(int16_t dev_id); __rte_experimental uint16_t rte_dma_count_avail(void); +/** DMA device support memory-to-memory transfer. + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_MEM_TO_MEM RTE_BIT64(0) +/** DMA device support memory-to-device transfer. + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_MEM_TO_DEV RTE_BIT64(1) +/** DMA device support device-to-memory transfer. + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_DEV_TO_MEM RTE_BIT64(2) +/** DMA device support device-to-device transfer. + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_DEV_TO_DEV RTE_BIT64(3) +/** DMA device support SVA which could use VA as DMA address. + * If device support SVA then application could pass any VA address like memory + * from rte_malloc(), rte_memzone(), malloc, stack memory. + * If device don't support SVA, then application should pass IOVA address which + * from rte_malloc(), rte_memzone(). + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_SVA RTE_BIT64(4) +/** DMA device support work in silent mode. + * In this mode, application don't required to invoke rte_dma_completed*() + * API. + * + * @see struct rte_dma_conf::silent_mode + */ +#define RTE_DMA_CAPA_SILENT RTE_BIT64(5) +/** DMA device support copy ops. + * This capability start with index of 32, so that it could leave gap between + * normal capability and ops capability. + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_OPS_COPY RTE_BIT64(32) +/** DMA device support scatter-gather list copy ops. + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_OPS_COPY_SG RTE_BIT64(33) +/** DMA device support fill ops. + * + * @see struct rte_dma_info::dev_capa + */ +#define RTE_DMA_CAPA_OPS_FILL RTE_BIT64(34) + +/** + * A structure used to retrieve the information of a DMA device. + * + * @see rte_dma_info_get + */ +struct rte_dma_info { + /** Device capabilities (RTE_DMA_CAPA_*). */ + uint64_t dev_capa; + /** Maximum number of virtual DMA channels supported. */ + uint16_t max_vchans; + /** Maximum allowed number of virtual DMA channel descriptors. */ + uint16_t max_desc; + /** Minimum allowed number of virtual DMA channel descriptors. */ + uint16_t min_desc; + /** Maximum number of source or destination scatter-gather entry + * supported. + * If the device does not support COPY_SG capability, this value can be + * zero. + * If the device supports COPY_SG capability, then rte_dma_copy_sg() + * parameter nb_src/nb_dst should not exceed this value. + */ + uint16_t max_sges; + /** NUMA node connection, -1 if unknown. */ + int16_t numa_node; + /** Number of virtual DMA channel configured. */ + uint16_t nb_vchans; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve information of a DMA device. + * + * @param dev_id + * The identifier of the device. 
+ * @param[out] dev_info + * A pointer to a structure of type *rte_dma_info* to be filled with the + * information of the device. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info); + +/** + * A structure used to configure a DMA device. + * + * @see rte_dma_configure + */ +struct rte_dma_conf { + /** The number of virtual DMA channels to set up for the DMA device. + * This value cannot be greater than the field 'max_vchans' of struct + * rte_dma_info which get from rte_dma_info_get(). + */ + uint16_t nb_vchans; + /** Indicates whether to enable silent mode. + * false-default mode, true-silent mode. + * This value can be set to true only when the SILENT capability is + * supported. + * + * @see RTE_DMA_CAPA_SILENT + */ + bool enable_silent; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Configure a DMA device. + * + * This function must be invoked first before any other function in the + * API. This function can also be re-invoked when a device is in the + * stopped state. + * + * @param dev_id + * The identifier of the device to configure. + * @param dev_conf + * The DMA device configuration structure encapsulated into rte_dma_conf + * object. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Start a DMA device. + * + * The device start step is the last one and consists of setting the DMA + * to start accepting jobs. + * + * @param dev_id + * The identifier of the device. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_start(int16_t dev_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Stop a DMA device. + * + * The device can be restarted with a call to rte_dma_start(). + * + * @param dev_id + * The identifier of the device. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_stop(int16_t dev_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Close a DMA device. + * + * The device cannot be restarted after this call. + * + * @param dev_id + * The identifier of the device. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_close(int16_t dev_id); + +/** + * DMA transfer direction defines. + * + * @see struct rte_dma_vchan_conf::direction + */ +enum rte_dma_direction { + /** DMA transfer direction - from memory to memory. + * + * @see struct rte_dma_vchan_conf::direction + */ + RTE_DMA_DIR_MEM_TO_MEM, + /** DMA transfer direction - from memory to device. + * In a typical scenario, the SoCs are installed on host servers as + * iNICs through the PCIe interface. In this case, the SoCs works in + * EP(endpoint) mode, it could initiate a DMA move request from memory + * (which is SoCs memory) to device (which is host memory). + * + * @see struct rte_dma_vchan_conf::direction + */ + RTE_DMA_DIR_MEM_TO_DEV, + /** DMA transfer direction - from device to memory. + * In a typical scenario, the SoCs are installed on host servers as + * iNICs through the PCIe interface. 
In this case, the SoCs works in + * EP(endpoint) mode, it could initiate a DMA move request from device + * (which is host memory) to memory (which is SoCs memory). + * + * @see struct rte_dma_vchan_conf::direction + */ + RTE_DMA_DIR_DEV_TO_MEM, + /** DMA transfer direction - from device to device. + * In a typical scenario, the SoCs are installed on host servers as + * iNICs through the PCIe interface. In this case, the SoCs works in + * EP(endpoint) mode, it could initiate a DMA move request from device + * (which is host memory) to the device (which is another host memory). + * + * @see struct rte_dma_vchan_conf::direction + */ + RTE_DMA_DIR_DEV_TO_DEV, +}; + +/** + * DMA access port type defines. + * + * @see struct rte_dma_port_param::port_type + */ +enum rte_dma_port_type { + RTE_DMA_PORT_NONE, + RTE_DMA_PORT_PCIE, /**< The DMA access port is PCIe. */ +}; + +/** + * A structure used to descript DMA access port parameters. + * + * @see struct rte_dma_vchan_conf::src_port + * @see struct rte_dma_vchan_conf::dst_port + */ +struct rte_dma_port_param { + /** The device access port type. + * + * @see enum rte_dma_port_type + */ + enum rte_dma_port_type port_type; + union { + /** PCIe access port parameters. + * + * The following model shows SoC's PCIe module connects to + * multiple PCIe hosts and multiple endpoints. The PCIe module + * has an integrated DMA controller. + * + * If the DMA wants to access the memory of host A, it can be + * initiated by PF1 in core0, or by VF0 of PF0 in core0. + * + * \code{.unparsed} + * System Bus + * | ----------PCIe module---------- + * | Bus + * | Interface + * | ----- ------------------ + * | | | | PCIe Core0 | + * | | | | | ----------- + * | | | | PF-0 -- VF-0 | | Host A | + * | | |--------| |- VF-1 |--------| Root | + * | | | | PF-1 | | Complex | + * | | | | PF-2 | ----------- + * | | | ------------------ + * | | | + * | | | ------------------ + * | | | | PCIe Core1 | + * | | | | | ----------- + * | | | | PF-0 -- VF-0 | | Host B | + * |-----| |--------| PF-1 -- VF-0 |--------| Root | + * | | | | |- VF-1 | | Complex | + * | | | | PF-2 | ----------- + * | | | ------------------ + * | | | + * | | | ------------------ + * | |DMA| | | ------ + * | | | | |--------| EP | + * | | |--------| PCIe Core2 | ------ + * | | | | | ------ + * | | | | |--------| EP | + * | | | | | ------ + * | ----- ------------------ + * + * \endcode + * + * @note If some fields can not be supported by the + * hardware/driver, then the driver ignores those fields. + * Please check driver-specific documentation for limitations + * and capablites. + */ + struct { + uint64_t coreid : 4; /**< PCIe core id used. */ + uint64_t pfid : 8; /**< PF id used. */ + uint64_t vfen : 1; /**< VF enable bit. */ + uint64_t vfid : 16; /**< VF id used. */ + /** The pasid filed in TLP packet. */ + uint64_t pasid : 20; + /** The attributes filed in TLP packet. */ + uint64_t attr : 3; + /** The processing hint filed in TLP packet. */ + uint64_t ph : 2; + /** The steering tag filed in TLP packet. */ + uint64_t st : 16; + } pcie; + }; + uint64_t reserved[2]; /**< Reserved for future fields. */ +}; + +/** + * A structure used to configure a virtual DMA channel. + * + * @see rte_dma_vchan_setup + */ +struct rte_dma_vchan_conf { + /** Transfer direction + * + * @see enum rte_dma_direction + */ + enum rte_dma_direction direction; + /** Number of descriptor for the virtual DMA channel */ + uint16_t nb_desc; + /** 1) Used to describes the device access port parameter in the + * device-to-memory transfer scenario. 
+ * 2) Used to describes the source device access port parameter in the + * device-to-device transfer scenario. + * + * @see struct rte_dma_port_param + */ + struct rte_dma_port_param src_port; + /** 1) Used to describes the device access port parameter in the + * memory-to-device transfer scenario. + * 2) Used to describes the destination device access port parameter in + * the device-to-device transfer scenario. + * + * @see struct rte_dma_port_param + */ + struct rte_dma_port_param dst_port; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Allocate and set up a virtual DMA channel. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of virtual DMA channel. The value must be in the range + * [0, nb_vchans - 1] previously supplied to rte_dma_configure(). + * @param conf + * The virtual DMA channel configuration structure encapsulated into + * rte_dma_vchan_conf object. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan, + const struct rte_dma_vchan_conf *conf); + +/** + * A structure used to retrieve statistics. + * + * @see rte_dma_stats_get + */ +struct rte_dma_stats { + /** Count of operations which were submitted to hardware. */ + uint64_t submitted; + /** Count of operations which were completed, including successful and + * failed completions. + */ + uint64_t completed; + /** Count of operations which failed to complete. */ + uint64_t errors; +}; + +/** + * Special ID, which is used to represent all virtual DMA channels. + * + * @see rte_dma_stats_get + * @see rte_dma_stats_reset + */ +#define RTE_DMA_ALL_VCHAN 0xFFFFu + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve basic statistics of a or all virtual DMA channel(s). + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of virtual DMA channel. + * If equal RTE_DMA_ALL_VCHAN means all channels. + * @param[out] stats + * The basic statistics structure encapsulated into rte_dma_stats + * object. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_stats_get(int16_t dev_id, uint16_t vchan, + struct rte_dma_stats *stats); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Reset basic statistics of a or all virtual DMA channel(s). + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of virtual DMA channel. + * If equal RTE_DMA_ALL_VCHAN means all channels. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Dump DMA device info. + * + * @param dev_id + * The identifier of the device. + * @param f + * The file to write the output to. + * + * @return + * 0 on success. Otherwise negative value is returned. + */ +__rte_experimental +int rte_dma_dump(int16_t dev_id, FILE *f); + #include "rte_dmadev_core.h" #ifdef __cplusplus diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h index 5ed96853b2..d6f885527a 100644 --- a/lib/dmadev/rte_dmadev_core.h +++ b/lib/dmadev/rte_dmadev_core.h @@ -18,6 +18,43 @@ * */ +struct rte_dma_dev; + +/** @internal Used to get device information of a device. 
*/ +typedef int (*rte_dma_info_get_t)(const struct rte_dma_dev *dev, + struct rte_dma_info *dev_info, + uint32_t info_sz); + +/** @internal Used to configure a device. */ +typedef int (*rte_dma_configure_t)(struct rte_dma_dev *dev, + const struct rte_dma_conf *dev_conf, + uint32_t conf_sz); + +/** @internal Used to start a configured device. */ +typedef int (*rte_dma_start_t)(struct rte_dma_dev *dev); + +/** @internal Used to stop a configured device. */ +typedef int (*rte_dma_stop_t)(struct rte_dma_dev *dev); + +/** @internal Used to close a configured device. */ +typedef int (*rte_dma_close_t)(struct rte_dma_dev *dev); + +/** @internal Used to allocate and set up a virtual DMA channel. */ +typedef int (*rte_dma_vchan_setup_t)(struct rte_dma_dev *dev, uint16_t vchan, + const struct rte_dma_vchan_conf *conf, + uint32_t conf_sz); + +/** @internal Used to retrieve basic statistics. */ +typedef int (*rte_dma_stats_get_t)(const struct rte_dma_dev *dev, + uint16_t vchan, struct rte_dma_stats *stats, + uint32_t stats_sz); + +/** @internal Used to reset basic statistics. */ +typedef int (*rte_dma_stats_reset_t)(struct rte_dma_dev *dev, uint16_t vchan); + +/** @internal Used to dump internal information. */ +typedef int (*rte_dma_dump_t)(const struct rte_dma_dev *dev, FILE *f); + /** * Possible states of a DMA device. * @@ -32,7 +69,26 @@ enum rte_dma_dev_state { }; /** - * @internal + * DMA device operations function pointer table. + * + * @see struct rte_dma_dev:dev_ops + */ +struct rte_dma_dev_ops { + rte_dma_info_get_t dev_info_get; + rte_dma_configure_t dev_configure; + rte_dma_start_t dev_start; + rte_dma_stop_t dev_stop; + rte_dma_close_t dev_close; + + rte_dma_vchan_setup_t vchan_setup; + + rte_dma_stats_get_t stats_get; + rte_dma_stats_reset_t stats_reset; + + rte_dma_dump_t dev_dump; +}; + +/** @internal * The generic data structure associated with each DMA device. */ struct rte_dma_dev { @@ -40,9 +96,13 @@ struct rte_dma_dev { int16_t dev_id; /**< Device [external] identifier. */ int16_t numa_node; /**< Local NUMA memory ID. -1 if unknown. */ void *dev_private; /**< PMD-specific private data. */ + /** Functions exported by PMD. */ + const struct rte_dma_dev_ops *dev_ops; + struct rte_dma_conf dev_conf; /**< DMA device configuration. */ /** Device info which supplied during device initialization. */ struct rte_device *device; enum rte_dma_dev_state state; /**< Flag indicating the device state. */ + uint8_t dev_started : 1; /**< Device state: STARTED(1)/STOPPED(0). */ uint64_t reserved[2]; /**< Reserved for future fields. 
 */
} __rte_cache_aligned;

diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 56ea0332cb..6b7939b10f 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -1,10 +1,19 @@
 EXPERIMENTAL {
 	global:

+	rte_dma_close;
+	rte_dma_configure;
 	rte_dma_count_avail;
 	rte_dma_dev_max;
+	rte_dma_dump;
 	rte_dma_get_dev_id;
+	rte_dma_info_get;
 	rte_dma_is_valid;
+	rte_dma_start;
+	rte_dma_stats_get;
+	rte_dma_stats_reset;
+	rte_dma_stop;
+	rte_dma_vchan_setup;

 	local: *;
 };

From patchwork Fri Sep 24 10:53:54 2021
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 24 Sep 2021 18:53:54 +0800
Message-ID: <20210924105357.15386-4-fengchengwen@huawei.com>
In-Reply-To: <20210924105357.15386-1-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v23 3/6] dmadev: add data plane function support

This patch adds data plane functions for dmadev.

Signed-off-by: Chengwen Feng
Acked-by: Bruce Richardson
Acked-by: Morten Brørup
Reviewed-by: Kevin Laatz
Reviewed-by: Conor Walsh
---
 doc/guides/prog_guide/dmadev.rst       |  22 ++
 doc/guides/rel_notes/release_21_11.rst |   1 +
 lib/dmadev/rte_dmadev.h                | 460 +++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h           |  51 ++-
 lib/dmadev/version.map                 |   6 +
 5 files changed, 537 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst
index c2b0b0420b..de8b599d96 100644
--- a/doc/guides/prog_guide/dmadev.rst
+++ b/doc/guides/prog_guide/dmadev.rst
@@ -103,3 +103,25 @@ can be used to get the device info and supported features.
Silent mode is a special device capability which does not require the application to invoke dequeue APIs. + + +Enqueue / Dequeue APIs +~~~~~~~~~~~~~~~~~~~~~~ + +Enqueue APIs such as ``rte_dma_copy`` and ``rte_dma_fill`` can be used to +enqueue operations to hardware. If an enqueue is successful, a ``ring_idx`` is +returned. This ``ring_idx`` can be used by applications to track per-operation +metadata in an application-defined circular ring. + +The ``rte_dma_submit`` API is used to issue a doorbell to the hardware. +Alternatively, the ``RTE_DMA_OP_FLAG_SUBMIT`` flag can be passed to the enqueue +APIs to also issue the doorbell to hardware. + +There are two dequeue APIs, ``rte_dma_completed`` and +``rte_dma_completed_status``, which are used to obtain the results of the +enqueue requests. ``rte_dma_completed`` returns the number of successfully +completed operations. ``rte_dma_completed_status`` returns the number of +completed operations along with the status of each operation (filled into the +``status`` array passed by the user). These two APIs can also return the last +completed operation's ``ring_idx``, which can help users track operations +within their own application-defined rings, as in the sketch below.
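+
+A minimal polling sketch combining these calls (``dev_id``, ``vchan`` and the
+IOVA variables are illustrative placeholders; error handling is elided):
+
+.. code-block:: C
+
+   uint16_t last_idx;
+   bool has_error;
+   int idx;
+
+   /* Enqueue one copy and ring the doorbell in the same call. */
+   idx = rte_dma_copy(dev_id, vchan, src_iova, dst_iova, length,
+                      RTE_DMA_OP_FLAG_SUBMIT);
+   if (idx >= 0) {
+       /* Busy-poll until the operation is reported as completed. */
+       while (rte_dma_completed(dev_id, vchan, 1, &last_idx,
+                                &has_error) == 0)
+           ;
+   }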
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 0aceaa8837..21b3c48257 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -110,6 +110,7 @@ New Features * Device allocation APIs. * Control plane APIs. + * Data plane APIs. Removed Items diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 5114c37446..84e30f7e61 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -59,6 +59,8 @@ * - rte_dma_vchan_setup() * - rte_dma_start() * + * Then, the application can invoke dataplane functions to process jobs. + * * If the application wants to change the configuration (i.e. invoke * rte_dma_configure() or rte_dma_vchan_setup()), it must invoke * rte_dma_stop() first to stop the device and then do the reconfiguration @@ -68,6 +70,77 @@ * Finally, an application can close a dmadev by invoking the rte_dma_close() * function. * + * The dataplane APIs include two parts: + * The first part is the submission of operation requests: + * - rte_dma_copy() + * - rte_dma_copy_sg() + * - rte_dma_fill() + * - rte_dma_submit() + * + * These APIs can work with different virtual DMA channels, which have + * different contexts. + * + * The first three APIs are used to submit an operation request to a virtual + * DMA channel. If the submission is successful, a positive + * ring_idx <= UINT16_MAX is returned; otherwise a negative number is returned. + * + * The last API is used to issue a doorbell to the hardware; alternatively, the + * flags parameter of the first three APIs (@see RTE_DMA_OP_FLAG_SUBMIT) can do + * the same work. + * @note When enqueuing a set of jobs to the device, having a separate submit + * outside a loop makes for clearer code than having a check for the last + * iteration inside the loop to set a special submit flag. However, for cases + * where one item alone is to be submitted or there is a small set of jobs to + * be submitted sequentially, having a submit flag provides a lower-overhead + * way of doing the submission while still keeping the code clean. + * + * The second part is to obtain the result of requests: + * - rte_dma_completed() + * - return the number of operation requests completed successfully. + * - rte_dma_completed_status() + * - return the number of operation requests completed. + * + * @note If the dmadev works in silent mode (@see RTE_DMA_CAPA_SILENT), + * the application does not invoke the above two completed APIs. + * + * About the ring_idx which the enqueue APIs (e.g. rte_dma_copy(), + * rte_dma_fill()) return, the rules are as follows: + * - The ring_idx of each virtual DMA channel is independent. + * - For a virtual DMA channel, the ring_idx is monotonically incremented; + * when it reaches UINT16_MAX, it wraps back to zero. + * - This ring_idx can be used by applications to track per-operation + * metadata in an application-defined circular ring. + * - The initial ring_idx of a virtual DMA channel is zero; after the + * device is stopped, the ring_idx is reset to zero. + * + * An example: + * - step-1: start one dmadev + * - step-2: enqueue a copy operation, the returned ring_idx is 0 + * - step-3: enqueue a copy operation again, the returned ring_idx is 1 + * - ... + * - step-101: stop the dmadev + * - step-102: start the dmadev + * - step-103: enqueue a copy operation, the returned ring_idx is 0 + * - ... + * - step-x+0: enqueue a fill operation, the returned ring_idx is 65535 + * - step-x+1: enqueue a copy operation, the returned ring_idx is 0 + * - ...
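+ *
+ * For illustration only (a sketch; the variable names are hypothetical),
+ * the 16-bit ring_idx can directly index an application-defined ring of
+ * 65536 entries:
+ * @code
+ * struct app_meta { void *ctx; } meta[UINT16_MAX + 1];
+ *
+ * int idx = rte_dma_copy(dev_id, vchan, src, dst, len, 0);
+ * if (idx >= 0)
+ *     meta[idx].ctx = job_ctx;
+ * @endcode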
+ * + * The DMA operation address used in the enqueue APIs (i.e. rte_dma_copy(), + * rte_dma_copy_sg(), rte_dma_fill()) is defined as rte_iova_t type. + * + * The dmadev supports two types of address: memory address and device address. + * + * - memory address: the source and destination address of the memory-to-memory + * transfer type, or the source address of the memory-to-device transfer type, + * or the destination address of the device-to-memory transfer type. + * @note If the device supports SVA (@see RTE_DMA_CAPA_SVA), the memory address + * can be any VA address, otherwise it must be an IOVA address. + * + * - device address: the source and destination address of the device-to-device + * transfer type, or the source address of the device-to-memory transfer type, + * or the destination address of the memory-to-device transfer type. + * * About MT-safe, all the functions of the dmadev API exported by a PMD are * lock-free functions which assume to not be invoked in parallel on different * logical cores to work on the same target dmadev object. @@ -605,8 +678,395 @@ int rte_dma_stats_reset(int16_t dev_id, uint16_t vchan); __rte_experimental int rte_dma_dump(int16_t dev_id, FILE *f); +/** + * DMA transfer result status code defines. + * + * @see rte_dma_completed_status + */ +enum rte_dma_status_code { + /** The operation completed successfully. */ + RTE_DMA_STATUS_SUCCESSFUL, + /** The operation failed to complete due to an abort by the user. + * This is mainly used when processing dev_stop: the user can modify the + * descriptors (e.g. change one bit to tell the hardware to abort this + * job), which allows outstanding requests to complete as far as + * possible and so reduces the time needed to stop the device. + */ + RTE_DMA_STATUS_USER_ABORT, + /** The operation failed to complete due to the following scenario: + * the jobs in a particular batch are not attempted because they + * appeared after a fence where a previous job failed. In some HW + * implementations it is possible for jobs from later batches to be + * completed, though, so the status of the not attempted jobs is + * reported before the status of those newer completed jobs. + */ + RTE_DMA_STATUS_NOT_ATTEMPTED, + /** The operation failed to complete due to an invalid source address. */ + RTE_DMA_STATUS_INVALID_SRC_ADDR, + /** The operation failed to complete due to an invalid destination + * address. */ + RTE_DMA_STATUS_INVALID_DST_ADDR, + /** The operation failed to complete due to an invalid source or + * destination address; covers the case where an address error is known, + * but it is not known which address is in error. + */ + RTE_DMA_STATUS_INVALID_ADDR, + /** The operation failed to complete due to an invalid length. */ + RTE_DMA_STATUS_INVALID_LENGTH, + /** The operation failed to complete due to an invalid opcode. + * The DMA descriptor may have multiple formats, which are + * distinguished by the opcode field. + */ + RTE_DMA_STATUS_INVALID_OPCODE, + /** The operation failed to complete due to a bus read error. */ + RTE_DMA_STATUS_BUS_READ_ERROR, + /** The operation failed to complete due to a bus write error. */ + RTE_DMA_STATUS_BUS_WRITE_ERROR, + /** The operation failed to complete due to a bus error; covers the case + * where a bus error is known, but the direction of the error is not. + */ + RTE_DMA_STATUS_BUS_ERROR, + /** The operation failed to complete due to data poisoning. */ + RTE_DMA_STATUS_DATA_POISION, + /** The operation failed to complete due to a descriptor read error. */ + RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR, + /** The operation failed to complete due to a device link error. + * Indicates a link error in the memory-to-device/device-to-memory/ + * device-to-device transfer scenarios. + */ + RTE_DMA_STATUS_DEV_LINK_ERROR, + /** The operation failed to complete due to a page fault on lookup. */ + RTE_DMA_STATUS_PAGE_FAULT, + /** The operation failed to complete due to an unknown reason. + * The initial value is 256, which reserves space for future errors. + */ + RTE_DMA_STATUS_ERROR_UNKNOWN = 0x100, +}; + +/** + * A structure used to hold a scatter-gather DMA operation request entry. + * + * @see rte_dma_copy_sg + */ +struct rte_dma_sge { + rte_iova_t addr; /**< The DMA operation address. */ + uint32_t length; /**< The DMA operation length. */ +}; + #include "rte_dmadev_core.h" +/** DMA fence flag. + * An operation with this flag must be processed only after all previous + * operations are completed. + * If the specified DMA HW works in-order (i.e. it has a default fence between + * operations), this flag may be a NOP. + * + * @see rte_dma_copy() + * @see rte_dma_copy_sg() + * @see rte_dma_fill() + */ +#define RTE_DMA_OP_FLAG_FENCE RTE_BIT64(0) +/** DMA submit flag. + * An operation with this flag also issues a doorbell to the hardware after + * the job is enqueued. + * + * @see rte_dma_copy() + * @see rte_dma_copy_sg() + * @see rte_dma_fill() + */ +#define RTE_DMA_OP_FLAG_SUBMIT RTE_BIT64(1) +/** DMA write-data-to-low-level-cache hint. + * Used for performance optimization. This is just a hint, and there is no + * capability bit for it; a driver should not return an error if this flag is + * set. + * + * @see rte_dma_copy() + * @see rte_dma_copy_sg() + * @see rte_dma_fill() + */ +#define RTE_DMA_OP_FLAG_LLC RTE_BIT64(2) + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a copy operation onto the virtual DMA channel. + * + * This queues up a copy operation to be performed by hardware. If the 'flags' + * parameter contains RTE_DMA_OP_FLAG_SUBMIT, the doorbell is triggered to + * begin this operation; otherwise the doorbell is not triggered. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of the virtual DMA channel. + * @param src + * The address of the source buffer. + * @param dst + * The address of the destination buffer. + * @param length + * The length of the data to be copied. + * @param flags + * Flags for this operation. + * @see RTE_DMA_OP_FLAG_* + * + * @return + * - 0..UINT16_MAX: index of enqueued job. + * - -ENOSPC: if no space left to enqueue. + * - other values < 0 on failure. + */ +__rte_experimental +static inline int +rte_dma_copy(int16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, + uint32_t length, uint64_t flags) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || !dev->dev_started || + vchan >= dev->dev_conf.nb_vchans || length == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy, -ENOTSUP); +#endif + + return (*dev->copy)(dev, vchan, src, dst, length, flags); +}
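+
+/* A batched-submission sketch (illustrative only; dev_id, vchan, nb_jobs and
+ * the srcs/dsts arrays are assumed to be set up by the application). It
+ * follows the pattern recommended above: enqueue in a loop, then issue a
+ * single doorbell for the whole batch.
+ *
+ *	uint16_t i, nb_done, last_idx;
+ *	bool err;
+ *
+ *	for (i = 0; i < nb_jobs; i++)
+ *		rte_dma_copy(dev_id, vchan, srcs[i], dsts[i], len, 0);
+ *	rte_dma_submit(dev_id, vchan);
+ *	nb_done = rte_dma_completed(dev_id, vchan, nb_jobs, &last_idx, &err);
+ */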
+ +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a scatter-gather list copy operation onto the virtual DMA channel. + * + * This queues up a scatter-gather list copy operation to be performed by + * hardware. If the 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT, the + * doorbell is triggered to begin this operation; otherwise the doorbell is + * not triggered. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of the virtual DMA channel. + * @param src + * Pointer to the array of source scatter-gather entries. + * @param dst + * Pointer to the array of destination scatter-gather entries. + * @param nb_src + * The number of source scatter-gather entries. + * @see struct rte_dma_info::max_sges + * @param nb_dst + * The number of destination scatter-gather entries. + * @see struct rte_dma_info::max_sges + * @param flags + * Flags for this operation. + * @see RTE_DMA_OP_FLAG_* + * + * @return + * - 0..UINT16_MAX: index of enqueued job. + * - -ENOSPC: if no space left to enqueue. + * - other values < 0 on failure. + */ +__rte_experimental +static inline int +rte_dma_copy_sg(int16_t dev_id, uint16_t vchan, struct rte_dma_sge *src, + struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, + uint64_t flags) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || !dev->dev_started || + vchan >= dev->dev_conf.nb_vchans || + src == NULL || dst == NULL || nb_src == 0 || nb_dst == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy_sg, -ENOTSUP); +#endif + + return (*dev->copy_sg)(dev, vchan, src, dst, nb_src, nb_dst, flags); +}
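+
+/* A scatter-gather sketch (illustrative only; the IOVA variables are
+ * placeholders): gather two 64-byte source segments into one 128-byte
+ * destination buffer and ring the doorbell immediately.
+ *
+ *	struct rte_dma_sge src[2] = {
+ *		{ .addr = src_iova0, .length = 64 },
+ *		{ .addr = src_iova1, .length = 64 },
+ *	};
+ *	struct rte_dma_sge dst[1] = {
+ *		{ .addr = dst_iova, .length = 128 },
+ *	};
+ *	int ret = rte_dma_copy_sg(dev_id, vchan, src, dst, 2, 1,
+ *				  RTE_DMA_OP_FLAG_SUBMIT);
+ */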
+ +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a fill operation onto the virtual DMA channel. + * + * This queues up a fill operation to be performed by hardware. If the 'flags' + * parameter contains RTE_DMA_OP_FLAG_SUBMIT, the doorbell is triggered to + * begin this operation; otherwise the doorbell is not triggered. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of the virtual DMA channel. + * @param pattern + * The pattern to populate the destination buffer with. + * @param dst + * The address of the destination buffer. + * @param length + * The length of the destination buffer. + * @param flags + * Flags for this operation. + * @see RTE_DMA_OP_FLAG_* + * + * @return + * - 0..UINT16_MAX: index of enqueued job. + * - -ENOSPC: if no space left to enqueue. + * - other values < 0 on failure. + */ +__rte_experimental +static inline int +rte_dma_fill(int16_t dev_id, uint16_t vchan, uint64_t pattern, + rte_iova_t dst, uint32_t length, uint64_t flags) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || !dev->dev_started || + vchan >= dev->dev_conf.nb_vchans || length == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP); +#endif + + return (*dev->fill)(dev, vchan, pattern, dst, length, flags); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Trigger hardware to begin performing enqueued operations. + * + * This API is used to write the "doorbell" to the hardware to trigger it + * to begin the operations previously enqueued by rte_dma_copy/fill(). + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of the virtual DMA channel. + * + * @return + * 0 on success. Otherwise a negative value is returned. + */ +__rte_experimental +static inline int +rte_dma_submit(int16_t dev_id, uint16_t vchan) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || !dev->dev_started || + vchan >= dev->dev_conf.nb_vchans) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->submit, -ENOTSUP); +#endif + + return (*dev->submit)(dev, vchan); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Returns the number of operations that have been successfully completed. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of the virtual DMA channel. + * @param nb_cpls + * The maximum number of completed operations that can be processed. + * @param[out] last_idx + * The last completed operation's ring_idx. + * If not required, NULL can be passed in. + * @param[out] has_error + * Indicates whether a transfer error has occurred. + * If not required, NULL can be passed in. + * + * @return + * The number of operations that successfully completed. This return value + * must be less than or equal to the value of nb_cpls. + */ +__rte_experimental +static inline uint16_t +rte_dma_completed(int16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, bool *has_error) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + uint16_t idx; + bool err; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || !dev->dev_started || + vchan >= dev->dev_conf.nb_vchans || nb_cpls == 0) + return 0; + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed, 0); +#endif + + /* Ensure the pointer values are non-null to simplify drivers. + * In most cases these should be compile time evaluated, since this is + * an inline function. + * - If NULL is explicitly passed as parameter, then compiler knows the + * value is NULL + * - If address of local variable is passed as parameter, then compiler + * can know it's non-NULL. + */ + if (last_idx == NULL) + last_idx = &idx; + if (has_error == NULL) + has_error = &err; + + *has_error = false; + return (*dev->completed)(dev, vchan, nb_cpls, last_idx, has_error); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Returns the number of operations that have been completed; the completed + * operations may have succeeded or failed. + * + * @param dev_id + * The identifier of the device. + * @param vchan + * The identifier of the virtual DMA channel. + * @param nb_cpls + * Indicates the size of the status array. + * @param[out] last_idx + * The last completed operation's ring_idx.
+ * If not required, NULL can be passed in. + * @param[out] status + * This is a pointer to an array of length 'nb_cpls' that holds the completion + * status code of each operation. + * @see enum rte_dma_status_code + * + * @return + * The number of operations that completed. This return value must be less + * than or equal to the value of nb_cpls. + * If this number is greater than zero (assuming n), then n values in the + * status array are also set. + */ +__rte_experimental +static inline uint16_t +rte_dma_completed_status(int16_t dev_id, uint16_t vchan, + const uint16_t nb_cpls, uint16_t *last_idx, + enum rte_dma_status_code *status) +{ + struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; + uint16_t idx; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dma_is_valid(dev_id) || !dev->dev_started || + vchan >= dev->dev_conf.nb_vchans || + nb_cpls == 0 || status == NULL) + return 0; + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed_status, 0); +#endif + + if (last_idx == NULL) + last_idx = &idx; + + return (*dev->completed_status)(dev, vchan, nb_cpls, last_idx, status); +} + #ifdef __cplusplus } #endif diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h index d6f885527a..5c202e35ce 100644 --- a/lib/dmadev/rte_dmadev_core.h +++ b/lib/dmadev/rte_dmadev_core.h @@ -55,6 +55,36 @@ typedef int (*rte_dma_stats_reset_t)(struct rte_dma_dev *dev, uint16_t vchan); /** @internal Used to dump internal information. */ typedef int (*rte_dma_dump_t)(const struct rte_dma_dev *dev, FILE *f); +/** @internal Used to enqueue a copy operation. */ +typedef int (*rte_dma_copy_t)(struct rte_dma_dev *dev, uint16_t vchan, + rte_iova_t src, rte_iova_t dst, + uint32_t length, uint64_t flags); + +/** @internal Used to enqueue a scatter-gather list copy operation. */ +typedef int (*rte_dma_copy_sg_t)(struct rte_dma_dev *dev, uint16_t vchan, + const struct rte_dma_sge *src, + const struct rte_dma_sge *dst, + uint16_t nb_src, uint16_t nb_dst, + uint64_t flags); + +/** @internal Used to enqueue a fill operation. */ +typedef int (*rte_dma_fill_t)(struct rte_dma_dev *dev, uint16_t vchan, + uint64_t pattern, rte_iova_t dst, + uint32_t length, uint64_t flags); + +/** @internal Used to trigger hardware to begin working. */ +typedef int (*rte_dma_submit_t)(struct rte_dma_dev *dev, uint16_t vchan); + +/** @internal Used to return number of successful completed operations. */ +typedef uint16_t (*rte_dma_completed_t)(struct rte_dma_dev *dev, + uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, bool *has_error); + +/** @internal Used to return number of completed operations. */ +typedef uint16_t (*rte_dma_completed_status_t)(struct rte_dma_dev *dev, + uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, enum rte_dma_status_code *status); + /** * Possible states of a DMA device. * @@ -90,14 +120,29 @@ struct rte_dma_dev_ops { /** @internal * The generic data structure associated with each DMA device. + * + * The dataplane APIs are located at the beginning of the structure. + * And the 'dev_private' field was placed in the first cache line to optimize + * performance because the PMD driver mainly depends on this field. */ struct rte_dma_dev { - char dev_name[RTE_DEV_NAME_MAX_LEN]; /**< Unique identifier name */ - int16_t dev_id; /**< Device [external] identifier. */ - int16_t numa_node; /**< Local NUMA memory ID. -1 if unknown. */ void *dev_private; /**< PMD-specific private data. 
*/ + rte_dma_copy_t copy; + rte_dma_copy_sg_t copy_sg; + rte_dma_fill_t fill; + rte_dma_submit_t submit; + rte_dma_completed_t completed; + rte_dma_completed_status_t completed_status; + void *reserved_cl0; + /** Reserve space for future IO functions, while keeping dev_ops + * pointer on the second cacheline. + */ + void *reserved_cl1[7]; /** Functions exported by PMD. */ const struct rte_dma_dev_ops *dev_ops; + char dev_name[RTE_DEV_NAME_MAX_LEN]; /**< Unique identifier name */ + int16_t dev_id; /**< Device [external] identifier. */ + int16_t numa_node; /**< Local NUMA memory ID. -1 if unknown. */ struct rte_dma_conf dev_conf; /**< DMA device configuration. */ /** Device info which supplied during device initialization. */ struct rte_device *device; diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map index 6b7939b10f..c780463bb2 100644 --- a/lib/dmadev/version.map +++ b/lib/dmadev/version.map @@ -2,10 +2,15 @@ EXPERIMENTAL { global: rte_dma_close; + rte_dma_completed; + rte_dma_completed_status; rte_dma_configure; + rte_dma_copy; + rte_dma_copy_sg; rte_dma_count_avail; rte_dma_dev_max; rte_dma_dump; + rte_dma_fill; rte_dma_get_dev_id; rte_dma_info_get; rte_dma_is_valid; @@ -13,6 +18,7 @@ EXPERIMENTAL { rte_dma_stats_get; rte_dma_stats_reset; rte_dma_stop; + rte_dma_submit; rte_dma_vchan_setup; local: *; From patchwork Fri Sep 24 10:53:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 99583 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 90BF2A0548; Fri, 24 Sep 2021 12:58:21 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 62E3641308; Fri, 24 Sep 2021 12:58:17 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 51CFE41302 for ; Fri, 24 Sep 2021 12:58:14 +0200 (CEST) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.57]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4HG89l2jv5zWRYC; Fri, 24 Sep 2021 18:56:59 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.8; Fri, 24 Sep 2021 18:58:12 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.8; Fri, 24 Sep 2021 18:58:11 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , , , Date: Fri, 24 Sep 2021 18:53:55 +0800 Message-ID: <20210924105357.15386-5-fengchengwen@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210924105357.15386-1-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <20210924105357.15386-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v23 4/6] dmadev: add multi-process support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch add multi-process support for dmadev. Signed-off-by: Chengwen Feng Acked-by: Bruce Richardson Acked-by: Morten Brørup Reviewed-by: Kevin Laatz Reviewed-by: Conor Walsh --- doc/guides/rel_notes/release_21_11.rst | 1 + lib/dmadev/rte_dmadev.c | 168 ++++++++++++++++++++----- lib/dmadev/rte_dmadev.h | 24 ++-- lib/dmadev/rte_dmadev_core.h | 45 +++++-- 4 files changed, 185 insertions(+), 53 deletions(-) diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 21b3c48257..67d2bf5101 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -111,6 +111,7 @@ New Features * Device allocation APIs. * Control plane APIs. * Data plane APIs. + * Multi-process support. Removed Items diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index e0134b9eec..1338b29937 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -17,6 +17,13 @@ struct rte_dma_dev *rte_dma_devices; static int16_t dma_devices_max; +static struct { + /* Hold the dev_max information of the primary process. This field is + * set by the primary process and is read by the secondary process. + */ + int16_t dev_max; + struct rte_dma_dev_data data[0]; +} *dma_devices_shared_data; RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO); #define RTE_DMA_LOG(level, fmt, args...) \ @@ -76,11 +83,11 @@ dma_find_free_dev(void) { int16_t i; - if (rte_dma_devices == NULL) + if (rte_dma_devices == NULL || dma_devices_shared_data == NULL) return -1; for (i = 0; i < dma_devices_max; i++) { - if (rte_dma_devices[i].dev_name[0] == '\0') + if (dma_devices_shared_data->data[i].dev_name[0] == '\0') return i; } @@ -97,7 +104,7 @@ dma_find(const char *name) for (i = 0; i < dma_devices_max; i++) { if ((rte_dma_devices[i].state != RTE_DMA_DEV_UNUSED) && - (!strcmp(name, rte_dma_devices[i].dev_name))) + (!strcmp(name, rte_dma_devices[i].data->dev_name))) return &rte_dma_devices[i]; } @@ -130,16 +137,65 @@ dma_process_data_prepare(void) return 0; } +static int +dma_shared_data_prepare(void) +{ + const char *mz_name = "rte_dma_dev_data"; + const struct rte_memzone *mz; + size_t size; + + if (dma_devices_shared_data != NULL) + return 0; + + size = sizeof(*dma_devices_shared_data) + + sizeof(struct rte_dma_dev_data) * dma_devices_max; + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + mz = rte_memzone_reserve(mz_name, size, rte_socket_id(), 0); + else + mz = rte_memzone_lookup(mz_name); + if (mz == NULL) + return -ENOMEM; + + dma_devices_shared_data = mz->addr; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + memset(dma_devices_shared_data, 0, size); + dma_devices_shared_data->dev_max = dma_devices_max; + } else { + dma_devices_max = dma_devices_shared_data->dev_max; + } + + return 0; +} + static int dma_data_prepare(void) { - if (dma_devices_max == 0) - dma_devices_max = RTE_DMADEV_DEFAULT_MAX_DEVS; - return dma_process_data_prepare(); + int ret; + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + if (dma_devices_max == 0) + dma_devices_max = RTE_DMADEV_DEFAULT_MAX_DEVS; + ret = dma_process_data_prepare(); + if (ret) + return ret; + ret = dma_shared_data_prepare(); + if (ret) + return ret; + } else { + ret = dma_shared_data_prepare(); + if (ret) + return ret; + ret = dma_process_data_prepare(); + if (ret) + return ret; + } + + return 0; } static struct rte_dma_dev * -dma_allocate(const char *name, int numa_node, size_t private_data_size) +dma_allocate_primary(const char *name, int 
numa_node, size_t private_data_size) { struct rte_dma_dev *dev; void *dev_private; @@ -174,10 +230,55 @@ dma_allocate(const char *name, int numa_node, size_t private_data_size) dev = &rte_dma_devices[dev_id]; dev->dev_private = dev_private; - rte_strscpy(dev->dev_name, name, sizeof(dev->dev_name)); - dev->dev_id = dev_id; - dev->numa_node = numa_node; - dev->dev_private = dev_private; + dev->data = &dma_devices_shared_data->data[dev_id]; + rte_strscpy(dev->data->dev_name, name, sizeof(dev->data->dev_name)); + dev->data->dev_id = dev_id; + dev->data->numa_node = numa_node; + dev->data->dev_private = dev_private; + + return dev; +} + +static struct rte_dma_dev * +dma_attach_secondary(const char *name) +{ + struct rte_dma_dev *dev; + int16_t i; + int ret; + + ret = dma_data_prepare(); + if (ret < 0) { + RTE_DMA_LOG(ERR, "Cannot initialize dmadevs data"); + return NULL; + } + + for (i = 0; i < dma_devices_max; i++) { + if (!strcmp(dma_devices_shared_data->data[i].dev_name, name)) + break; + } + if (i == dma_devices_max) { + RTE_DMA_LOG(ERR, + "Device %s is not driven by the primary process", + name); + return NULL; + } + + dev = &rte_dma_devices[i]; + dev->data = &dma_devices_shared_data->data[i]; + dev->dev_private = dev->data->dev_private; + + return dev; +} + +static struct rte_dma_dev * +dma_allocate(const char *name, int numa_node, size_t private_data_size) +{ + struct rte_dma_dev *dev; + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + dev = dma_allocate_primary(name, numa_node, private_data_size); + else + dev = dma_attach_secondary(name); return dev; } @@ -185,7 +286,11 @@ dma_allocate(const char *name, int numa_node, size_t private_data_size) static void dma_release(struct rte_dma_dev *dev) { - rte_free(dev->dev_private); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + memset(dev->data, 0, sizeof(struct rte_dma_dev_data)); + rte_free(dev->dev_private); + } + memset(dev, 0, sizeof(struct rte_dma_dev)); } @@ -219,7 +324,7 @@ rte_dma_pmd_release(const char *name) return -EINVAL; if (dev->state == RTE_DMA_DEV_READY) - return rte_dma_close(dev->dev_id); + return rte_dma_close(dev->data->dev_id); dma_release(dev); return 0; @@ -237,7 +342,7 @@ rte_dma_get_dev_id(const char *name) if (dev == NULL) return -EINVAL; - return dev->dev_id; + return dev->data->dev_id; } bool @@ -283,7 +388,7 @@ rte_dma_info_get(int16_t dev_id, struct rte_dma_info *dev_info) return ret; dev_info->numa_node = dev->device->numa_node; - dev_info->nb_vchans = dev->dev_conf.nb_vchans; + dev_info->nb_vchans = dev->data->dev_conf.nb_vchans; return 0; } @@ -299,7 +404,7 @@ rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf) if (dev_conf == NULL) return -EINVAL; - if (dev->dev_started != 0) { + if (dev->data->dev_started != 0) { RTE_DMA_LOG(ERR, "Device %d must be stopped to allow configuration", dev_id); @@ -331,7 +436,8 @@ rte_dma_configure(int16_t dev_id, const struct rte_dma_conf *dev_conf) ret = (*dev->dev_ops->dev_configure)(dev, dev_conf, sizeof(struct rte_dma_conf)); if (ret == 0) - memcpy(&dev->dev_conf, dev_conf, sizeof(struct rte_dma_conf)); + memcpy(&dev->data->dev_conf, dev_conf, + sizeof(struct rte_dma_conf)); return ret; } @@ -344,12 +450,12 @@ rte_dma_start(int16_t dev_id) RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); - if (dev->dev_conf.nb_vchans == 0) { + if (dev->data->dev_conf.nb_vchans == 0) { RTE_DMA_LOG(ERR, "Device %d must be configured first", dev_id); return -EINVAL; } - if (dev->dev_started != 0) { + if (dev->data->dev_started != 0) { RTE_DMA_LOG(WARNING, "Device %d 
already started", dev_id); return 0; } @@ -362,7 +468,7 @@ rte_dma_start(int16_t dev_id) return ret; mark_started: - dev->dev_started = 1; + dev->data->dev_started = 1; return 0; } @@ -374,7 +480,7 @@ rte_dma_stop(int16_t dev_id) RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); - if (dev->dev_started == 0) { + if (dev->data->dev_started == 0) { RTE_DMA_LOG(WARNING, "Device %d already stopped", dev_id); return 0; } @@ -387,7 +493,7 @@ rte_dma_stop(int16_t dev_id) return ret; mark_stopped: - dev->dev_started = 0; + dev->data->dev_started = 0; return 0; } @@ -400,7 +506,7 @@ rte_dma_close(int16_t dev_id) RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); /* Device must be stopped before it can be closed */ - if (dev->dev_started == 1) { + if (dev->data->dev_started == 1) { RTE_DMA_LOG(ERR, "Device %d must be stopped before closing", dev_id); return -EBUSY; @@ -427,7 +533,7 @@ rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan, if (conf == NULL) return -EINVAL; - if (dev->dev_started != 0) { + if (dev->data->dev_started != 0) { RTE_DMA_LOG(ERR, "Device %d must be stopped to allow configuration", dev_id); @@ -439,7 +545,7 @@ rte_dma_vchan_setup(int16_t dev_id, uint16_t vchan, RTE_DMA_LOG(ERR, "Device %d get device info fail", dev_id); return -EINVAL; } - if (dev->dev_conf.nb_vchans == 0) { + if (dev->data->dev_conf.nb_vchans == 0) { RTE_DMA_LOG(ERR, "Device %d must be configured first", dev_id); return -EINVAL; } @@ -513,7 +619,7 @@ rte_dma_stats_get(int16_t dev_id, uint16_t vchan, struct rte_dma_stats *stats) RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); if (stats == NULL) return -EINVAL; - if (vchan >= dev->dev_conf.nb_vchans && + if (vchan >= dev->data->dev_conf.nb_vchans && vchan != RTE_DMA_ALL_VCHAN) { RTE_DMA_LOG(ERR, "Device %d vchan %u out of range", dev_id, vchan); @@ -532,7 +638,7 @@ rte_dma_stats_reset(int16_t dev_id, uint16_t vchan) struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; RTE_DMA_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); - if (vchan >= dev->dev_conf.nb_vchans && + if (vchan >= dev->data->dev_conf.nb_vchans && vchan != RTE_DMA_ALL_VCHAN) { RTE_DMA_LOG(ERR, "Device %d vchan %u out of range", dev_id, vchan); @@ -606,14 +712,14 @@ rte_dma_dump(int16_t dev_id, FILE *f) } (void)fprintf(f, "DMA Dev %d, '%s' [%s]\n", - dev->dev_id, - dev->dev_name, - dev->dev_started ? "started" : "stopped"); + dev->data->dev_id, + dev->data->dev_name, + dev->data->dev_started ? "started" : "stopped"); dma_dump_capability(f, dev_info.dev_capa); (void)fprintf(f, " max_vchans_supported: %u\n", dev_info.max_vchans); (void)fprintf(f, " nb_vchans_configured: %u\n", dev_info.nb_vchans); (void)fprintf(f, " silent_mode: %s\n", - dev->dev_conf.enable_silent ? "on" : "off"); + dev->data->dev_conf.enable_silent ? 
"on" : "off"); if (dev->dev_ops->dev_dump != NULL) return (*dev->dev_ops->dev_dump)(dev, f); diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 84e30f7e61..561a1b1154 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -821,8 +821,8 @@ rte_dma_copy(int16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; #ifdef RTE_DMADEV_DEBUG - if (!rte_dma_is_valid(dev_id) || !dev->dev_started || - vchan >= dev->dev_conf.nb_vchans || length == 0) + if (!rte_dma_is_valid(dev_id) || !dev->data->dev_started || + vchan >= dev->data->dev_conf.nb_vchans || length == 0) return -EINVAL; RTE_FUNC_PTR_OR_ERR_RET(*dev->copy, -ENOTSUP); #endif @@ -872,8 +872,8 @@ rte_dma_copy_sg(int16_t dev_id, uint16_t vchan, struct rte_dma_sge *src, struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; #ifdef RTE_DMADEV_DEBUG - if (!rte_dma_is_valid(dev_id) || !dev->dev_started || - vchan >= dev->dev_conf.nb_vchans || + if (!rte_dma_is_valid(dev_id) || !dev->data->dev_started || + vchan >= dev->data->dev_conf.nb_vchans || src == NULL || dst == NULL || nb_src == 0 || nb_dst == 0) return -EINVAL; RTE_FUNC_PTR_OR_ERR_RET(*dev->copy_sg, -ENOTSUP); @@ -919,8 +919,8 @@ rte_dma_fill(int16_t dev_id, uint16_t vchan, uint64_t pattern, struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; #ifdef RTE_DMADEV_DEBUG - if (!rte_dma_is_valid(dev_id) || !dev->dev_started || - vchan >= dev->dev_conf.nb_vchans || length == 0) + if (!rte_dma_is_valid(dev_id) || !dev->data->dev_started || + vchan >= dev->data->dev_conf.nb_vchans || length == 0) return -EINVAL; RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP); #endif @@ -952,8 +952,8 @@ rte_dma_submit(int16_t dev_id, uint16_t vchan) struct rte_dma_dev *dev = &rte_dma_devices[dev_id]; #ifdef RTE_DMADEV_DEBUG - if (!rte_dma_is_valid(dev_id) || !dev->dev_started || - vchan >= dev->dev_conf.nb_vchans) + if (!rte_dma_is_valid(dev_id) || !dev->data->dev_started || + vchan >= dev->data->dev_conf.nb_vchans) return -EINVAL; RTE_FUNC_PTR_OR_ERR_RET(*dev->submit, -ENOTSUP); #endif @@ -994,8 +994,8 @@ rte_dma_completed(int16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, bool err; #ifdef RTE_DMADEV_DEBUG - if (!rte_dma_is_valid(dev_id) || !dev->dev_started || - vchan >= dev->dev_conf.nb_vchans || nb_cpls == 0) + if (!rte_dma_is_valid(dev_id) || !dev->data->dev_started || + vchan >= dev->data->dev_conf.nb_vchans || nb_cpls == 0) return 0; RTE_FUNC_PTR_OR_ERR_RET(*dev->completed, 0); #endif @@ -1054,8 +1054,8 @@ rte_dma_completed_status(int16_t dev_id, uint16_t vchan, uint16_t idx; #ifdef RTE_DMADEV_DEBUG - if (!rte_dma_is_valid(dev_id) || !dev->dev_started || - vchan >= dev->dev_conf.nb_vchans || + if (!rte_dma_is_valid(dev_id) || !dev->data->dev_started || + vchan >= dev->data->dev_conf.nb_vchans || nb_cpls == 0 || status == NULL) return 0; RTE_FUNC_PTR_OR_ERR_RET(*dev->completed_status, 0); diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h index 5c202e35ce..019ac7af9c 100644 --- a/lib/dmadev/rte_dmadev_core.h +++ b/lib/dmadev/rte_dmadev_core.h @@ -118,10 +118,39 @@ struct rte_dma_dev_ops { rte_dma_dump_t dev_dump; }; -/** @internal +/** + * @internal + * The data part, with no function pointers, associated with each DMA device. + * + * This structure is safe to place in shared memory to be common among different + * processes in a multi-process configuration. 
+ * + * @see struct rte_dmadev::data + */ +struct rte_dma_dev_data { + char dev_name[RTE_DEV_NAME_MAX_LEN]; /**< Unique identifier name */ + int16_t dev_id; /**< Device [external] identifier. */ + int16_t numa_node; /**< Local NUMA memory ID. -1 if unknown. */ + /** PMD-specific private data. + * This is a copy of the 'dev_private' field in the 'struct rte_dmadev' + * from primary process, it is used by the secondary process to get + * dev_private information. + */ + void *dev_private; + struct rte_dma_conf dev_conf; /**< DMA device configuration. */ + uint8_t dev_started : 1; /**< Device state: STARTED(1)/STOPPED(0). */ + uint64_t reserved[2]; /**< Reserved for future fields */ +} __rte_cache_aligned; + +/** + * @internal * The generic data structure associated with each DMA device. * - * The dataplane APIs are located at the beginning of the structure. + * The dataplane APIs are located at the beginning of the structure, along + * with the pointer to where all the data elements for the particular device + * are stored in shared memory. This split scheme allows the function pointer + * and driver data to be per-process, while the actual configuration data for + * the device is shared. * And the 'dev_private' field was placed in the first cache line to optimize * performance because the PMD driver mainly depends on this field. */ @@ -134,20 +163,16 @@ struct rte_dma_dev { rte_dma_completed_t completed; rte_dma_completed_status_t completed_status; void *reserved_cl0; - /** Reserve space for future IO functions, while keeping dev_ops - * pointer on the second cacheline. + /** Reserve space for future IO functions, while keeping data and + * dev_ops pointers on the second cacheline. */ - void *reserved_cl1[7]; + void *reserved_cl1[6]; + struct rte_dma_dev_data *data; /**< Pointer to device data. */ /** Functions exported by PMD. */ const struct rte_dma_dev_ops *dev_ops; - char dev_name[RTE_DEV_NAME_MAX_LEN]; /**< Unique identifier name */ - int16_t dev_id; /**< Device [external] identifier. */ - int16_t numa_node; /**< Local NUMA memory ID. -1 if unknown. */ - struct rte_dma_conf dev_conf; /**< DMA device configuration. */ /** Device info which supplied during device initialization. */ struct rte_device *device; enum rte_dma_dev_state state; /**< Flag indicating the device state. */ - uint8_t dev_started : 1; /**< Device state: STARTED(1)/STOPPED(0). */ uint64_t reserved[2]; /**< Reserved for future fields. 
*/ } __rte_cache_aligned; From patchwork Fri Sep 24 10:53:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 99584 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C7E5FA0548; Fri, 24 Sep 2021 12:58:27 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7ABD241312; Fri, 24 Sep 2021 12:58:18 +0200 (CEST) Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by mails.dpdk.org (Postfix) with ESMTP id 876D341303 for ; Fri, 24 Sep 2021 12:58:14 +0200 (CEST) Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.54]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4HG8BF2TMPz8tLk; Fri, 24 Sep 2021 18:57:25 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.8; Fri, 24 Sep 2021 18:58:12 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.8; Fri, 24 Sep 2021 18:58:11 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , , , Date: Fri, 24 Sep 2021 18:53:56 +0800 Message-ID: <20210924105357.15386-6-fengchengwen@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210924105357.15386-1-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <20210924105357.15386-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v23 5/6] dma/skeleton: introduce skeleton dmadev driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Skeleton dmadevice driver, on the lines of rawdev skeleton, is for showcasing of the dmadev library. Design of skeleton involves a virtual device which is plugged into VDEV bus on initialization. Also, enable compilation of dmadev skeleton drivers. 
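For example (an illustrative EAL command-line fragment), an instance can be created with its copy thread pinned to lcore 3 using: --vdev=dma_skeleton,lcore=3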
Signed-off-by: Chengwen Feng Reviewed-by: Kevin Laatz Reviewed-by: Conor Walsh --- MAINTAINERS | 1 + drivers/dma/meson.build | 4 +- drivers/dma/skeleton/meson.build | 7 + drivers/dma/skeleton/skeleton_dmadev.c | 570 +++++++++++++++++++++++++ drivers/dma/skeleton/skeleton_dmadev.h | 61 +++ drivers/dma/skeleton/version.map | 3 + 6 files changed, 645 insertions(+), 1 deletion(-) create mode 100644 drivers/dma/skeleton/meson.build create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h create mode 100644 drivers/dma/skeleton/version.map diff --git a/MAINTAINERS b/MAINTAINERS index a5b11ac70b..85d4f83395 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -457,6 +457,7 @@ F: doc/guides/regexdevs/features/default.ini DMA device API - EXPERIMENTAL M: Chengwen Feng F: lib/dmadev/ +F: drivers/dma/skeleton/ F: doc/guides/prog_guide/dmadev.rst Eventdev API diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build index a24c56d8ff..d9c7ede32f 100644 --- a/drivers/dma/meson.build +++ b/drivers/dma/meson.build @@ -1,4 +1,6 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright 2021 HiSilicon Limited -drivers = [] +drivers = [ + 'skeleton', +] diff --git a/drivers/dma/skeleton/meson.build b/drivers/dma/skeleton/meson.build new file mode 100644 index 0000000000..8871b80956 --- /dev/null +++ b/drivers/dma/skeleton/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2021 HiSilicon Limited + +deps += ['dmadev', 'kvargs', 'ring', 'bus_vdev'] +sources = files( + 'skeleton_dmadev.c', +) diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c new file mode 100644 index 0000000000..a7d55b8ca0 --- /dev/null +++ b/drivers/dma/skeleton/skeleton_dmadev.c @@ -0,0 +1,570 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + */ + +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "skeleton_dmadev.h" + +RTE_LOG_REGISTER_DEFAULT(skeldma_logtype, INFO); +#define SKELDMA_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, skeldma_logtype, "%s(): " fmt "\n", \ + __func__, ##args) + +/* Count of instances, currently only 1 is supported. 
*/ +static uint16_t skeldma_count; + +static int +skeldma_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, + uint32_t info_sz) +{ +#define SKELDMA_MAX_DESC 8192 +#define SKELDMA_MIN_DESC 32 + + RTE_SET_USED(dev); + RTE_SET_USED(info_sz); + + dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | + RTE_DMA_CAPA_SVA | + RTE_DMA_CAPA_OPS_COPY; + dev_info->max_vchans = 1; + dev_info->max_desc = SKELDMA_MAX_DESC; + dev_info->min_desc = SKELDMA_MIN_DESC; + + return 0; +} + +static int +skeldma_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, + uint32_t conf_sz) +{ + RTE_SET_USED(dev); + RTE_SET_USED(conf); + RTE_SET_USED(conf_sz); + return 0; +} + +static void * +cpucopy_thread(void *param) +{ +#define SLEEP_THRESHOLD 10000 +#define SLEEP_US_VAL 10 + + struct rte_dma_dev *dev = param; + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc = NULL; + int ret; + + while (!hw->exit_flag) { + ret = rte_ring_dequeue(hw->desc_running, (void **)&desc); + if (ret) { + hw->zero_req_count++; + if (hw->zero_req_count == 0) + hw->zero_req_count = SLEEP_THRESHOLD; + if (hw->zero_req_count >= SLEEP_THRESHOLD) + rte_delay_us_sleep(SLEEP_US_VAL); + continue; + } + + hw->zero_req_count = 0; + rte_memcpy(desc->dst, desc->src, desc->len); + hw->completed_count++; + (void)rte_ring_enqueue(hw->desc_completed, (void *)desc); + } + + return NULL; +} + +static void +fflush_ring(struct skeldma_hw *hw, struct rte_ring *ring) +{ + struct skeldma_desc *desc = NULL; + while (rte_ring_count(ring) > 0) { + (void)rte_ring_dequeue(ring, (void **)&desc); + (void)rte_ring_enqueue(hw->desc_empty, (void *)desc); + } +} + +static int +skeldma_start(struct rte_dma_dev *dev) +{ + struct skeldma_hw *hw = dev->dev_private; + rte_cpuset_t cpuset; + int ret; + + if (hw->desc_mem == NULL) { + SKELDMA_LOG(ERR, "Vchan was not setup, start fail!"); + return -EINVAL; + } + + /* Reset the dmadev to a known state, include: + * 1) fflush pending/running/completed ring to empty ring. + * 2) init ring idx to zero. + * 3) init running statistics. + * 4) mark cpucopy task exit_flag to false. 
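+ *
+ * The rte_mb() below ensures these resets are visible to the cpucopy
+ * thread before it is created.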
+ */ + fflush_ring(hw, hw->desc_pending); + fflush_ring(hw, hw->desc_running); + fflush_ring(hw, hw->desc_completed); + hw->ridx = 0; + hw->submitted_count = 0; + hw->zero_req_count = 0; + hw->completed_count = 0; + hw->exit_flag = false; + + rte_mb(); + + ret = rte_ctrl_thread_create(&hw->thread, "dma_skeleton", NULL, + cpucopy_thread, dev); + if (ret) { + SKELDMA_LOG(ERR, "Start cpucopy thread fail!"); + return -EINVAL; + } + + if (hw->lcore_id != -1) { + cpuset = rte_lcore_cpuset(hw->lcore_id); + ret = pthread_setaffinity_np(hw->thread, sizeof(cpuset), + &cpuset); + if (ret) + SKELDMA_LOG(WARNING, + "Set thread affinity lcore = %d fail!", + hw->lcore_id); + } + + return 0; +} + +static int +skeldma_stop(struct rte_dma_dev *dev) +{ + struct skeldma_hw *hw = dev->dev_private; + + hw->exit_flag = true; + rte_delay_ms(1); + + pthread_cancel(hw->thread); + pthread_join(hw->thread, NULL); + + return 0; +} + +static int +vchan_setup(struct skeldma_hw *hw, uint16_t nb_desc) +{ + struct skeldma_desc *desc; + struct rte_ring *empty; + struct rte_ring *pending; + struct rte_ring *running; + struct rte_ring *completed; + uint16_t i; + + desc = rte_zmalloc_socket("dma_skelteon_desc", + nb_desc * sizeof(struct skeldma_desc), + RTE_CACHE_LINE_SIZE, hw->socket_id); + if (desc == NULL) { + SKELDMA_LOG(ERR, "Malloc dma skeleton desc fail!"); + return -ENOMEM; + } + + empty = rte_ring_create("dma_skeleton_desc_empty", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + pending = rte_ring_create("dma_skeleton_desc_pending", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + running = rte_ring_create("dma_skeleton_desc_running", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + completed = rte_ring_create("dma_skeleton_desc_completed", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + if (empty == NULL || pending == NULL || running == NULL || + completed == NULL) { + SKELDMA_LOG(ERR, "Create dma skeleton desc ring fail!"); + rte_ring_free(empty); + rte_ring_free(pending); + rte_ring_free(running); + rte_ring_free(completed); + rte_free(desc); + return -ENOMEM; + } + + /* The real usable ring size is *count-1* instead of *count* to + * differentiate a free ring from an empty ring. 
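+ * For example, a ring created with nb_desc of 32 holds at most 31
+ * descriptors at any time.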
+ * @see rte_ring_create + */ + for (i = 0; i < nb_desc - 1; i++) + (void)rte_ring_enqueue(empty, (void *)(desc + i)); + + hw->desc_mem = desc; + hw->desc_empty = empty; + hw->desc_pending = pending; + hw->desc_running = running; + hw->desc_completed = completed; + + return 0; +} + +static void +vchan_release(struct skeldma_hw *hw) +{ + if (hw->desc_mem == NULL) + return; + + rte_free(hw->desc_mem); + hw->desc_mem = NULL; + rte_ring_free(hw->desc_empty); + hw->desc_empty = NULL; + rte_ring_free(hw->desc_pending); + hw->desc_pending = NULL; + rte_ring_free(hw->desc_running); + hw->desc_running = NULL; + rte_ring_free(hw->desc_completed); + hw->desc_completed = NULL; +} + +static int +skeldma_close(struct rte_dma_dev *dev) +{ + /* The device already stopped */ + vchan_release(dev->dev_private); + return 0; +} + +static int +skeldma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, + const struct rte_dma_vchan_conf *conf, + uint32_t conf_sz) +{ + struct skeldma_hw *hw = dev->dev_private; + + RTE_SET_USED(vchan); + RTE_SET_USED(conf_sz); + + if (!rte_is_power_of_2(conf->nb_desc)) { + SKELDMA_LOG(ERR, "Number of desc must be power of 2!"); + return -EINVAL; + } + + vchan_release(hw); + return vchan_setup(hw, conf->nb_desc); +} + +static int +skeldma_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, + struct rte_dma_stats *stats, uint32_t stats_sz) +{ + struct skeldma_hw *hw = dev->dev_private; + + RTE_SET_USED(vchan); + RTE_SET_USED(stats_sz); + + stats->submitted = hw->submitted_count; + stats->completed = hw->completed_count; + stats->errors = 0; + + return 0; +} + +static int +skeldma_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) +{ + struct skeldma_hw *hw = dev->dev_private; + + RTE_SET_USED(vchan); + + hw->submitted_count = 0; + hw->completed_count = 0; + + return 0; +} + +static int +skeldma_dump(const struct rte_dma_dev *dev, FILE *f) +{ +#define GET_RING_COUNT(ring) ((ring) ? 
(rte_ring_count(ring)) : 0) + + struct skeldma_hw *hw = dev->dev_private; + + (void)fprintf(f, + " lcore_id: %d\n" + " socket_id: %d\n" + " desc_empty_ring_count: %u\n" + " desc_pending_ring_count: %u\n" + " desc_running_ring_count: %u\n" + " desc_completed_ring_count: %u\n", + hw->lcore_id, hw->socket_id, + GET_RING_COUNT(hw->desc_empty), + GET_RING_COUNT(hw->desc_pending), + GET_RING_COUNT(hw->desc_running), + GET_RING_COUNT(hw->desc_completed)); + (void)fprintf(f, + " next_ring_idx: %u\n" + " submitted_count: %" PRIu64 "\n" + " completed_count: %" PRIu64 "\n", + hw->ridx, hw->submitted_count, hw->completed_count); + + return 0; +} + +static inline void +submit(struct skeldma_hw *hw, struct skeldma_desc *desc) +{ + uint16_t count = rte_ring_count(hw->desc_pending); + struct skeldma_desc *pend_desc = NULL; + + while (count > 0) { + (void)rte_ring_dequeue(hw->desc_pending, (void **)&pend_desc); + (void)rte_ring_enqueue(hw->desc_running, (void *)pend_desc); + count--; + } + + if (desc) + (void)rte_ring_enqueue(hw->desc_running, (void *)desc); +} + +static int +skeldma_copy(struct rte_dma_dev *dev, uint16_t vchan, + rte_iova_t src, rte_iova_t dst, + uint32_t length, uint64_t flags) +{ + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc; + int ret; + + RTE_SET_USED(vchan); + RTE_SET_USED(flags); + + ret = rte_ring_dequeue(hw->desc_empty, (void **)&desc); + if (ret) + return -ENOSPC; + desc->src = (void *)(uintptr_t)src; + desc->dst = (void *)(uintptr_t)dst; + desc->len = length; + desc->ridx = hw->ridx; + if (flags & RTE_DMA_OP_FLAG_SUBMIT) + submit(hw, desc); + else + (void)rte_ring_enqueue(hw->desc_pending, (void *)desc); + hw->submitted_count++; + + return hw->ridx++; +} + +static int +skeldma_submit(struct rte_dma_dev *dev, uint16_t vchan) +{ + struct skeldma_hw *hw = dev->dev_private; + RTE_SET_USED(vchan); + submit(hw, NULL); + return 0; +} + +static uint16_t +skeldma_completed(struct rte_dma_dev *dev, + uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, bool *has_error) +{ + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc = NULL; + uint16_t index = 0; + uint16_t count; + + RTE_SET_USED(vchan); + RTE_SET_USED(has_error); + + count = RTE_MIN(nb_cpls, rte_ring_count(hw->desc_completed)); + while (index < count) { + (void)rte_ring_dequeue(hw->desc_completed, (void **)&desc); + if (index == count - 1) + *last_idx = desc->ridx; + index++; + (void)rte_ring_enqueue(hw->desc_empty, (void *)desc); + } + + return count; +} + +static uint16_t +skeldma_completed_status(struct rte_dma_dev *dev, + uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, enum rte_dma_status_code *status) +{ + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc = NULL; + uint16_t index = 0; + uint16_t count; + + RTE_SET_USED(vchan); + + count = RTE_MIN(nb_cpls, rte_ring_count(hw->desc_completed)); + while (index < count) { + (void)rte_ring_dequeue(hw->desc_completed, (void **)&desc); + if (index == count - 1) + *last_idx = desc->ridx; + status[index++] = RTE_DMA_STATUS_SUCCESSFUL; + (void)rte_ring_enqueue(hw->desc_empty, (void *)desc); + } + + return count; +} + +static const struct rte_dma_dev_ops skeldma_ops = { + .dev_info_get = skeldma_info_get, + .dev_configure = skeldma_configure, + .dev_start = skeldma_start, + .dev_stop = skeldma_stop, + .dev_close = skeldma_close, + + .vchan_setup = skeldma_vchan_setup, + + .stats_get = skeldma_stats_get, + .stats_reset = skeldma_stats_reset, + + .dev_dump = skeldma_dump, +}; + +static int 
+skeldma_create(const char *name, struct rte_vdev_device *vdev, int lcore_id) +{ + struct rte_dma_dev *dev; + struct skeldma_hw *hw; + int socket_id; + + socket_id = (lcore_id < 0) ? rte_socket_id() : + rte_lcore_to_socket_id(lcore_id); + dev = rte_dma_pmd_allocate(name, socket_id, sizeof(struct skeldma_hw)); + if (dev == NULL) { + SKELDMA_LOG(ERR, "Unable to allocate dmadev: %s", name); + return -EINVAL; + } + + dev->copy = skeldma_copy; + dev->submit = skeldma_submit; + dev->completed = skeldma_completed; + dev->completed_status = skeldma_completed_status; + dev->dev_ops = &skeldma_ops; + dev->device = &vdev->device; + + hw = dev->dev_private; + hw->lcore_id = lcore_id; + hw->socket_id = socket_id; + + dev->state = RTE_DMA_DEV_READY; + + return dev->data->dev_id; +} + +static int +skeldma_destroy(const char *name) +{ + return rte_dma_pmd_release(name); +} + +static int +skeldma_parse_lcore(const char *key __rte_unused, + const char *value, + void *opaque) +{ + int lcore_id = atoi(value); + if (lcore_id >= 0 && lcore_id < RTE_MAX_LCORE) + *(int *)opaque = lcore_id; + return 0; +} + +static void +skeldma_parse_vdev_args(struct rte_vdev_device *vdev, int *lcore_id) +{ + static const char *const args[] = { + SKELDMA_ARG_LCORE, + NULL + }; + + struct rte_kvargs *kvlist; + const char *params; + + params = rte_vdev_device_args(vdev); + if (params == NULL || params[0] == '\0') + return; + + kvlist = rte_kvargs_parse(params, args); + if (!kvlist) + return; + + (void)rte_kvargs_process(kvlist, SKELDMA_ARG_LCORE, + skeldma_parse_lcore, lcore_id); + SKELDMA_LOG(INFO, "Parse lcore_id = %d", *lcore_id); + + rte_kvargs_free(kvlist); +} + +static int +skeldma_probe(struct rte_vdev_device *vdev) +{ + const char *name; + int lcore_id = -1; + int ret; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + SKELDMA_LOG(ERR, "Multiple process not supported for %s", name); + return -EINVAL; + } + + /* More than one instance is not supported */ + if (skeldma_count > 0) { + SKELDMA_LOG(ERR, "Multiple instance not supported for %s", + name); + return -EINVAL; + } + + skeldma_parse_vdev_args(vdev, &lcore_id); + + ret = skeldma_create(name, vdev, lcore_id); + if (ret >= 0) { + SKELDMA_LOG(INFO, "Create %s dmadev with lcore-id %d", + name, lcore_id); + skeldma_count = 1; + } + + return ret < 0 ? 
ret : 0; +} + +static int +skeldma_remove(struct rte_vdev_device *vdev) +{ + const char *name; + int ret; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -1; + + ret = skeldma_destroy(name); + if (!ret) { + skeldma_count = 0; + SKELDMA_LOG(INFO, "Removed %s dmadev", name); + } + + return ret; +} + +static struct rte_vdev_driver skeldma_pmd_drv = { + .probe = skeldma_probe, + .remove = skeldma_remove, + .drv_flags = RTE_VDEV_DRV_NEED_IOVA_AS_VA, +}; + +RTE_PMD_REGISTER_VDEV(dma_skeleton, skeldma_pmd_drv); +RTE_PMD_REGISTER_PARAM_STRING(dma_skeleton, + SKELDMA_ARG_LCORE "=<int> "); diff --git a/drivers/dma/skeleton/skeleton_dmadev.h b/drivers/dma/skeleton/skeleton_dmadev.h new file mode 100644 index 0000000000..eaa52364bf --- /dev/null +++ b/drivers/dma/skeleton/skeleton_dmadev.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + */ + +#ifndef SKELETON_DMADEV_H +#define SKELETON_DMADEV_H + +#include <pthread.h> + +#include <rte_ring.h> + +#define SKELDMA_ARG_LCORE "lcore" + +struct skeldma_desc { + void *src; + void *dst; + uint32_t len; + uint16_t ridx; /* ring idx */ +}; + +struct skeldma_hw { + int lcore_id; /* cpucopy task affinity core */ + int socket_id; + pthread_t thread; /* cpucopy task thread */ + volatile int exit_flag; /* cpucopy task exit flag */ + + struct skeldma_desc *desc_mem; + + /* Descriptor ring state machine: + * + * ----------- enqueue without submit ----------- + * | empty |------------------------------->| pending | + * -----------\ ----------- + * ^ \------------ | + * | | |submit doorbell + * | | | + * | |enqueue with submit | + * |get completed |------------------| | + * | | | + * | v v + * ----------- cpucopy thread working ----------- + * |completed|<-------------------------------| running | + * ----------- ----------- + */ + struct rte_ring *desc_empty; + struct rte_ring *desc_pending; + struct rte_ring *desc_running; + struct rte_ring *desc_completed; + + /* Cache delimiter for dataplane API's operation data */ + char cache1 __rte_cache_aligned; + uint16_t ridx; /* ring idx */ + uint64_t submitted_count; + + /* Cache delimiter for cpucopy thread's operation data */ + char cache2 __rte_cache_aligned; + uint32_t zero_req_count; + uint64_t completed_count; +}; + +#endif /* SKELETON_DMADEV_H */ diff --git a/drivers/dma/skeleton/version.map b/drivers/dma/skeleton/version.map new file mode 100644 index 0000000000..c2e0723b4c --- /dev/null +++ b/drivers/dma/skeleton/version.map @@ -0,0 +1,3 @@ +DPDK_22 { + local: *; +};
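A note on the ring design above: each of the four rings in struct skeldma_hw holds descriptors in exactly one state, so every state transition is simply a dequeue from one ring and an enqueue onto the next. The following is a condensed sketch of one descriptor's round trip through the rings, using only calls that appear in this driver; lifecycle_example is a hypothetical helper written for illustration, and the rte_memcpy step stands in for the cpucopy worker loop shown earlier in this patch:

    static void
    lifecycle_example(struct skeldma_hw *hw, void *src, void *dst, uint32_t len)
    {
        struct skeldma_desc *desc;

        /* enqueue without submit: empty -> pending (as in skeldma_copy) */
        (void)rte_ring_dequeue(hw->desc_empty, (void **)&desc);
        desc->src = src;
        desc->dst = dst;
        desc->len = len;
        (void)rte_ring_enqueue(hw->desc_pending, (void *)desc);

        /* submit doorbell: pending -> running (as in submit) */
        (void)rte_ring_dequeue(hw->desc_pending, (void **)&desc);
        (void)rte_ring_enqueue(hw->desc_running, (void *)desc);

        /* cpucopy worker: running -> completed, doing the actual copy */
        (void)rte_ring_dequeue(hw->desc_running, (void **)&desc);
        rte_memcpy(desc->dst, desc->src, desc->len);
        (void)rte_ring_enqueue(hw->desc_completed, (void *)desc);

        /* completion poll: completed -> empty (as in skeldma_completed) */
        (void)rte_ring_dequeue(hw->desc_completed, (void **)&desc);
        (void)rte_ring_enqueue(hw->desc_empty, (void *)desc);
    }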
From patchwork Fri Sep 24 10:53:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 99587 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 81654A0548; Fri, 24 Sep 2021 12:58:45 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D232341325; Fri, 24 Sep 2021 12:58:21 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 4D21C412F6 for ; Fri, 24 Sep 2021 12:58:14 +0200 (CEST) Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.57]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4HG86H1TLKzbmk6; Fri, 24 Sep 2021 18:53:59 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.8; Fri, 24 Sep 2021 18:58:12 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.8; Fri, 24 Sep 2021 18:58:12 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , , , Date: Fri, 24 Sep 2021 18:53:57 +0800 Message-ID: <20210924105357.15386-7-fengchengwen@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210924105357.15386-1-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <20210924105357.15386-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v23 6/6] app/test: add dmadev API test X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds a dmadev API test based on the 'dma_skeleton' vdev. The test cases can be executed using the 'dmadev_autotest' command in the test framework. Signed-off-by: Chengwen Feng Signed-off-by: Bruce Richardson Reviewed-by: Kevin Laatz Reviewed-by: Conor Walsh --- MAINTAINERS | 1 + app/test/meson.build | 4 + app/test/test_dmadev.c | 41 +++ app/test/test_dmadev_api.c | 574 +++++++++++++++++++++++++++++++++++++ app/test/test_dmadev_api.h | 5 + 5 files changed, 625 insertions(+) create mode 100644 app/test/test_dmadev.c create mode 100644 app/test/test_dmadev_api.c create mode 100644 app/test/test_dmadev_api.h
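For reference, once built, the suite can be driven either interactively from the dpdk-test binary or non-interactively through the DPDK_TEST environment variable; the build-directory path below is illustrative:

    $ ./build/app/test/dpdk-test
    RTE>>dmadev_autotest

    $ DPDK_TEST=dmadev_autotest ./build/app/test/dpdk-test

No extra --vdev argument is needed: test_apis() below creates the 'dma_skeleton' vdev itself via rte_vdev_init().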
diff --git a/MAINTAINERS b/MAINTAINERS index 85d4f83395..3258da194d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -458,6 +458,7 @@ DMA device API - EXPERIMENTAL M: Chengwen Feng F: lib/dmadev/ F: drivers/dma/skeleton/ +F: app/test/test_dmadev* F: doc/guides/prog_guide/dmadev.rst Eventdev API diff --git a/app/test/meson.build b/app/test/meson.build index a7611686ad..9027eba3a4 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -43,6 +43,8 @@ test_sources = files( 'test_debug.c', 'test_distributor.c', 'test_distributor_perf.c', + 'test_dmadev.c', + 'test_dmadev_api.c', 'test_eal_flags.c', 'test_eal_fs.c', 'test_efd.c', @@ -162,6 +164,7 @@ test_deps = [ 'cmdline', 'cryptodev', 'distributor', + 'dmadev', 'efd', 'ethdev', 'eventdev', @@ -333,6 +336,7 @@ driver_test_names = [ 'cryptodev_sw_mvsam_autotest', 'cryptodev_sw_snow3g_autotest', 'cryptodev_sw_zuc_autotest', + 'dmadev_autotest', 'eventdev_selftest_octeontx', 'eventdev_selftest_sw', 'rawdev_autotest', diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c new file mode 100644 index 0000000000..75cc939158 --- /dev/null +++ b/app/test/test_dmadev.c @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + * Copyright(c) 2021 Intel Corporation + */ + +#include <rte_dmadev.h> +#include <rte_bus_vdev.h> + +#include "test.h" +#include "test_dmadev_api.h" + +static int +test_apis(void) +{ + const char *pmd = "dma_skeleton"; + int id; + int ret; + + if (rte_vdev_init(pmd, NULL) < 0) + return TEST_SKIPPED; + id = rte_dma_get_dev_id(pmd); + if (id < 0) + return TEST_SKIPPED; + printf("\n### Test dmadev infrastructure using skeleton driver\n"); + ret = test_dma_api(id); + rte_vdev_uninit(pmd); + + return ret; +} + +static int +test_dma(void) +{ + /* basic sanity on dmadev infrastructure */ + if (test_apis() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(dmadev_autotest, test_dma); diff --git a/app/test/test_dmadev_api.c b/app/test/test_dmadev_api.c new file mode 100644 index 0000000000..90c317aae2 --- /dev/null +++ b/app/test/test_dmadev_api.c @@ -0,0 +1,574 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + */ + +#include <string.h> + +#include <rte_cycles.h> +#include <rte_malloc.h> +#include <rte_test.h> +#include <rte_dmadev.h> + +extern int test_dma_api(uint16_t dev_id); + +#define DMA_TEST_API_RUN(test) \ + testsuite_run_test(test, #test) + +#define TEST_MEMCPY_SIZE 1024 +#define TEST_WAIT_US_VAL 50000 + +#define TEST_SUCCESS 0 +#define TEST_FAILED -1 + +static int16_t test_dev_id; +static int16_t invalid_dev_id; + +static char *src; +static char *dst; + +static int total; +static int passed; +static int failed; + +static int +testsuite_setup(int16_t dev_id) +{ + test_dev_id = dev_id; + invalid_dev_id = -1; + + src = rte_malloc("dmadev_test_src", TEST_MEMCPY_SIZE, 0); + if (src == NULL) + return -ENOMEM; + dst = rte_malloc("dmadev_test_dst", TEST_MEMCPY_SIZE, 0); + if (dst == NULL) { + rte_free(src); + src = NULL; + return -ENOMEM; + } + + total = 0; + passed = 0; + failed = 0; + + /* Set dmadev log level to critical to suppress unnecessary output + * during API tests. + */ + rte_log_set_level_pattern("lib.dmadev", RTE_LOG_CRIT); + + return 0; +} + +static void +testsuite_teardown(void) +{ + rte_free(src); + src = NULL; + rte_free(dst); + dst = NULL; + /* Ensure the dmadev is stopped. */ + rte_dma_stop(test_dev_id); + + rte_log_set_level_pattern("lib.dmadev", RTE_LOG_INFO); +} + +static void +testsuite_run_test(int (*test)(void), const char *name) +{ + int ret = 0; + + if (test) { + ret = test(); + if (ret < 0) { + failed++; + printf("%s Failed\n", name); + } else { + passed++; + printf("%s Passed\n", name); + } + } + + total++; +} + +static int +test_dma_get_dev_id(void) +{ + int ret = rte_dma_get_dev_id("invalid_dmadev_device"); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + return TEST_SUCCESS; +} + +static int +test_dma_is_valid_dev(void) +{ + int ret; + ret = rte_dma_is_valid(invalid_dev_id); + RTE_TEST_ASSERT(ret == false, "Expected false for invalid dev id"); + ret = rte_dma_is_valid(test_dev_id); + RTE_TEST_ASSERT(ret == true, "Expected true for valid dev id"); + return TEST_SUCCESS; +} + +static int +test_dma_count(void) +{ + uint16_t count = rte_dma_count_avail(); + RTE_TEST_ASSERT(count > 0, "Invalid dmadev count %u", count); + return TEST_SUCCESS; +} + +static int +test_dma_info_get(void) +{ + struct rte_dma_info info = { 0 }; + int ret; + + ret = rte_dma_info_get(invalid_dev_id, &info); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_info_get(test_dev_id, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_info_get(test_dev_id, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + + return TEST_SUCCESS; +} + +static int +test_dma_configure(void) +{ + struct rte_dma_conf conf = { 0 }; + struct rte_dma_info info = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dma_configure(invalid_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_configure(test_dev_id, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for nb_vchans == 0 */ +
memset(&conf, 0, sizeof(conf)); + ret = rte_dma_configure(test_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for conf.nb_vchans > info.max_vchans */ + ret = rte_dma_info_get(test_dev_id, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + memset(&conf, 0, sizeof(conf)); + conf.nb_vchans = info.max_vchans + 1; + ret = rte_dma_configure(test_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check that enabling silent mode fails (not supported by skeleton) */ + memset(&conf, 0, sizeof(conf)); + conf.nb_vchans = info.max_vchans; + conf.enable_silent = true; + ret = rte_dma_configure(test_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Configure success */ + memset(&conf, 0, sizeof(conf)); + conf.nb_vchans = info.max_vchans; + ret = rte_dma_configure(test_dev_id, &conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure dmadev, %d", ret); + + /* Verify the configuration took effect */ + ret = rte_dma_info_get(test_dev_id, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + RTE_TEST_ASSERT_EQUAL(conf.nb_vchans, info.nb_vchans, + "Configured nb_vchans does not match"); + + return TEST_SUCCESS; +} + +static int +check_direction(void) +{ + struct rte_dma_vchan_conf vchan_conf; + int ret; + + /* Check for direction */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_DEV_TO_DEV + 1; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM - 1; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for direction and dev_capa combination */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_DEV; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_DEV_TO_MEM; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_DEV_TO_DEV; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + return 0; +} + +static int +check_port_type(struct rte_dma_info *dev_info) +{ + struct rte_dma_vchan_conf vchan_conf; + int ret; + + /* Check src port type validation */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info->min_desc; + vchan_conf.src_port.port_type = RTE_DMA_PORT_PCIE; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check dst port type validation */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info->min_desc; + vchan_conf.dst_port.port_type = RTE_DMA_PORT_PCIE; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + return 0; +} + +static int +test_dma_vchan_setup(void) +{ + struct rte_dma_vchan_conf vchan_conf = { 0 }; + struct rte_dma_conf dev_conf = { 0 }; + struct rte_dma_info dev_info = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dma_vchan_setup(invalid_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); +
ret = rte_dma_vchan_setup(test_dev_id, 0, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Make sure the device is configured */ + ret = rte_dma_info_get(test_dev_id, &dev_info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + dev_conf.nb_vchans = dev_info.max_vchans; + ret = rte_dma_configure(test_dev_id, &dev_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure dmadev, %d", ret); + + /* Check for invalid vchan */ + ret = rte_dma_vchan_setup(test_dev_id, dev_conf.nb_vchans, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for direction */ + ret = check_direction(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to check direction"); + + /* Check for nb_desc validation */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc - 1; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.nb_desc = dev_info.max_desc + 1; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check port type */ + ret = check_port_type(&dev_info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to check port type"); + + /* Check successful vchan setup */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup vchan, %d", ret); + + return TEST_SUCCESS; +} + +static int +setup_one_vchan(void) +{ + struct rte_dma_vchan_conf vchan_conf = { 0 }; + struct rte_dma_info dev_info = { 0 }; + struct rte_dma_conf dev_conf = { 0 }; + int ret; + + ret = rte_dma_info_get(test_dev_id, &dev_info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret); + dev_conf.nb_vchans = dev_info.max_vchans; + ret = rte_dma_configure(test_dev_id, &dev_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc; + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup vchan, %d", ret); + + return TEST_SUCCESS; +} + +static int +test_dma_start_stop(void) +{ + struct rte_dma_vchan_conf vchan_conf = { 0 }; + struct rte_dma_conf dev_conf = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dma_start(invalid_dev_id); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_stop(invalid_dev_id); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + ret = rte_dma_start(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret); + + /* Check that reconfigure and vchan setup fail while the device is started */ + ret = rte_dma_configure(test_dev_id, &dev_conf); + RTE_TEST_ASSERT(ret == -EBUSY, "Expected -EBUSY, %d", ret); + ret = rte_dma_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EBUSY, "Expected -EBUSY, %d", ret); + + ret = rte_dma_stop(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret); + + return TEST_SUCCESS; +}
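The -EBUSY checks above encode the dmadev lifecycle rule that a device must be stopped before it can be reconfigured. A minimal sketch of the ordering an application has to respect, with dev_id, dev_conf and vchan_conf assumed to be set up as in the tests and error handling elided:

    rte_dma_stop(dev_id);                        /* reconfiguration requires a stopped device */
    rte_dma_configure(dev_id, &dev_conf);        /* would return -EBUSY on a started device */
    rte_dma_vchan_setup(dev_id, 0, &vchan_conf);
    rte_dma_start(dev_id);                       /* resume operation */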
+ +static int +test_dma_stats(void) +{ + struct rte_dma_info dev_info = { 0 }; + struct rte_dma_stats stats = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dma_stats_get(invalid_dev_id, 0, &stats); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_stats_get(invalid_dev_id, 0, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_stats_reset(invalid_dev_id, 0); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + /* Check for invalid vchan */ + ret = rte_dma_info_get(test_dev_id, &dev_info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret); + ret = rte_dma_stats_get(test_dev_id, dev_info.max_vchans, &stats); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_stats_reset(test_dev_id, dev_info.max_vchans); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for valid vchan */ + ret = rte_dma_stats_get(test_dev_id, 0, &stats); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get stats, %d", ret); + ret = rte_dma_stats_get(test_dev_id, RTE_DMA_ALL_VCHAN, &stats); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get all stats, %d", ret); + ret = rte_dma_stats_reset(test_dev_id, 0); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to reset stats, %d", ret); + ret = rte_dma_stats_reset(test_dev_id, RTE_DMA_ALL_VCHAN); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to reset all stats, %d", ret); + + return TEST_SUCCESS; +} + +static int +test_dma_dump(void) +{ + int ret; + + /* Check for invalid parameters */ + ret = rte_dma_dump(invalid_dev_id, stderr); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dma_dump(test_dev_id, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + return TEST_SUCCESS; +} + +static void +setup_memory(void) +{ + int i; + + for (i = 0; i < TEST_MEMCPY_SIZE; i++) + src[i] = (char)i; + memset(dst, 0, TEST_MEMCPY_SIZE); +} + +static int +verify_memory(void) +{ + int i; + + for (i = 0; i < TEST_MEMCPY_SIZE; i++) { + if (src[i] == dst[i]) + continue; + RTE_TEST_ASSERT_EQUAL(src[i], dst[i], + "Failed to copy memory, %d %d", src[i], dst[i]); + } + + return 0; +} + +static int +test_dma_completed(void) +{ + uint16_t last_idx = 1; + bool has_error = true; + uint16_t cpl_ret; + int ret; + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + ret = rte_dma_start(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret); + + setup_memory(); + + /* Check enqueue without submit */ + ret = rte_dma_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, 0); + RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to enqueue copy, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dma_completed(test_dev_id, 0, 1, &last_idx, &has_error); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 0, "Expected no completion yet"); + + /* Check explicit submit */ + ret = rte_dma_submit(test_dev_id, 0); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to submit, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dma_completed(test_dev_id, 0, 1, &last_idx, &has_error); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed"); + RTE_TEST_ASSERT_EQUAL(last_idx, 0, "Last idx should be zero, %u", + last_idx); + RTE_TEST_ASSERT_EQUAL(has_error, false, "Should have no error"); + ret =
verify_memory(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to verify memory"); + + setup_memory(); + + /* Check for enqueue with submit */ + ret = rte_dma_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, RTE_DMA_OP_FLAG_SUBMIT); + RTE_TEST_ASSERT_EQUAL(ret, 1, "Failed to enqueue copy, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dma_completed(test_dev_id, 0, 1, &last_idx, &has_error); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed"); + RTE_TEST_ASSERT_EQUAL(last_idx, 1, "Last idx should be 1, %u", + last_idx); + RTE_TEST_ASSERT_EQUAL(has_error, false, "Should have no error"); + ret = verify_memory(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to verify memory"); + + /* Stop dmadev to return it to a known state */ + ret = rte_dma_stop(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret); + + return TEST_SUCCESS; +} + +static int +test_dma_completed_status(void) +{ + enum rte_dma_status_code status[1] = { 1 }; + uint16_t last_idx = 1; + uint16_t cpl_ret, i; + int ret; + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + ret = rte_dma_start(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret); + + /* Check for enqueue with submit */ + ret = rte_dma_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, RTE_DMA_OP_FLAG_SUBMIT); + RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to enqueue copy, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dma_completed_status(test_dev_id, 0, 1, &last_idx, + status); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed status"); + RTE_TEST_ASSERT_EQUAL(last_idx, 0, "Last idx should be zero, %u", + last_idx); + for (i = 0; i < RTE_DIM(status); i++) + RTE_TEST_ASSERT_EQUAL(status[i], 0, + "Failed to get completed status, %d", status[i]); + + /* Check completed status again */ + cpl_ret = rte_dma_completed_status(test_dev_id, 0, 1, &last_idx, + status); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 0, "Failed to get completed status"); + + /* Check for enqueue with submit again */ + ret = rte_dma_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, RTE_DMA_OP_FLAG_SUBMIT); + RTE_TEST_ASSERT_EQUAL(ret, 1, "Failed to enqueue copy, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dma_completed_status(test_dev_id, 0, 1, &last_idx, + status); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed status"); + RTE_TEST_ASSERT_EQUAL(last_idx, 1, "Last idx should be 1, %u", + last_idx); + for (i = 0; i < RTE_DIM(status); i++) + RTE_TEST_ASSERT_EQUAL(status[i], 0, + "Failed to get completed status, %d", status[i]); + + /* Stop dmadev to return it to a known state */ + ret = rte_dma_stop(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret); + + return TEST_SUCCESS; +} + +int +test_dma_api(uint16_t dev_id) +{ + int ret = testsuite_setup(dev_id); + if (ret) { + printf("testsuite setup failed!\n"); + return -1; + } + + /* Each testcase, when it exits successfully, must ensure that the test + * dmadev still exists and is left in the stopped state.
+ */ + DMA_TEST_API_RUN(test_dma_get_dev_id); + DMA_TEST_API_RUN(test_dma_is_valid_dev); + DMA_TEST_API_RUN(test_dma_count); + DMA_TEST_API_RUN(test_dma_info_get); + DMA_TEST_API_RUN(test_dma_configure); + DMA_TEST_API_RUN(test_dma_vchan_setup); + DMA_TEST_API_RUN(test_dma_start_stop); + DMA_TEST_API_RUN(test_dma_stats); + DMA_TEST_API_RUN(test_dma_dump); + DMA_TEST_API_RUN(test_dma_completed); + DMA_TEST_API_RUN(test_dma_completed_status); + + testsuite_teardown(); + + printf("Total tests : %d\n", total); + printf("Passed : %d\n", passed); + printf("Failed : %d\n", failed); + + if (failed) + return -1; + + return 0; +} diff --git a/app/test/test_dmadev_api.h b/app/test/test_dmadev_api.h new file mode 100644 index 0000000000..33fbc5bd41 --- /dev/null +++ b/app/test/test_dmadev_api.h @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited + */ + +int test_dma_api(uint16_t dev_id);
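Taken together, the cases above walk the canonical dmadev call sequence: configure, vchan setup, start, copy with submit, completion poll, stop. The following is a minimal self-contained sketch of that sequence against the skeleton vdev; skeleton_copy_once is a hypothetical helper, and the descriptor count and busy-wait poll are illustrative choices, not values mandated by the API:

    #include <stdbool.h>
    #include <rte_bus_vdev.h>
    #include <rte_dmadev.h>

    static int
    skeleton_copy_once(rte_iova_t src, rte_iova_t dst, uint32_t len)
    {
        struct rte_dma_conf conf = { .nb_vchans = 1 };
        struct rte_dma_vchan_conf vconf = {
            .direction = RTE_DMA_DIR_MEM_TO_MEM,
            .nb_desc = 1024,
        };
        uint16_t last_idx;
        bool has_error = false;
        int dev_id;

        /* create the skeleton vdev and look up its device id */
        if (rte_vdev_init("dma_skeleton", NULL) < 0)
            return -1;
        dev_id = rte_dma_get_dev_id("dma_skeleton");
        if (dev_id < 0)
            return -1;

        rte_dma_configure(dev_id, &conf);
        rte_dma_vchan_setup(dev_id, 0, &vconf);
        rte_dma_start(dev_id);

        /* enqueue one copy and ring the doorbell in the same call */
        rte_dma_copy(dev_id, 0, src, dst, len, RTE_DMA_OP_FLAG_SUBMIT);
        while (rte_dma_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
            ; /* the cpucopy thread completes the request asynchronously */

        return rte_dma_stop(dev_id);
    }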