From patchwork Mon Aug 23 03:31:26 2021
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 97182
X-Patchwork-Delegate: thomas@monjalon.net
From: Chengwen Feng
Date: Mon, 23 Aug 2021 11:31:26 +0800
Message-ID: <1629689494-55091-2-git-send-email-fengchengwen@huawei.com>
In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v16 1/9] dmadev: introduce DMA device library public APIs

The 'dmadevice' is a generic type of DMA device. This patch introduces the
'dmadevice' public APIs, which expose generic operations that enable
configuration of, and I/O with, DMA devices.
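As a minimal usage sketch (editor's illustration only, not part of the patch:
the device name "dma0" is a placeholder, src_iova/dst_iova/len are assumed to
describe IOVA-contiguous buffers, and error handling is omitted):

    int dev_id = rte_dmadev_get_dev_id("dma0");
    struct rte_dmadev_conf conf = { .nb_vchans = 1 };
    struct rte_dmadev_vchan_conf vconf = {
        .direction = RTE_DMA_DIR_MEM_TO_MEM,
        .nb_desc = 64,
    };
    uint16_t last_idx;
    bool has_error;

    rte_dmadev_configure(dev_id, &conf);
    rte_dmadev_vchan_setup(dev_id, 0, &vconf);
    rte_dmadev_start(dev_id);

    /* enqueue one copy and ring the doorbell in the same call */
    rte_dmadev_copy(dev_id, 0, src_iova, dst_iova, len,
                    RTE_DMA_OP_FLAG_SUBMIT);
    while (rte_dmadev_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
        ;   /* poll until the copy completes */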
Signed-off-by: Chengwen Feng
Acked-by: Bruce Richardson
Acked-by: Morten Brørup
Acked-by: Jerin Jacob
---
 doc/api/doxy-api-index.md |   1 +
 doc/api/doxy-api.conf.in  |   1 +
 lib/dmadev/meson.build    |   4 +
 lib/dmadev/rte_dmadev.h   | 957 ++++++++++++++++++++++++++++++++++++++++++++++
 lib/dmadev/version.map    |  25 ++
 lib/meson.build           |   1 +
 6 files changed, 989 insertions(+)
 create mode 100644 lib/dmadev/meson.build
 create mode 100644 lib/dmadev/rte_dmadev.h
 create mode 100644 lib/dmadev/version.map

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107..ce08250 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -27,6 +27,7 @@ The public API headers are grouped by topics:
   [event_timer_adapter]    (@ref rte_event_timer_adapter.h),
   [event_crypto_adapter]   (@ref rte_event_crypto_adapter.h),
   [rawdev]             (@ref rte_rawdev.h),
+  [dmadev]             (@ref rte_dmadev.h),
   [metrics]            (@ref rte_metrics.h),
   [bitrate]            (@ref rte_bitrate.h),
   [latency]            (@ref rte_latencystats.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a019..a44a92b 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -34,6 +34,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
           @TOPDIR@/lib/cmdline \
           @TOPDIR@/lib/compressdev \
           @TOPDIR@/lib/cryptodev \
+          @TOPDIR@/lib/dmadev \
           @TOPDIR@/lib/distributor \
           @TOPDIR@/lib/efd \
           @TOPDIR@/lib/ethdev \
diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
new file mode 100644
index 0000000..6d5bd85
--- /dev/null
+++ b/lib/dmadev/meson.build
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 HiSilicon Limited.
+
+headers = files('rte_dmadev.h')
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
new file mode 100644
index 0000000..a008ee0
--- /dev/null
+++ b/lib/dmadev/rte_dmadev.h
@@ -0,0 +1,957 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ * Copyright(c) 2021 Intel Corporation.
+ * Copyright(c) 2021 Marvell International Ltd.
+ * Copyright(c) 2021 SmartShare Systems.
+ */
+
+#ifndef _RTE_DMADEV_H_
+#define _RTE_DMADEV_H_
+
+/**
+ * @file rte_dmadev.h
+ *
+ * RTE DMA (Direct Memory Access) device APIs.
+ *
+ * The DMA framework is built on the following model:
+ *
+ *     ---------------   ---------------       ---------------
+ *     | virtual DMA |   | virtual DMA |       | virtual DMA |
+ *     | channel     |   | channel     |       | channel     |
+ *     ---------------   ---------------       ---------------
+ *            |                |                      |
+ *            ------------------                      |
+ *                     |                              |
+ *               ------------                    ------------
+ *               |  dmadev  |                    |  dmadev  |
+ *               ------------                    ------------
+ *                     |                              |
+ *            ------------------               ------------------
+ *            | HW-DMA-channel |               | HW-DMA-channel |
+ *            ------------------               ------------------
+ *                     |                              |
+ *                     --------------------------------
+ *                                     |
+ *                           ---------------------
+ *                           | HW-DMA-Controller |
+ *                           ---------------------
+ *
+ * A DMA controller may have multiple HW-DMA-channels (aka HW-DMA-queues);
+ * each HW-DMA-channel should be represented by a dmadev.
+ *
+ * A dmadev can expose multiple virtual DMA channels; each virtual DMA
+ * channel represents a different transfer context. DMA operation requests
+ * must be submitted to a virtual DMA channel. For example, an application
+ * could create virtual DMA channel 0 for memory-to-memory transfers and
+ * virtual DMA channel 1 for memory-to-device transfers.
+ *
+ * A dmadev is dynamically allocated by rte_dmadev_pmd_allocate() during the
+ * PCI/SoC device probing phase performed at EAL initialization time.
 * It is released by rte_dmadev_pmd_release() during the PCI/SoC device
+ * removal phase.
+ *
+ * This framework uses 'uint16_t dev_id' as the device identifier of a
+ * dmadev, and 'uint16_t vchan' as the virtual DMA channel identifier within
+ * one dmadev.
+ *
+ * The functions exported by the dmadev API to set up a device designated by
+ * its device identifier must be invoked in the following order:
+ *     - rte_dmadev_configure()
+ *     - rte_dmadev_vchan_setup()
+ *     - rte_dmadev_start()
+ *
+ * Then, the application can invoke dataplane APIs to process jobs.
+ *
+ * If the application wants to change the configuration (i.e. invoke
+ * rte_dmadev_configure() or rte_dmadev_vchan_setup()), it must invoke
+ * rte_dmadev_stop() first to stop the device, do the reconfiguration, and
+ * then invoke rte_dmadev_start() again. The dataplane APIs should not be
+ * invoked while the device is stopped.
+ *
+ * Finally, an application can close a dmadev by invoking the
+ * rte_dmadev_close() function.
+ *
+ * The dataplane APIs include two parts.
+ * The first part is the submission of operation requests:
+ *     - rte_dmadev_copy()
+ *     - rte_dmadev_copy_sg()
+ *     - rte_dmadev_fill()
+ *     - rte_dmadev_submit()
+ *
+ * These APIs can work with different virtual DMA channels, which have
+ * different contexts.
+ *
+ * The first three APIs are used to submit an operation request to a virtual
+ * DMA channel; if the submission is successful, a uint16_t ring_idx is
+ * returned, otherwise a negative number is returned.
+ *
+ * The last API is used to issue a doorbell to the hardware; passing the
+ * RTE_DMA_OP_FLAG_SUBMIT flag to one of the first three APIs does the same
+ * work.
+ *
+ * The second part is obtaining the results of requests:
+ *     - rte_dmadev_completed()
+ *         - return the number of operation requests completed successfully.
+ *     - rte_dmadev_completed_status()
+ *         - return the number of operation requests completed.
+ *
+ * @note If the dmadev works in silent mode (@see RTE_DMADEV_CAPA_SILENT),
+ * the application does not invoke the above two completion APIs.
+ *
+ * The ring_idx returned by the enqueue APIs (e.g. rte_dmadev_copy(),
+ * rte_dmadev_fill()) obeys the following rules:
+ *     - The ring_idx values of different virtual DMA channels are
+ *       independent.
+ *     - For a given virtual DMA channel, ring_idx is monotonically
+ *       incremented; when it reaches UINT16_MAX, it wraps back to zero.
+ *     - ring_idx can be used by applications to track per-operation
+ *       metadata in an application-defined circular ring.
+ *     - The initial ring_idx of a virtual DMA channel is zero; after the
+ *       device is stopped, the ring_idx needs to be reset to zero.
+ *
+ * One example:
+ *     - step-1: start one dmadev
+ *     - step-2: enqueue a copy operation, the ring_idx returned is 0
+ *     - step-3: enqueue a copy operation again, the ring_idx returned is 1
+ *     - ...
+ *     - step-101: stop the dmadev
+ *     - step-102: start the dmadev
+ *     - step-103: enqueue a copy operation, the ring_idx returned is 0
+ *     - ...
+ *     - step-x+0: enqueue a fill operation, the ring_idx returned is 65535
+ *     - step-x+1: enqueue a copy operation, the ring_idx returned is 0
+ *     - ...
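+ *
+ * As an editor's illustration (not part of the original patch), an
+ * application might use ring_idx to index a power-of-two metadata ring;
+ * the struct and ring size below are invented:
+ *
+ * \code{.unparsed}
+ *    struct op_meta meta[1024];  // 1024 divides 65536, so ring_idx
+ *                                // wrap-around keeps the mapping consistent
+ *    int idx = rte_dmadev_copy(dev_id, vchan, src, dst, len, 0);
+ *    if (idx >= 0)
+ *        meta[idx & 1023].cookie = my_cookie;
+ * \endcode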
+ *
+ * The DMA operation addresses used in the enqueue APIs (i.e.
+ * rte_dmadev_copy(), rte_dmadev_copy_sg(), rte_dmadev_fill()) are defined as
+ * rte_iova_t type. The dmadev supports two types of address: memory address
+ * and device address.
+ *
+ * - memory address: the source and destination address of a memory-to-memory
+ * transfer, the source address of a memory-to-device transfer, or the
+ * destination address of a device-to-memory transfer.
+ * @note If the device supports SVA (@see RTE_DMADEV_CAPA_SVA), the memory
+ * address can be any VA address, otherwise it must be an IOVA address.
+ *
+ * - device address: the source and destination address of a device-to-device
+ * transfer, the source address of a device-to-memory transfer, or the
+ * destination address of a memory-to-device transfer.
+ *
+ * By default, all the functions of the dmadev API exported by a PMD are
+ * lock-free functions, which assume they are not invoked in parallel on
+ * different logical cores to work on the same target dmadev object.
+ * @note Different virtual DMA channels on the same dmadev *DO NOT* support
+ * parallel invocation because these virtual DMA channels share the same
+ * HW-DMA-channel.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_DMADEV_NAME_MAX_LEN    RTE_DEV_NAME_MAX_LEN
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get the device identifier for the named DMA device.
+ *
+ * @param name
+ *   DMA device name.
+ *
+ * @return
+ *   Returns DMA device identifier on success.
+ *   - <0: Failure to find named DMA device.
+ */
+__rte_experimental
+int
+rte_dmadev_get_dev_id(const char *name);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Check whether the dev_id is a valid DMA device identifier.
+ *
+ * @param dev_id
+ *   DMA device index.
+ *
+ * @return
+ *   - If the device index is valid (true) or not (false).
+ */
+__rte_experimental
+bool
+rte_dmadev_is_valid_dev(uint16_t dev_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get the total number of DMA devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   The total number of usable DMA devices.
+ */
+__rte_experimental
+uint16_t
+rte_dmadev_count(void);
+
+/* Enumerates DMA device capabilities. */
+#define RTE_DMADEV_CAPA_MEM_TO_MEM    (1ull << 0)
+/**< DMA device supports memory-to-memory transfer.
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ */
+
+#define RTE_DMADEV_CAPA_MEM_TO_DEV    (1ull << 1)
+/**< DMA device supports memory-to-device transfer.
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ * @see struct rte_dmadev_port_param::port_type
+ */
+
+#define RTE_DMADEV_CAPA_DEV_TO_MEM    (1ull << 2)
+/**< DMA device supports device-to-memory transfer.
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ * @see struct rte_dmadev_port_param::port_type
+ */
+
+#define RTE_DMADEV_CAPA_DEV_TO_DEV    (1ull << 3)
+/**< DMA device supports device-to-device transfer.
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ * @see struct rte_dmadev_port_param::port_type
+ */
+
+#define RTE_DMADEV_CAPA_SVA           (1ull << 4)
+/**< DMA device supports SVA, which allows using VA as the DMA address.
+ * If the device supports SVA, the application can pass any VA address, e.g.
+ * memory from rte_malloc(), rte_memzone(), malloc() or the stack.
+ * If the device does not support SVA, the application must pass IOVA
+ * addresses, e.g. obtained from rte_malloc() or rte_memzone().
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ */
+
+#define RTE_DMADEV_CAPA_SILENT        (1ull << 5)
+/**< DMA device supports working in silent mode.
+ * In this mode, the application is not required to invoke the
+ * rte_dmadev_completed*() APIs.
+ *
+ * @see struct rte_dmadev_conf::enable_silent
+ */
+
+#define RTE_DMADEV_CAPA_OPS_COPY      (1ull << 32)
+/**< DMA device supports copy ops.
+ * This capability starts at bit index 32, leaving a gap between the normal
+ * capability bits and the ops capability bits.
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ */
+
+#define RTE_DMADEV_CAPA_OPS_COPY_SG   (1ull << 33)
+/**< DMA device supports scatter-gather list copy ops.
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ */
+
+#define RTE_DMADEV_CAPA_OPS_FILL      (1ull << 34)
+/**< DMA device supports fill ops.
+ *
+ * @see struct rte_dmadev_info::dev_capa
+ */
+
+/**
+ * A structure used to retrieve the information of a DMA device.
+ */
+struct rte_dmadev_info {
+    struct rte_device *device; /**< Generic Device information. */
+    uint64_t dev_capa; /**< Device capabilities (RTE_DMADEV_CAPA_*). */
+    uint16_t max_vchans;
+    /**< Maximum number of virtual DMA channels supported. */
+    uint16_t max_desc;
+    /**< Maximum allowed number of virtual DMA channel descriptors. */
+    uint16_t min_desc;
+    /**< Minimum allowed number of virtual DMA channel descriptors. */
+    uint16_t max_sges;
+    /**< Maximum number of source or destination scatter-gather entries
+     * supported.
+     * If the device does not support the COPY_SG capability, this value
+     * can be zero.
+     * If the device supports the COPY_SG capability, then the
+     * rte_dmadev_copy_sg() parameters nb_src/nb_dst should not exceed
+     * this value.
+     */
+    uint16_t nb_vchans; /**< Number of virtual DMA channels configured. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Retrieve information of a DMA device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param[out] dev_info
+ *   A pointer to a structure of type *rte_dmadev_info* to be filled with the
+ *   information of the device.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info);
+
+/**
+ * A structure used to configure a DMA device.
+ */
+struct rte_dmadev_conf {
+    uint16_t nb_vchans;
+    /**< The number of virtual DMA channels to set up for the DMA device.
+     * This value cannot be greater than the 'max_vchans' field of struct
+     * rte_dmadev_info obtained from rte_dmadev_info_get().
+     */
+    bool enable_silent;
+    /**< Indicates whether to enable silent mode.
+     * false - default mode, true - silent mode.
+     * This value can be set to true only when the SILENT capability is
+     * supported.
+     *
+     * @see RTE_DMADEV_CAPA_SILENT
+     */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure a DMA device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. It can also be re-invoked when a device is in the stopped state.
+ *
+ * @param dev_id
+ *   The identifier of the device to configure.
+ * @param dev_conf
+ *   The DMA device configuration structure encapsulated into a
+ *   rte_dmadev_conf object.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Start a DMA device.
+ *
+ * The device start step is the last one, and consists of setting the DMA
+ * device up to start accepting jobs.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_start(uint16_t dev_id);
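+
+/*
+ * Editor's illustration (not part of the original patch): checking a
+ * capability bit before enabling the matching configuration option;
+ * dev_id is assumed to be a valid device identifier.
+ *
+ *    struct rte_dmadev_info info;
+ *    struct rte_dmadev_conf conf = { .nb_vchans = 1 };
+ *
+ *    rte_dmadev_info_get(dev_id, &info);
+ *    if (info.dev_capa & RTE_DMADEV_CAPA_SILENT)
+ *        conf.enable_silent = true;
+ *    rte_dmadev_configure(dev_id, &conf);
+ */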
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Stop a DMA device.
+ *
+ * The device can be restarted with a call to rte_dmadev_start().
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_stop(uint16_t dev_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Close a DMA device.
+ *
+ * The device cannot be restarted after this call.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_close(uint16_t dev_id);
+
+/**
+ * rte_dma_direction - DMA transfer direction defines.
+ */
+enum rte_dma_direction {
+    RTE_DMA_DIR_MEM_TO_MEM,
+    /**< DMA transfer direction - from memory to memory.
+     *
+     * @see struct rte_dmadev_vchan_conf::direction
+     */
+    RTE_DMA_DIR_MEM_TO_DEV,
+    /**< DMA transfer direction - from memory to device.
+     * In a typical scenario, the SoCs are installed on host servers as
+     * iNICs through the PCIe interface. In this case, the SoC works in
+     * EP (endpoint) mode; it can initiate a DMA move request from memory
+     * (which is SoC memory) to device (which is host memory).
+     *
+     * @see struct rte_dmadev_vchan_conf::direction
+     */
+    RTE_DMA_DIR_DEV_TO_MEM,
+    /**< DMA transfer direction - from device to memory.
+     * In a typical scenario, the SoCs are installed on host servers as
+     * iNICs through the PCIe interface. In this case, the SoC works in
+     * EP (endpoint) mode; it can initiate a DMA move request from device
+     * (which is host memory) to memory (which is SoC memory).
+     *
+     * @see struct rte_dmadev_vchan_conf::direction
+     */
+    RTE_DMA_DIR_DEV_TO_DEV,
+    /**< DMA transfer direction - from device to device.
+     * In a typical scenario, the SoCs are installed on host servers as
+     * iNICs through the PCIe interface. In this case, the SoC works in
+     * EP (endpoint) mode; it can initiate a DMA move request from device
+     * (which is host memory) to device (which is another host's memory).
+     *
+     * @see struct rte_dmadev_vchan_conf::direction
+     */
+};
+
+/**
+ * enum rte_dmadev_port_type - DMA access port type defines.
+ *
+ * @see struct rte_dmadev_port_param::port_type
+ */
+enum rte_dmadev_port_type {
+    RTE_DMADEV_PORT_NONE,
+    RTE_DMADEV_PORT_PCIE, /**< The DMA access port is PCIe. */
+};
+
+/**
+ * A structure used to describe DMA access port parameters.
+ *
+ * @see struct rte_dmadev_vchan_conf::src_port
+ * @see struct rte_dmadev_vchan_conf::dst_port
+ */
+struct rte_dmadev_port_param {
+    enum rte_dmadev_port_type port_type;
+    /**< The device access port type.
+     * @see enum rte_dmadev_port_type
+     */
+    union {
+        /** PCIe access port parameters.
+         *
+         * The following model shows the SoC's PCIe module connecting
+         * to multiple PCIe hosts and multiple endpoints. The PCIe
+         * module has an integrated DMA controller.
+         *
+         * If the DMA wants to access the memory of host A, it can be
+         * initiated by PF1 in core0, or by VF0 of PF0 in core0.
         *
         * \code{.unparsed}
         *          System Bus
         *     |     ----------PCIe module----------
         *     |     Bus
         *     |     Interface
         *     |     -----        ------------------
         *     |     |   |        | PCIe Core0     |
         *     |     |   |        |                |        -----------
         *     |     |   |        |   PF-0 -- VF-0 |        | Host A  |
         *     |     |   |--------|        |- VF-1 |--------| Root    |
         *     |     |   |        |   PF-1         |        | Complex |
         *     |     |   |        |   PF-2         |        -----------
         *     |     |   |        ------------------
         *     |     |   |
         *     |     |   |        ------------------
         *     |     |   |        | PCIe Core1     |
         *     |     |   |        |                |        -----------
         *     |     |   |        |   PF-0 -- VF-0 |        | Host B  |
         *     |-----|   |--------|   PF-1 -- VF-0 |--------| Root    |
         *     |     |   |        |        |- VF-1 |        | Complex |
         *     |     |   |        |   PF-2         |        -----------
         *     |     |   |        ------------------
         *     |     |   |
         *     |     |   |        ------------------
         *     |     |DMA|        |                |        ------
         *     |     |   |        |                |--------| EP |
         *     |     |   |--------| PCIe Core2     |        ------
         *     |     |   |        |                |        ------
         *     |     |   |        |                |--------| EP |
         *     |     |   |        |                |        ------
         *     |     -----        ------------------
         *
         * \endcode
         *
         * @note If some fields cannot be supported by the
         * hardware/driver, then the driver ignores those fields.
         * Please check driver-specific documentation for limitations
         * and capabilities.
         */
        struct {
            uint64_t coreid : 4; /**< PCIe core id used. */
            uint64_t pfid : 8;   /**< PF id used. */
            uint64_t vfen : 1;   /**< VF enable bit. */
            uint64_t vfid : 16;  /**< VF id used. */
            uint64_t pasid : 20;
            /**< The pasid field in TLP packet. */
            uint64_t attr : 3;
            /**< The attributes field in TLP packet. */
            uint64_t ph : 2;
            /**< The processing hint field in TLP packet. */
            uint64_t st : 16;
            /**< The steering tag field in TLP packet. */
        } pcie;
    };
    uint64_t reserved[2]; /**< Reserved for future fields. */
};
+
+/**
+ * A structure used to configure a virtual DMA channel.
+ */
+struct rte_dmadev_vchan_conf {
+    enum rte_dma_direction direction;
+    /**< Transfer direction
+     * @see enum rte_dma_direction
+     */
+    uint16_t nb_desc;
+    /**< Number of descriptors for the virtual DMA channel */
+    struct rte_dmadev_port_param src_port;
+    /**< 1) Used to describe the device access port parameter in the
+     * device-to-memory transfer scenario.
+     * 2) Used to describe the source device access port parameter in the
+     * device-to-device transfer scenario.
+     * @see struct rte_dmadev_port_param
+     */
+    struct rte_dmadev_port_param dst_port;
+    /**< 1) Used to describe the device access port parameter in the
+     * memory-to-device transfer scenario.
+     * 2) Used to describe the destination device access port parameter in
+     * the device-to-device transfer scenario.
+     * @see struct rte_dmadev_port_param
+     */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Allocate and set up a virtual DMA channel.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel. The value must be in the range
+ *   [0, nb_vchans - 1] previously supplied to rte_dmadev_configure().
+ * @param conf
+ *   The virtual DMA channel configuration structure encapsulated into a
+ *   rte_dmadev_vchan_conf object.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_vchan_setup(uint16_t dev_id, uint16_t vchan,
+                       const struct rte_dmadev_vchan_conf *conf);
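+
+/*
+ * Editor's illustration (not part of the original patch): a vchan set up
+ * for device-to-memory transfers where the source device sits behind a
+ * PCIe port; all field values below are invented.
+ *
+ *    struct rte_dmadev_vchan_conf conf = {
+ *        .direction = RTE_DMA_DIR_DEV_TO_MEM,
+ *        .nb_desc = 128,
+ *        .src_port = {
+ *            .port_type = RTE_DMADEV_PORT_PCIE,
+ *            .pcie = { .coreid = 0, .pfid = 1, .vfen = 0 },
+ *        },
+ *    };
+ *
+ *    rte_dmadev_vchan_setup(dev_id, 0, &conf);
+ */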
+
+/**
+ * rte_dmadev_stats - running statistics.
+ */
+struct rte_dmadev_stats {
+    uint64_t submitted;
+    /**< Count of operations which were submitted to hardware. */
+    uint64_t completed;
+    /**< Count of operations which were completed. */
+    uint64_t errors;
+    /**< Count of operations which failed to complete. */
+};
+
+#define RTE_DMADEV_ALL_VCHAN    0xFFFFu
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Retrieve basic statistics of one or all virtual DMA channel(s).
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *   If equal to RTE_DMADEV_ALL_VCHAN, it means all channels.
+ * @param[out] stats
+ *   The basic statistics structure encapsulated into a rte_dmadev_stats
+ *   object.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_stats_get(uint16_t dev_id, uint16_t vchan,
+                     struct rte_dmadev_stats *stats);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Reset basic statistics of one or all virtual DMA channel(s).
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *   If equal to RTE_DMADEV_ALL_VCHAN, it means all channels.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_stats_reset(uint16_t dev_id, uint16_t vchan);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dump DMA device info.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param f
+ *   The file to write the output to.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_dump(uint16_t dev_id, FILE *f);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Trigger the dmadev self test.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @return
+ *   - 0: selftest successful.
+ *   - -ENOTSUP: if the device doesn't support selftest.
+ *   - other values < 0 on failure.
+ */
+__rte_experimental
+int
+rte_dmadev_selftest(uint16_t dev_id);
+
+/**
+ * rte_dma_status_code - DMA transfer result status code defines.
+ */
+enum rte_dma_status_code {
+    RTE_DMA_STATUS_SUCCESSFUL,
+    /**< The operation completed successfully. */
+    RTE_DMA_STATUS_USER_ABORT,
+    /**< The operation failed to complete due to an abort by the user.
+     * This is mainly used when processing dev_stop; the user could
+     * modify the descriptors (e.g. change one bit to tell hardware to
+     * abort this job), which allows outstanding requests to complete as
+     * much as possible and so reduces the time needed to stop the
+     * device.
+     */
+    RTE_DMA_STATUS_NOT_ATTEMPTED,
+    /**< The operation failed to complete due to the following scenario:
+     * the jobs in a particular batch are not attempted because they
+     * appeared after a fence where a previous job failed. In some HW
+     * implementations it is possible for jobs from later batches to be
+     * completed, though; so report the status of the not-attempted jobs
+     * before reporting those newer completed jobs.
+     */
+    RTE_DMA_STATUS_INVALID_SRC_ADDR,
+    /**< The operation failed to complete due to an invalid source
+     * address.
+     */
+    RTE_DMA_STATUS_INVALID_DST_ADDR,
+    /**< The operation failed to complete due to an invalid destination
+     * address.
+     */
+    RTE_DMA_STATUS_INVALID_ADDR,
+    /**< The operation failed to complete due to an invalid source or
+     * destination address; covers the case where only an address error
+     * is known, but not which address it was.
+     */
+    RTE_DMA_STATUS_INVALID_LENGTH,
+    /**< The operation failed to complete due to an invalid length. */
+    RTE_DMA_STATUS_INVALID_OPCODE,
+    /**< The operation failed to complete due to an invalid opcode.
+     * The DMA descriptor could have multiple formats, which are
+     * distinguished by the opcode field.
+     */
+    RTE_DMA_STATUS_BUS_ERROR,
+    /**< The operation failed to complete due to a bus error. */
+    RTE_DMA_STATUS_DATA_POISION,
+    /**< The operation failed to complete due to data poisoning. */
+    RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR,
+    /**< The operation failed to complete due to a descriptor read
+     * error.
+     */
+    RTE_DMA_STATUS_DEV_LINK_ERROR,
+    /**< The operation failed to complete due to a device link error.
+     * Used to indicate a link error in the memory-to-device/
+     * device-to-memory/device-to-device transfer scenarios.
+     */
+    RTE_DMA_STATUS_ERROR_UNKNOWN = 0x100,
+    /**< The operation failed to complete due to an unknown reason.
+     * The initial value is 256, which reserves space for future errors.
+     */
+};
+
+/**
+ * rte_dmadev_sge - holds a scatter-gather DMA operation request entry.
+ */
+struct rte_dmadev_sge {
+    rte_iova_t addr; /**< The DMA operation address. */
+    uint32_t length; /**< The DMA operation length. */
+};
+
+/* DMA flags to augment operation preparation. */
+#define RTE_DMA_OP_FLAG_FENCE    (1ull << 0)
+/**< DMA fence flag.
+ * It means the operation with this flag must be processed only after all
+ * previous operations are completed.
+ * If the DMA HW works in-order (i.e. it has a default fence between
+ * operations), this flag can be a NOP.
+ *
+ * @see rte_dmadev_copy()
+ * @see rte_dmadev_copy_sg()
+ * @see rte_dmadev_fill()
+ */
+
+#define RTE_DMA_OP_FLAG_SUBMIT   (1ull << 1)
+/**< DMA submit flag.
+ * It means the operation with this flag must issue a doorbell to the
+ * hardware after the job is enqueued.
+ */
+
+#define RTE_DMA_OP_FLAG_LLC      (1ull << 2)
+/**< Hint to write data to low-level cache.
+ * Used for performance optimization. This is just a hint, and there is no
+ * capability bit for it; the driver should not return an error if this flag
+ * is set.
+ */
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue a copy operation onto the virtual DMA channel.
+ *
+ * This queues up a copy operation to be performed by hardware. If the
+ * 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT, the doorbell is
+ * triggered to begin this operation; otherwise the doorbell is not
+ * triggered.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param src
+ *   The address of the source buffer.
+ * @param dst
+ *   The address of the destination buffer.
+ * @param length
+ *   The length of the data to be copied.
+ * @param flags
+ *   Flags for this operation.
+ *   @see RTE_DMA_OP_FLAG_*
+ *
+ * @return
+ *   - 0..UINT16_MAX: index of enqueued job.
+ *   - -ENOSPC: if no space left to enqueue.
+ *   - other values < 0 on failure.
+ */
+__rte_experimental
+int
+rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src,
+                rte_iova_t dst, uint32_t length, uint64_t flags);
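+
+/*
+ * Editor's illustration (not part of the original patch): batching two
+ * copies behind one doorbell; the buffer variables are placeholders.
+ *
+ *    rte_dmadev_copy(dev_id, vchan, src1, dst1, len1, 0);
+ *    rte_dmadev_copy(dev_id, vchan, src2, dst2, len2,
+ *                    RTE_DMA_OP_FLAG_SUBMIT);
+ *
+ * Equivalently, enqueue both with flags == 0 and then call
+ * rte_dmadev_submit(dev_id, vchan).
+ */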
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue a scatter-gather list copy operation onto the virtual DMA channel.
+ *
+ * This queues up a scatter-gather list copy operation to be performed by
+ * hardware. If the 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT, the
+ * doorbell is triggered to begin this operation; otherwise the doorbell is
+ * not triggered.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param src
+ *   The pointer to the source scatter-gather entry array.
+ * @param dst
+ *   The pointer to the destination scatter-gather entry array.
+ * @param nb_src
+ *   The number of source scatter-gather entries.
+ *   @see struct rte_dmadev_info::max_sges
+ * @param nb_dst
+ *   The number of destination scatter-gather entries.
+ *   @see struct rte_dmadev_info::max_sges
+ * @param flags
+ *   Flags for this operation.
+ *   @see RTE_DMA_OP_FLAG_*
+ *
+ * @return
+ *   - 0..UINT16_MAX: index of enqueued job.
+ *   - -ENOSPC: if no space left to enqueue.
+ *   - other values < 0 on failure.
+ */
+__rte_experimental
+int
+rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, struct rte_dmadev_sge *src,
+                   struct rte_dmadev_sge *dst, uint16_t nb_src,
+                   uint16_t nb_dst, uint64_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue a fill operation onto the virtual DMA channel.
+ *
+ * This queues up a fill operation to be performed by hardware. If the
+ * 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT, the doorbell is
+ * triggered to begin this operation; otherwise the doorbell is not
+ * triggered.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param pattern
+ *   The pattern to populate the destination buffer with.
+ * @param dst
+ *   The address of the destination buffer.
+ * @param length
+ *   The length of the destination buffer.
+ * @param flags
+ *   Flags for this operation.
+ *   @see RTE_DMA_OP_FLAG_*
+ *
+ * @return
+ *   - 0..UINT16_MAX: index of enqueued job.
+ *   - -ENOSPC: if no space left to enqueue.
+ *   - other values < 0 on failure.
+ */
+__rte_experimental
+int
+rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern,
+                rte_iova_t dst, uint32_t length, uint64_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Trigger hardware to begin performing enqueued operations.
+ *
+ * This API is used to write the "doorbell" to the hardware to trigger it
+ * to begin the operations previously enqueued by rte_dmadev_copy/fill().
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ *
+ * @return
+ *   0 on success. Otherwise negative value is returned.
+ */
+__rte_experimental
+int
+rte_dmadev_submit(uint16_t dev_id, uint16_t vchan);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Returns the number of operations that have been successfully completed.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param nb_cpls
+ *   The maximum number of completed operations that can be processed.
+ * @param[out] last_idx
+ *   The last completed operation's index.
+ *   If not required, NULL can be passed in.
+ * @param[out] has_error
+ *   Indicates if there are transfer errors.
+ *   If not required, NULL can be passed in.
+ *
+ * @return
+ *   The number of operations that successfully completed. This return value
+ *   must be less than or equal to the value of nb_cpls.
+ */
+__rte_experimental
+uint16_t
+rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls,
+                     uint16_t *last_idx, bool *has_error);
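+
+/*
+ * Editor's illustration (not part of the original patch): draining up to 32
+ * completions, then querying per-operation status codes once an error is
+ * reported; the batch size of 32 is arbitrary.
+ *
+ *    uint16_t last_idx, n;
+ *    bool has_error = false;
+ *    enum rte_dma_status_code status[32];
+ *
+ *    n = rte_dmadev_completed(dev_id, vchan, 32, &last_idx, &has_error);
+ *    // process the n successful operations up to last_idx ...
+ *    if (has_error)
+ *        n = rte_dmadev_completed_status(dev_id, vchan, 32, &last_idx,
+ *                                        status);
+ */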
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Returns the number of operations that have been completed; the completed
+ * operations may have succeeded or failed.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param vchan
+ *   The identifier of virtual DMA channel.
+ * @param nb_cpls
+ *   Indicates the size of the status array.
+ * @param[out] last_idx
+ *   The last completed operation's index.
+ *   If not required, NULL can be passed in.
+ * @param[out] status
+ *   A pointer to an array of length 'nb_cpls' that holds the completion
+ *   status code of each operation.
+ *   @see enum rte_dma_status_code
+ *
+ * @return
+ *   The number of operations that completed. This return value must be less
+ *   than or equal to the value of nb_cpls.
+ *   If this number is greater than zero (assuming n), then n values in the
+ *   status array are also set.
+ */
+__rte_experimental
+uint16_t
+rte_dmadev_completed_status(uint16_t dev_id, uint16_t vchan,
+                            const uint16_t nb_cpls, uint16_t *last_idx,
+                            enum rte_dma_status_code *status);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_DMADEV_H_ */
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
new file mode 100644
index 0000000..02fffe3
--- /dev/null
+++ b/lib/dmadev/version.map
@@ -0,0 +1,25 @@
+EXPERIMENTAL {
+	global:
+
+	rte_dmadev_close;
+	rte_dmadev_completed;
+	rte_dmadev_completed_status;
+	rte_dmadev_configure;
+	rte_dmadev_copy;
+	rte_dmadev_copy_sg;
+	rte_dmadev_count;
+	rte_dmadev_dump;
+	rte_dmadev_fill;
+	rte_dmadev_get_dev_id;
+	rte_dmadev_info_get;
+	rte_dmadev_is_valid_dev;
+	rte_dmadev_selftest;
+	rte_dmadev_start;
+	rte_dmadev_stats_get;
+	rte_dmadev_stats_reset;
+	rte_dmadev_stop;
+	rte_dmadev_submit;
+	rte_dmadev_vchan_setup;
+
+	local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 1673ca4..a542c23 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -44,6 +44,7 @@ libraries = [
         'power',
         'pdump',
         'rawdev',
+        'dmadev',
         'regexdev',
         'rib',
         'reorder',

From patchwork Mon Aug 23 03:31:27 2021
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 97188
X-Patchwork-Delegate: thomas@monjalon.net
From: Chengwen Feng
Date: Mon, 23 Aug 2021 11:31:27 +0800
Message-ID: <1629689494-55091-3-git-send-email-fengchengwen@huawei.com>
In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v16 2/9] dmadev: introduce DMA device library internal header

This patch introduces the DMA device library internal header, which contains
internal data types that are used by the DMA devices in order to expose
their ops to the class.

Signed-off-by: Chengwen Feng
Acked-by: Bruce Richardson
Acked-by: Morten Brørup
---
 lib/dmadev/meson.build       |   1 +
 lib/dmadev/rte_dmadev_core.h | 180 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 181 insertions(+)
 create mode 100644 lib/dmadev/rte_dmadev_core.h

diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
index 6d5bd85..f421ec1 100644
--- a/lib/dmadev/meson.build
+++ b/lib/dmadev/meson.build
@@ -2,3 +2,4 @@
 # Copyright(c) 2021 HiSilicon Limited.
 
 headers = files('rte_dmadev.h')
+indirect_headers += files('rte_dmadev_core.h')
diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h
new file mode 100644
index 0000000..ff7b70a
--- /dev/null
+++ b/lib/dmadev/rte_dmadev_core.h
@@ -0,0 +1,180 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ * Copyright(c) 2021 Intel Corporation.
+ */
+
+#ifndef _RTE_DMADEV_CORE_H_
+#define _RTE_DMADEV_CORE_H_
+
+/**
+ * @file
+ *
+ * RTE DMA Device internal header.
+ *
+ * This header contains internal data types that are used by the DMA devices
+ * in order to expose their ops to the class.
+ *
+ * Applications should not use these APIs directly.
+ */
+
+struct rte_dmadev;
+
+typedef int (*rte_dmadev_info_get_t)(const struct rte_dmadev *dev,
+                                     struct rte_dmadev_info *dev_info,
+                                     uint32_t info_sz);
+/**< @internal Used to get device information of a device. */
+
+typedef int (*rte_dmadev_configure_t)(struct rte_dmadev *dev,
+                                      const struct rte_dmadev_conf *dev_conf);
+/**< @internal Used to configure a device. */
+
+typedef int (*rte_dmadev_start_t)(struct rte_dmadev *dev);
+/**< @internal Used to start a configured device. */
+
+typedef int (*rte_dmadev_stop_t)(struct rte_dmadev *dev);
+/**< @internal Used to stop a configured device. */
+
+typedef int (*rte_dmadev_close_t)(struct rte_dmadev *dev);
+/**< @internal Used to close a configured device. */
+
+typedef int (*rte_dmadev_vchan_setup_t)(struct rte_dmadev *dev, uint16_t vchan,
+                                const struct rte_dmadev_vchan_conf *conf);
+/**< @internal Used to allocate and set up a virtual DMA channel. */
+
+typedef int (*rte_dmadev_stats_get_t)(const struct rte_dmadev *dev,
+                        uint16_t vchan, struct rte_dmadev_stats *stats,
+                        uint32_t stats_sz);
+/**< @internal Used to retrieve basic statistics. */
+
+typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev,
+                                        uint16_t vchan);
+/**< @internal Used to reset basic statistics. */
+
+typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f);
+/**< @internal Used to dump internal information. */
+
+typedef int (*rte_dmadev_selftest_t)(uint16_t dev_id);
+/**< @internal Used to start a dmadev selftest. */
+
+typedef int (*rte_dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vchan,
+                                 rte_iova_t src, rte_iova_t dst,
+                                 uint32_t length, uint64_t flags);
+/**< @internal Used to enqueue a copy operation. */
+
+typedef int (*rte_dmadev_copy_sg_t)(struct rte_dmadev *dev, uint16_t vchan,
+                                    const struct rte_dmadev_sge *src,
+                                    const struct rte_dmadev_sge *dst,
+                                    uint16_t nb_src, uint16_t nb_dst,
+                                    uint64_t flags);
+/**< @internal Used to enqueue a scatter-gather list copy operation. */
+
+typedef int (*rte_dmadev_fill_t)(struct rte_dmadev *dev, uint16_t vchan,
+                                 uint64_t pattern, rte_iova_t dst,
+                                 uint32_t length, uint64_t flags);
+/**< @internal Used to enqueue a fill operation. */
+
+typedef int (*rte_dmadev_submit_t)(struct rte_dmadev *dev, uint16_t vchan);
+/**< @internal Used to trigger hardware to begin working. */
+
+typedef uint16_t (*rte_dmadev_completed_t)(struct rte_dmadev *dev,
+                        uint16_t vchan, const uint16_t nb_cpls,
+                        uint16_t *last_idx, bool *has_error);
+/**< @internal Used to return the number of successfully completed
+ * operations.
+ */
+
+typedef uint16_t (*rte_dmadev_completed_status_t)(struct rte_dmadev *dev,
+                        uint16_t vchan, const uint16_t nb_cpls,
+                        uint16_t *last_idx, enum rte_dma_status_code *status);
+/**< @internal Used to return the number of completed operations. */
+
+/**
+ * Possible states of a DMA device.
+ */
+enum rte_dmadev_state {
+    RTE_DMADEV_UNUSED = 0,
+    /**< Device is unused before being probed. */
+    RTE_DMADEV_ATTACHED,
+    /**< Device is attached when allocated in probing. */
+};
+
+/**
+ * DMA device operations function pointer table.
+ */
+struct rte_dmadev_ops {
+    rte_dmadev_info_get_t dev_info_get;
+    rte_dmadev_configure_t dev_configure;
+    rte_dmadev_start_t dev_start;
+    rte_dmadev_stop_t dev_stop;
+    rte_dmadev_close_t dev_close;
+    rte_dmadev_vchan_setup_t vchan_setup;
+    rte_dmadev_stats_get_t stats_get;
+    rte_dmadev_stats_reset_t stats_reset;
+    rte_dmadev_dump_t dev_dump;
+    rte_dmadev_selftest_t dev_selftest;
+};
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each DMA device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_dmadev_data {
+    void *dev_private;
+    /**< PMD-specific private data.
+     * This is a copy of the 'dev_private' field in the 'struct
+     * rte_dmadev' from the primary process; it is used by a secondary
+     * process to get dev_private information.
+     */
+    uint16_t dev_id; /**< Device [external] identifier. */
+    char dev_name[RTE_DMADEV_NAME_MAX_LEN]; /**< Unique identifier name */
+    struct rte_dmadev_conf dev_conf; /**< DMA device configuration. */
+    uint8_t dev_started : 1; /**< Device state: STARTED(1)/STOPPED(0). */
+    uint64_t reserved[2]; /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The generic data structure associated with each DMA device.
+ *
+ * The dataplane APIs are located at the beginning of the structure, along
+ * with the pointer to where all the data elements for the particular device
+ * are stored in shared memory. This split scheme allows the function
+ * pointers and driver data to be per-process, while the actual
+ * configuration data for the device is shared.
+ * The 'dev_private' field is placed in the first cache line to optimize
+ * performance, because the PMD driver mainly depends on this field.
+ */
+struct rte_dmadev {
+    rte_dmadev_copy_t copy;
+    rte_dmadev_copy_sg_t copy_sg;
+    rte_dmadev_fill_t fill;
+    rte_dmadev_submit_t submit;
+    rte_dmadev_completed_t completed;
+    rte_dmadev_completed_status_t completed_status;
+    void *reserved_ptr; /**< Reserved for future IO function. */
+    void *dev_private;
+    /**< PMD-specific private data.
+     *
+     * - In the primary process, after the dmadev is allocated by
+     *   rte_dmadev_pmd_allocate(), the PCI/SoC device probing should
+     *   initialize this field, and copy its value to the 'dev_private'
+     *   field of 'struct rte_dmadev_data', which is pointed to by the
+     *   'data' field.
+     *
+     * - In a secondary process, the dmadev framework initializes this
+     *   field by copying from the 'dev_private' field of
+     *   'struct rte_dmadev_data', which was initialized by the primary
+     *   process.
+     *
+     * @note It is the primary process's responsibility to deinitialize
+     * this field after invoking rte_dmadev_pmd_release() in the PCI/SoC
+     * device removal stage.
+     */
+    struct rte_dmadev_data *data; /**< Pointer to device data. */
+    const struct rte_dmadev_ops *dev_ops; /**< Functions exported by PMD. */
+    struct rte_device *device;
+    /**< Device info which is supplied during device initialization. */
+    enum rte_dmadev_state state; /**< Flag indicating the device state. */
+    uint64_t reserved[2]; /**< Reserved for future fields. */
+} __rte_cache_aligned;
+
+#endif /* _RTE_DMADEV_CORE_H_ */

From patchwork Mon Aug 23 03:31:28 2021
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 97186
X-Patchwork-Delegate: thomas@monjalon.net
From: Chengwen Feng
Date: Mon, 23 Aug 2021 11:31:28 +0800
Message-ID: <1629689494-55091-4-git-send-email-fengchengwen@huawei.com>
In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v16 3/9] dmadev: introduce DMA device library PMD header

This patch introduces the DMA device library PMD header, which contains the
driver-facing APIs for a DMA device.
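As an illustration only (the driver name, ops table, and private data below
are hypothetical, not part of this patch), a PMD's probe path would use
these APIs roughly as follows:

    static int
    my_dma_probe(struct rte_device *rte_dev)
    {
        struct rte_dmadev *dev;

        /* reserve a dmadev slot under a driver-chosen name */
        dev = rte_dmadev_pmd_allocate("my_dma0");
        if (dev == NULL)
            return -ENOMEM;

        dev->device = rte_dev;
        dev->dev_ops = &my_dma_ops;   /* hypothetical ops table */
        /* dev->dev_private and the dataplane function pointers
         * (copy, submit, completed, ...) are also set up here. */
        return 0;
    }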
Signed-off-by: Chengwen Feng
Acked-by: Bruce Richardson
Acked-by: Morten Brørup
---
 lib/dmadev/meson.build      |  1 +
 lib/dmadev/rte_dmadev.h     |  2 ++
 lib/dmadev/rte_dmadev_pmd.h | 72 +++++++++++++++++++++++++++++++++++++++++++++
 lib/dmadev/version.map      | 10 +++++++
 4 files changed, 85 insertions(+)
 create mode 100644 lib/dmadev/rte_dmadev_pmd.h

diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
index f421ec1..833baf7 100644
--- a/lib/dmadev/meson.build
+++ b/lib/dmadev/meson.build
@@ -3,3 +3,4 @@
 headers = files('rte_dmadev.h')
 indirect_headers += files('rte_dmadev_core.h')
+driver_sdk_headers += files('rte_dmadev_pmd.h')
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index a008ee0..0744afa 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -736,6 +736,8 @@ struct rte_dmadev_sge {
 	uint32_t length; /**< The DMA operation length. */
 };
 
+#include "rte_dmadev_core.h"
+
 /* DMA flags to augment operation preparation. */
 #define RTE_DMA_OP_FLAG_FENCE	(1ull << 0)
 /**< DMA fence flag.
diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h
new file mode 100644
index 0000000..45141f9
--- /dev/null
+++ b/lib/dmadev/rte_dmadev_pmd.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ */
+
+#ifndef _RTE_DMADEV_PMD_H_
+#define _RTE_DMADEV_PMD_H_
+
+/**
+ * @file
+ *
+ * RTE DMA Device PMD APIs
+ *
+ * Driver-facing APIs for a DMA device. These are not to be called directly
+ * by any application.
+ */
+
+#include "rte_dmadev.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @internal
+ * Allocates a new dmadev slot for a DMA device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param name
+ *   DMA device name.
+ *
+ * @return
+ *   A pointer to the DMA device slot in case of success,
+ *   NULL otherwise.
+ */
+__rte_internal
+struct rte_dmadev *
+rte_dmadev_pmd_allocate(const char *name);
+
+/**
+ * @internal
+ * Release the specified dmadev.
+ *
+ * @param dev
+ *   Device to be released.
+ *
+ * @return
+ *   - 0 on success, negative on error.
+ */
+__rte_internal
+int
+rte_dmadev_pmd_release(struct rte_dmadev *dev);
+
+/**
+ * @internal
+ * Return the DMA device based on the device name.
+ *
+ * @param name
+ *   DMA device name.
+ *
+ * @return
+ *   A pointer to the DMA device slot in case of success,
+ *   NULL otherwise.
+ */
+__rte_internal
+struct rte_dmadev *
+rte_dmadev_get_device_by_name(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_DMADEV_PMD_H_ */
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 02fffe3..408b93c 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -23,3 +23,13 @@ EXPERIMENTAL {
 
 	local: *;
 };
+
+INTERNAL {
+	global:
+
+	rte_dmadev_get_device_by_name;
+	rte_dmadev_pmd_allocate;
+	rte_dmadev_pmd_release;
+
+	local: *;
+};

From patchwork Mon Aug 23 03:31:29 2021
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 97183
X-Patchwork-Delegate: thomas@monjalon.net
From: Chengwen Feng
Date: Mon, 23 Aug 2021 11:31:29 +0800
Message-ID: <1629689494-55091-5-git-send-email-fengchengwen@huawei.com>
In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
 <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v16 4/9] dmadev: introduce DMA device library implementation

This patch introduces the DMA device library implementation, which includes
configuration of, and I/O with, the DMA devices.
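Editor's note, as a sketch only (not the verbatim patch code): the dataplane
wrappers this patch adds to rte_dmadev.h would follow the usual DPDK
fast-path pattern of dispatching through the per-device function pointers
kept at the front of struct rte_dmadev, for example:

    static inline int
    rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src,
                    rte_iova_t dst, uint32_t length, uint64_t flags)
    {
        struct rte_dmadev *dev = &rte_dmadevices[dev_id];

        /* dispatch through the fast-path pointer, avoiding an
         * ops-table indirection on the hot path */
        return (*dev->copy)(dev, vchan, src, dst, length, flags);
    }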
Signed-off-by: Chengwen Feng
Acked-by: Bruce Richardson
Acked-by: Morten Brørup
---
 config/rte_config.h          |   3 +
 lib/dmadev/meson.build       |   1 +
 lib/dmadev/rte_dmadev.c      | 567 +++++++++++++++++++++++++++++++++++++++
 lib/dmadev/rte_dmadev.h      | 118 ++++++++-
 lib/dmadev/rte_dmadev_core.h |   2 +
 lib/dmadev/version.map       |   1 +
 6 files changed, 680 insertions(+), 12 deletions(-)
 create mode 100644 lib/dmadev/rte_dmadev.c

diff --git a/config/rte_config.h b/config/rte_config.h
index 590903c..331a431 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -81,6 +81,9 @@
 /* rawdev defines */
 #define RTE_RAWDEV_MAX_DEVS 64
 
+/* dmadev defines */
+#define RTE_DMADEV_MAX_DEVS 64
+
 /* ip_fragmentation defines */
 #define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4
 #undef RTE_LIBRTE_IP_FRAG_TBL_STAT
diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
index 833baf7..d2fc85e 100644
--- a/lib/dmadev/meson.build
+++ b/lib/dmadev/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2021 HiSilicon Limited.
 
+sources = files('rte_dmadev.c')
 headers = files('rte_dmadev.h')
 indirect_headers += files('rte_dmadev_core.h')
 driver_sdk_headers += files('rte_dmadev_pmd.h')
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
new file mode 100644
index 0000000..80be485
--- /dev/null
+++ b/lib/dmadev/rte_dmadev.c
@@ -0,0 +1,567 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ * Copyright(c) 2021 Intel Corporation.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "rte_dmadev.h"
+#include "rte_dmadev_pmd.h"
+
+struct rte_dmadev rte_dmadevices[RTE_DMADEV_MAX_DEVS];
+
+static const char *mz_rte_dmadev_data = "rte_dmadev_data";
+/* Shared memory between primary and secondary processes. */
+static struct {
+    struct rte_dmadev_data data[RTE_DMADEV_MAX_DEVS];
+} *dmadev_shared_data;
+
+RTE_LOG_REGISTER_DEFAULT(rte_dmadev_logtype, INFO);
+#define RTE_DMADEV_LOG(level, ...) \
+    rte_log(RTE_LOG_ ## level, rte_dmadev_logtype, "" __VA_ARGS__)
+
+/* Macros to check for valid device id */
+#define RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \
+    if (!rte_dmadev_is_valid_dev(dev_id)) { \
+        RTE_DMADEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \
+        return retval; \
+    } \
+} while (0)
+
+static int
+dmadev_check_name(const char *name)
+{
+    size_t name_len;
+
+    if (name == NULL) {
+        RTE_DMADEV_LOG(ERR, "Name can't be NULL\n");
+        return -EINVAL;
+    }
+
+    name_len = strnlen(name, RTE_DMADEV_NAME_MAX_LEN);
+    if (name_len == 0) {
+        RTE_DMADEV_LOG(ERR, "Zero length DMA device name\n");
+        return -EINVAL;
+    }
+    if (name_len >= RTE_DMADEV_NAME_MAX_LEN) {
+        RTE_DMADEV_LOG(ERR, "DMA device name is too long\n");
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static uint16_t
+dmadev_find_free_dev(void)
+{
+    uint16_t i;
+
+    for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {
+        if (dmadev_shared_data->data[i].dev_name[0] == '\0')
+            return i;
+    }
+
+    return RTE_DMADEV_MAX_DEVS;
+}
+
+static struct rte_dmadev *
+dmadev_find(const char *name)
+{
+    uint16_t i;
+
+    for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {
+        if ((rte_dmadevices[i].state == RTE_DMADEV_ATTACHED) &&
+            (!strcmp(name, rte_dmadevices[i].data->dev_name)))
+            return &rte_dmadevices[i];
+    }
+
+    return NULL;
+}
+
+static int
+dmadev_shared_data_prepare(void)
+{
+    const struct rte_memzone *mz;
+
+    if (dmadev_shared_data == NULL) {
+        if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+            /* Allocate port data and ownership shared memory. */
+            mz = rte_memzone_reserve(mz_rte_dmadev_data,
+                         sizeof(*dmadev_shared_data),
+                         rte_socket_id(), 0);
+        } else
+            mz = rte_memzone_lookup(mz_rte_dmadev_data);
+        if (mz == NULL)
+            return -ENOMEM;
+
+        dmadev_shared_data = mz->addr;
+        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+            memset(dmadev_shared_data->data, 0,
+                   sizeof(dmadev_shared_data->data));
+    }
+
+    return 0;
+}
+
+static struct rte_dmadev *
+dmadev_allocate(const char *name)
+{
+    struct rte_dmadev *dev;
+    uint16_t dev_id;
+
+    dev = dmadev_find(name);
+    if (dev != NULL) {
+        RTE_DMADEV_LOG(ERR, "DMA device already allocated\n");
+        return NULL;
+    }
+
+    if (dmadev_shared_data_prepare() != 0) {
+        RTE_DMADEV_LOG(ERR, "Cannot allocate DMA shared data\n");
+        return NULL;
+    }
+
+    dev_id = dmadev_find_free_dev();
+    if (dev_id == RTE_DMADEV_MAX_DEVS) {
+        RTE_DMADEV_LOG(ERR, "Reached maximum number of DMA devices\n");
+        return NULL;
+    }
+
+    dev = &rte_dmadevices[dev_id];
+    dev->data = &dmadev_shared_data->data[dev_id];
+    dev->data->dev_id = dev_id;
+    rte_strscpy(dev->data->dev_name, name, sizeof(dev->data->dev_name));
+
+    return dev;
+}
+
+static struct rte_dmadev *
+dmadev_attach_secondary(const char *name)
+{
+    struct rte_dmadev *dev;
+    uint16_t i;
+
+    if (dmadev_shared_data_prepare() != 0) {
+        RTE_DMADEV_LOG(ERR, "Cannot allocate DMA shared data\n");
+        return NULL;
+    }
+
+    for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {
+        if (!strcmp(dmadev_shared_data->data[i].dev_name, name))
+            break;
+    }
+    if (i == RTE_DMADEV_MAX_DEVS) {
+        RTE_DMADEV_LOG(ERR,
+            "Device %s is not driven by the primary process\n",
+            name);
+        return NULL;
+    }
+
+    dev = &rte_dmadevices[i];
+    dev->data = &dmadev_shared_data->data[i];
+    dev->dev_private = dev->data->dev_private;
+
+    return dev;
+}
+
+struct rte_dmadev *
+rte_dmadev_pmd_allocate(const char *name)
+{
+    struct rte_dmadev *dev;
+
+    if (dmadev_check_name(name) != 0)
+        return NULL;
+
+    if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+        dev = dmadev_allocate(name);
+    else
+        dev = dmadev_attach_secondary(name);
+
+    if (dev == NULL)
return NULL; + dev->state = RTE_DMADEV_ATTACHED; + + return dev; +} + +int +rte_dmadev_pmd_release(struct rte_dmadev *dev) +{ + void *dev_private_tmp; + + if (dev == NULL) + return -EINVAL; + + if (dev->state == RTE_DMADEV_UNUSED) + return 0; + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + memset(dev->data, 0, sizeof(struct rte_dmadev_data)); + + dev_private_tmp = dev->dev_private; + memset(dev, 0, sizeof(struct rte_dmadev)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + dev->dev_private = dev_private_tmp; + dev->state = RTE_DMADEV_UNUSED; + + return 0; +} + +struct rte_dmadev * +rte_dmadev_get_device_by_name(const char *name) +{ + if (dmadev_check_name(name) != 0) + return NULL; + return dmadev_find(name); +} + +int +rte_dmadev_get_dev_id(const char *name) +{ + struct rte_dmadev *dev = rte_dmadev_get_device_by_name(name); + if (dev != NULL) + return dev->data->dev_id; + return -EINVAL; +} + +bool +rte_dmadev_is_valid_dev(uint16_t dev_id) +{ + return (dev_id < RTE_DMADEV_MAX_DEVS) && + rte_dmadevices[dev_id].state == RTE_DMADEV_ATTACHED; +} + +uint16_t +rte_dmadev_count(void) +{ + uint16_t count = 0; + uint16_t i; + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) { + if (rte_dmadevices[i].state == RTE_DMADEV_ATTACHED) + count++; + } + + return count; +} + +int +rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info) +{ + const struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (dev_info == NULL) + return -EINVAL; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_info_get, -ENOTSUP); + memset(dev_info, 0, sizeof(struct rte_dmadev_info)); + ret = (*dev->dev_ops->dev_info_get)(dev, dev_info, + sizeof(struct rte_dmadev_info)); + if (ret != 0) + return ret; + + dev_info->device = dev->device; + dev_info->nb_vchans = dev->data->dev_conf.nb_vchans; + + return 0; +} + +int +rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + struct rte_dmadev_info info; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (dev_conf == NULL) + return -EINVAL; + + if (dev->data->dev_started != 0) { + RTE_DMADEV_LOG(ERR, + "Device %u must be stopped to allow configuration\n", + dev_id); + return -EBUSY; + } + + ret = rte_dmadev_info_get(dev_id, &info); + if (ret != 0) { + RTE_DMADEV_LOG(ERR, "Device %u get device info fail\n", dev_id); + return -EINVAL; + } + if (dev_conf->nb_vchans == 0) { + RTE_DMADEV_LOG(ERR, + "Device %u configure zero vchans\n", dev_id); + return -EINVAL; + } + if (dev_conf->nb_vchans > info.max_vchans) { + RTE_DMADEV_LOG(ERR, + "Device %u configure too many vchans\n", dev_id); + return -EINVAL; + } + if (dev_conf->enable_silent && + !(info.dev_capa & RTE_DMADEV_CAPA_SILENT)) { + RTE_DMADEV_LOG(ERR, "Device %u don't support silent\n", dev_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP); + ret = (*dev->dev_ops->dev_configure)(dev, dev_conf); + if (ret == 0) + memcpy(&dev->data->dev_conf, dev_conf, sizeof(*dev_conf)); + + return ret; +} + +int +rte_dmadev_start(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + if (dev->data->dev_started != 0) { + RTE_DMADEV_LOG(WARNING, "Device %u already started\n", dev_id); + return 0; + } + + if (dev->dev_ops->dev_start == NULL) + goto mark_started; + + ret = (*dev->dev_ops->dev_start)(dev); + if (ret != 0) + return ret; + 
+mark_started: + dev->data->dev_started = 1; + return 0; +} + +int +rte_dmadev_stop(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + if (dev->data->dev_started == 0) { + RTE_DMADEV_LOG(WARNING, "Device %u already stopped\n", dev_id); + return 0; + } + + if (dev->dev_ops->dev_stop == NULL) + goto mark_stopped; + + ret = (*dev->dev_ops->dev_stop)(dev); + if (ret != 0) + return ret; + +mark_stopped: + dev->data->dev_started = 0; + return 0; +} + +int +rte_dmadev_close(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + /* Device must be stopped before it can be closed */ + if (dev->data->dev_started == 1) { + RTE_DMADEV_LOG(ERR, + "Device %u must be stopped before closing\n", dev_id); + return -EBUSY; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP); + return (*dev->dev_ops->dev_close)(dev); +} + +int +rte_dmadev_vchan_setup(uint16_t dev_id, uint16_t vchan, + const struct rte_dmadev_vchan_conf *conf) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + struct rte_dmadev_info info; + bool src_is_dev, dst_is_dev; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (conf == NULL) + return -EINVAL; + + if (dev->data->dev_started != 0) { + RTE_DMADEV_LOG(ERR, + "Device %u must be stopped to allow configuration\n", + dev_id); + return -EBUSY; + } + + ret = rte_dmadev_info_get(dev_id, &info); + if (ret != 0) { + RTE_DMADEV_LOG(ERR, "Device %u get device info fail\n", dev_id); + return -EINVAL; + } + if (vchan >= info.nb_vchans) { + RTE_DMADEV_LOG(ERR, "Device %u vchan out range!\n", dev_id); + return -EINVAL; + } + if (conf->direction != RTE_DMA_DIR_MEM_TO_MEM && + conf->direction != RTE_DMA_DIR_MEM_TO_DEV && + conf->direction != RTE_DMA_DIR_DEV_TO_MEM && + conf->direction != RTE_DMA_DIR_DEV_TO_DEV) { + RTE_DMADEV_LOG(ERR, "Device %u direction invalid!\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_MEM_TO_MEM && + !(info.dev_capa & RTE_DMADEV_CAPA_MEM_TO_MEM)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support mem2mem transfer\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_MEM_TO_DEV && + !(info.dev_capa & RTE_DMADEV_CAPA_MEM_TO_DEV)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support mem2dev transfer\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_DEV_TO_MEM && + !(info.dev_capa & RTE_DMADEV_CAPA_DEV_TO_MEM)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support dev2mem transfer\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_DEV_TO_DEV && + !(info.dev_capa & RTE_DMADEV_CAPA_DEV_TO_DEV)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support dev2dev transfer\n", dev_id); + return -EINVAL; + } + if (conf->nb_desc < info.min_desc || conf->nb_desc > info.max_desc) { + RTE_DMADEV_LOG(ERR, + "Device %u number of descriptors invalid\n", dev_id); + return -EINVAL; + } + src_is_dev = conf->direction == RTE_DMA_DIR_DEV_TO_MEM || + conf->direction == RTE_DMA_DIR_DEV_TO_DEV; + if ((conf->src_port.port_type == RTE_DMADEV_PORT_NONE && src_is_dev) || + (conf->src_port.port_type != RTE_DMADEV_PORT_NONE && !src_is_dev)) { + RTE_DMADEV_LOG(ERR, + "Device %u source port type invalid\n", dev_id); + return -EINVAL; + } + dst_is_dev = conf->direction == RTE_DMA_DIR_MEM_TO_DEV || + conf->direction == RTE_DMA_DIR_DEV_TO_DEV; + if ((conf->dst_port.port_type == RTE_DMADEV_PORT_NONE && dst_is_dev) || + 
(conf->dst_port.port_type != RTE_DMADEV_PORT_NONE && !dst_is_dev)) { + RTE_DMADEV_LOG(ERR, + "Device %u destination port type invalid\n", dev_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_setup, -ENOTSUP); + return (*dev->dev_ops->vchan_setup)(dev, vchan, conf); +} + +int +rte_dmadev_stats_get(uint16_t dev_id, uint16_t vchan, + struct rte_dmadev_stats *stats) +{ + const struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (stats == NULL) + return -EINVAL; + if (vchan >= dev->data->dev_conf.nb_vchans && + vchan != RTE_DMADEV_ALL_VCHAN) { + RTE_DMADEV_LOG(ERR, + "Device %u vchan %u out of range\n", dev_id, vchan); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP); + memset(stats, 0, sizeof(struct rte_dmadev_stats)); + return (*dev->dev_ops->stats_get)(dev, vchan, stats, + sizeof(struct rte_dmadev_stats)); +} + +int +rte_dmadev_stats_reset(uint16_t dev_id, uint16_t vchan) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (vchan >= dev->data->dev_conf.nb_vchans && + vchan != RTE_DMADEV_ALL_VCHAN) { + RTE_DMADEV_LOG(ERR, + "Device %u vchan %u out of range\n", dev_id, vchan); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_reset, -ENOTSUP); + return (*dev->dev_ops->stats_reset)(dev, vchan); +} + +int +rte_dmadev_dump(uint16_t dev_id, FILE *f) +{ + const struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + struct rte_dmadev_info info; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (f == NULL) + return -EINVAL; + + ret = rte_dmadev_info_get(dev_id, &info); + if (ret != 0) { + RTE_DMADEV_LOG(ERR, "Device %u get device info fail\n", dev_id); + return -EINVAL; + } + + fprintf(f, "DMA Dev %u, '%s' [%s]\n", + dev->data->dev_id, + dev->data->dev_name, + dev->data->dev_started ? "started" : "stopped"); + fprintf(f, " dev_capa: 0x%" PRIx64 "\n", info.dev_capa); + fprintf(f, " max_vchans_supported: %u\n", info.max_vchans); + fprintf(f, " nb_vchans_configured: %u\n", info.nb_vchans); + fprintf(f, " silent_mode: %s\n", + dev->data->dev_conf.enable_silent ? "on" : "off"); + + if (dev->dev_ops->dev_dump != NULL) + return (*dev->dev_ops->dev_dump)(dev, f); + + return 0; +} + +int +rte_dmadev_selftest(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_selftest, -ENOTSUP); + return (*dev->dev_ops->dev_selftest)(dev_id); +} diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 0744afa..cf9e4bf 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -793,9 +793,21 @@ struct rte_dmadev_sge { * - other values < 0 on failure. */ __rte_experimental -int +static inline int rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, - uint32_t length, uint64_t flags); + uint32_t length, uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.nb_vchans || length == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy, -ENOTSUP); +#endif + + return (*dev->copy)(dev, vchan, src, dst, length, flags); +} /** * @warning @@ -831,10 +843,23 @@ rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, * - other values < 0 on failure. 
*/ __rte_experimental -int +static inline int rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, struct rte_dmadev_sge *src, struct rte_dmadev_sge *dst, uint16_t nb_src, uint16_t nb_dst, - uint64_t flags); + uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.nb_vchans || + src == NULL || dst == NULL || nb_src == 0 || nb_dst == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy_sg, -ENOTSUP); +#endif + + return (*dev->copy_sg)(dev, vchan, src, dst, nb_src, nb_dst, flags); +} /** * @warning @@ -866,9 +891,21 @@ rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, struct rte_dmadev_sge *src, * - other values < 0 on failure. */ __rte_experimental -int +static inline int rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern, - rte_iova_t dst, uint32_t length, uint64_t flags); + rte_iova_t dst, uint32_t length, uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.nb_vchans || length == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP); +#endif + + return (*dev->fill)(dev, vchan, pattern, dst, length, flags); +} /** * @warning @@ -888,8 +925,20 @@ rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern, * 0 on success. Otherwise negative value is returned. */ __rte_experimental -int -rte_dmadev_submit(uint16_t dev_id, uint16_t vchan); +static inline int +rte_dmadev_submit(uint16_t dev_id, uint16_t vchan) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.nb_vchans) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->submit, -ENOTSUP); +#endif + + return (*dev->submit)(dev, vchan); +} /** * @warning @@ -915,9 +964,37 @@ rte_dmadev_submit(uint16_t dev_id, uint16_t vchan); * must be less than or equal to the value of nb_cpls. */ __rte_experimental -uint16_t +static inline uint16_t rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, - uint16_t *last_idx, bool *has_error); + uint16_t *last_idx, bool *has_error) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + uint16_t idx; + bool err; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.nb_vchans || nb_cpls == 0) + return 0; + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed, 0); +#endif + + /* Ensure the pointer values are non-null to simplify drivers. + * In most cases these should be compile time evaluated, since this is + * an inline function. + * - If NULL is explicitly passed as parameter, then compiler knows the + * value is NULL + * - If address of local variable is passed as parameter, then compiler + * can know it's non-NULL. + */ + if (last_idx == NULL) + last_idx = &idx; + if (has_error == NULL) + has_error = &err; + + *has_error = false; + return (*dev->completed)(dev, vchan, nb_cpls, last_idx, has_error); +} /** * @warning @@ -947,10 +1024,27 @@ rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, * status array are also set. 
*/ __rte_experimental -uint16_t +static inline uint16_t rte_dmadev_completed_status(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx, - enum rte_dma_status_code *status); + enum rte_dma_status_code *status) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + uint16_t idx; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.nb_vchans || + nb_cpls == 0 || status == NULL) + return 0; + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed_status, 0); +#endif + + if (last_idx == NULL) + last_idx = &idx; + + return (*dev->completed_status)(dev, vchan, nb_cpls, last_idx, status); +} #ifdef __cplusplus } diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h index ff7b70a..aa8e622 100644 --- a/lib/dmadev/rte_dmadev_core.h +++ b/lib/dmadev/rte_dmadev_core.h @@ -177,4 +177,6 @@ struct rte_dmadev { uint64_t reserved[2]; /**< Reserved for future fields. */ } __rte_cache_aligned; +extern struct rte_dmadev rte_dmadevices[]; + #endif /* _RTE_DMADEV_CORE_H_ */ diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map index 408b93c..86c5e75 100644 --- a/lib/dmadev/version.map +++ b/lib/dmadev/version.map @@ -27,6 +27,7 @@ EXPERIMENTAL { INTERNAL { global: + rte_dmadevices; rte_dmadev_get_device_by_name; rte_dmadev_pmd_allocate; rte_dmadev_pmd_release; From patchwork Mon Aug 23 03:31:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 97190 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7D0C2A0C54; Mon, 23 Aug 2021 05:36:32 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4A21541185; Mon, 23 Aug 2021 05:35:47 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id A08B341100 for ; Mon, 23 Aug 2021 05:35:35 +0200 (CEST) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.54]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4GtHpn3wyvzbh9H; Mon, 23 Aug 2021 11:31:45 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Mon, 23 Aug 2021 11:31:30 +0800 Message-ID: <1629689494-55091-6-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v16 5/9] doc: add DMA device library guide X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK 
patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds dmadev library guide. Signed-off-by: Chengwen Feng Acked-by: Conor Walsh --- doc/guides/prog_guide/dmadev.rst | 125 ++++++++++++++++ doc/guides/prog_guide/img/dmadev.svg | 283 +++++++++++++++++++++++++++++++++++ doc/guides/prog_guide/index.rst | 1 + 3 files changed, 409 insertions(+) create mode 100644 doc/guides/prog_guide/dmadev.rst create mode 100644 doc/guides/prog_guide/img/dmadev.svg diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst new file mode 100644 index 0000000..75bac04 --- /dev/null +++ b/doc/guides/prog_guide/dmadev.rst @@ -0,0 +1,125 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2021 HiSilicon Limited + +DMA Device Library +==================== + +The DMA library provides a DMA device framework for management and provisioning +of hardware and software DMA poll mode drivers, defining generic APIs which +support a number of different DMA operations. + + +Design Principles +----------------- + +The DMA library follows the same basic principles as those used in DPDK's +Ethernet Device framework and the RegEx framework. The DMA framework provides +a generic DMA device framework which supports both physical (hardware) +and virtual (software) DMA devices as well as a generic DMA API which allows +DMA devices to be managed and configured and supports DMA operations to be +provisioned on DMA poll mode driver. + +.. _figure_dmadev: + +.. figure:: img/dmadev.* + +The above figure shows the model on which the DMA framework is built on: + + * The DMA controller could have multiple hardware DMA channels (aka. hardware + DMA queues), each hardware DMA channel should be represented by a dmadev. + * The dmadev could create multiple virtual DMA channels, each virtual DMA + channel represents a different transfer context. The DMA operation request + must be submitted to the virtual DMA channel. e.g. Application could create + virtual DMA channel 0 for memory-to-memory transfer scenario, and create + virtual DMA channel 1 for memory-to-device transfer scenario. + + +Device Management +----------------- + +Device Creation +~~~~~~~~~~~~~~~ + +Physical DMA controllers are discovered during the PCI probe/enumeration of the +EAL function which is executed at DPDK initialization, this is based on their +PCI BDF (bus/bridge, device, function). Specific physical DMA controllers, like +other physical devices in DPDK can be listed using the EAL command line options. + +The dmadevs are dynamically allocated by using the API +``rte_dmadev_pmd_allocate`` based on the number of hardware DMA channels. + + +Device Identification +~~~~~~~~~~~~~~~~~~~~~ + +Each DMA device, whether physical or virtual is uniquely designated by two +identifiers: + +- A unique device index used to designate the DMA device in all functions + exported by the DMA API. + +- A device name used to designate the DMA device in console messages, for + administration or debugging purposes. + + +Device Configuration +~~~~~~~~~~~~~~~~~~~~ + +The rte_dmadev_configure API is used to configure a DMA device. + +.. code-block:: c + + int rte_dmadev_configure(uint16_t dev_id, + const struct rte_dmadev_conf *dev_conf); + +The ``rte_dmadev_conf`` structure is used to pass the configuration parameters +for the DMA device for example the number of virtual DMA channels to set up, +indication of whether to enable silent mode. 
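+
+A minimal configuration sequence could look like the sketch below. It is an
+illustration only: it assumes a device identified by ``dev_id`` which supports
+at least one virtual DMA channel, and error handling is reduced to a bare
+return-code check.
+
+.. code-block:: c
+
+   struct rte_dmadev_conf dev_conf = {
+       .nb_vchans = 1,         /* set up a single virtual DMA channel */
+       .enable_silent = false, /* completions dequeued by the application */
+   };
+   int ret;
+
+   ret = rte_dmadev_configure(dev_id, &dev_conf);
+   if (ret != 0)
+       return ret; /* e.g. -EINVAL on bad arguments, -EBUSY if started */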
+ + +Configuration of Virtual DMA Channels +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The rte_dmadev_vchan_setup API is used to configure a virtual DMA channel. + +.. code-block:: c + + int rte_dmadev_vchan_setup(uint16_t dev_id, uint16_t vchan, + const struct rte_dmadev_vchan_conf *conf); + +The ``rte_dmadev_vchan_conf`` structure is used to pass the configuration +parameters for the virtual DMA channel for example transfer direction, number of +descriptor for the virtual DMA channel, source device access port parameter, +destination device access port parameter. + + +Device Features and Capabilities +-------------------------------- + +DMA devices may support different feature sets. The ``rte_dmadev_info_get`` API +can be used to get the device info and supported features. + +Silent mode is a special device capability which does not require the +application to invoke dequeue APIs. + + +Enqueue / Dequeue APIs +~~~~~~~~~~~~~~~~~~~~~~ + +Enqueue APIs such as ``rte_dmadev_copy`` and ``rte_dmadev_fill`` can be used to +enqueue operations to hardware. If an enqueue is successful, a ``ring_idx`` is +returned. This ``ring_idx`` can be used by applications to track per-operation +metadata in an application-defined circular ring. + +The ``rte_dmadev_submit`` API is used to issue doorbell to hardware. +Alternatively the ``RTE_DMA_OP_FLAG_SUBMIT`` flag can be passed to the enqueue +APIs to also issue the doorbell to hardware. + +There are two dequeue APIs ``rte_dmadev_completed`` and +``rte_dmadev_completed_status``, these are used to obtain the results of +the enqueue requests. ``rte_dmadev_completed`` will return the number of +successfully completed operations. ``rte_dmadev_completed_status`` will return +the number of completed operations along with the status of each operation +(filled into the ``status`` array passed by user). These two APIs can also +return the last completed operation's ``ring_idx`` which could help user track +operations within their own application-defined rings. 
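+
+Putting the above together, a simplified copy loop could be structured as in
+the sketch below. The virtual DMA channel setup and the IOVA addresses
+(``src_iova``, ``dst_iova``) are assumed to have been prepared by the
+application, and ``handle_errors`` is a placeholder for application-defined
+recovery logic.
+
+.. code-block:: c
+
+   uint16_t last_idx;
+   uint16_t nb_done;
+   bool has_error;
+   int ring_idx;
+
+   /* Enqueue one copy and issue the doorbell in the same call. */
+   ring_idx = rte_dmadev_copy(dev_id, vchan, src_iova, dst_iova,
+                              length, RTE_DMA_OP_FLAG_SUBMIT);
+   if (ring_idx < 0)
+       return ring_idx; /* e.g. no free descriptor was available */
+
+   /* Later: poll for up to 32 completed operations on this vchan. */
+   nb_done = rte_dmadev_completed(dev_id, vchan, 32, &last_idx, &has_error);
+   if (has_error)
+       handle_errors(dev_id, vchan); /* rte_dmadev_completed_status() gives
+                                      * per-operation status codes */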
diff --git a/doc/guides/prog_guide/img/dmadev.svg b/doc/guides/prog_guide/img/dmadev.svg new file mode 100644 index 0000000..157d7eb --- /dev/null +++ b/doc/guides/prog_guide/img/dmadev.svg @@ -0,0 +1,283 @@ + + + + + + + + + + + + + + virtual DMA channel + + virtual DMA channel + + virtual DMA channel + + + dmadev + + hardware DMA channel + + hardware DMA channel + + hardware DMA controller + + dmadev + + + + + + + + + diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 2dce507..0abea06 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -29,6 +29,7 @@ Programmer's Guide regexdev rte_security rawdev + dmadev link_bonding_poll_mode_drv_lib timer_lib hash_lib From patchwork Mon Aug 23 03:31:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 97185 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1CF33A0C54; Mon, 23 Aug 2021 05:35:56 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 28D8E4113F; Mon, 23 Aug 2021 05:35:40 +0200 (CEST) Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by mails.dpdk.org (Postfix) with ESMTP id DB9BB40687 for ; Mon, 23 Aug 2021 05:35:34 +0200 (CEST) Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.53]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4GtHpn1p1MzbdQr; Mon, 23 Aug 2021 11:31:45 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Mon, 23 Aug 2021 11:31:31 +0800 Message-ID: <1629689494-55091-7-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v16 6/9] dma/skeleton: introduce skeleton dmadev driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Skeleton dmadevice driver, on the lines of rawdev skeleton, is for showcasing of the dmadev library. This driver implements cpucopy 'DMA', so that a test module can be developed. Design of skeleton involves a virtual device which is plugged into VDEV bus on initialization. Also, enable compilation of dmadev skeleton drivers. 
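As an illustration (command line and lcore value below are examples only), the
driver can be exercised from any DPDK application via the vdev bus; the
optional 'lcore' devarg pins the cpucopy thread to a given core:

    ./dpdk-app --vdev="dma_skeleton,lcore=3"

The resulting device is then driven through the generic dmadev API, e.g.
looked up by its vdev name:

    int dev_id = rte_dmadev_get_dev_id("dma_skeleton");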
Signed-off-by: Chengwen Feng --- drivers/dma/meson.build | 11 + drivers/dma/skeleton/meson.build | 7 + drivers/dma/skeleton/skeleton_dmadev.c | 595 +++++++++++++++++++++++++++++++++ drivers/dma/skeleton/skeleton_dmadev.h | 75 +++++ drivers/dma/skeleton/version.map | 3 + drivers/meson.build | 1 + 6 files changed, 692 insertions(+) create mode 100644 drivers/dma/meson.build create mode 100644 drivers/dma/skeleton/meson.build create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h create mode 100644 drivers/dma/skeleton/version.map diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build new file mode 100644 index 0000000..0c2c34c --- /dev/null +++ b/drivers/dma/meson.build @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2021 HiSilicon Limited. + +if is_windows + subdir_done() +endif + +drivers = [ + 'skeleton', +] +std_deps = ['dmadev'] diff --git a/drivers/dma/skeleton/meson.build b/drivers/dma/skeleton/meson.build new file mode 100644 index 0000000..27509b1 --- /dev/null +++ b/drivers/dma/skeleton/meson.build @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2021 HiSilicon Limited. + +deps += ['dmadev', 'kvargs', 'ring', 'bus_vdev'] +sources = files( + 'skeleton_dmadev.c', +) diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c new file mode 100644 index 0000000..b3ab4a0 --- /dev/null +++ b/drivers/dma/skeleton/skeleton_dmadev.c @@ -0,0 +1,595 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited. + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "skeleton_dmadev.h" + +/* Count of instances */ +static uint16_t skeldma_init_once; + +static int +skeldma_info_get(const struct rte_dmadev *dev, struct rte_dmadev_info *dev_info, + uint32_t info_sz) +{ +#define SKELDMA_MAX_DESC 8192 +#define SKELDMA_MIN_DESC 128 + + RTE_SET_USED(dev); + RTE_SET_USED(info_sz); + + dev_info->dev_capa = RTE_DMADEV_CAPA_MEM_TO_MEM | + RTE_DMADEV_CAPA_SVA | + RTE_DMADEV_CAPA_OPS_COPY; + dev_info->max_vchans = 1; + dev_info->max_desc = SKELDMA_MAX_DESC; + dev_info->min_desc = SKELDMA_MIN_DESC; + + return 0; +} + +static int +skeldma_configure(struct rte_dmadev *dev, const struct rte_dmadev_conf *conf) +{ + RTE_SET_USED(dev); + RTE_SET_USED(conf); + return 0; +} + +static void * +cpucopy_thread(void *param) +{ +#define SLEEP_THRESHOLD 10000 +#define SLEEP_US_VAL 10 + + struct rte_dmadev *dev = (struct rte_dmadev *)param; + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc = NULL; + int ret; + + while (!hw->exit_flag) { + ret = rte_ring_dequeue(hw->desc_running, (void **)&desc); + if (ret) { + hw->zero_req_count++; + if (hw->zero_req_count > SLEEP_THRESHOLD) { + if (hw->zero_req_count == 0) + hw->zero_req_count = SLEEP_THRESHOLD; + rte_delay_us_sleep(SLEEP_US_VAL); + } + continue; + } + + hw->zero_req_count = 0; + rte_memcpy(desc->dst, desc->src, desc->len); + hw->completed_count++; + (void)rte_ring_enqueue(hw->desc_completed, (void *)desc); + } + + return NULL; +} + +static void +fflush_ring(struct skeldma_hw *hw, struct rte_ring *ring) +{ + struct skeldma_desc *desc = NULL; + while (rte_ring_count(ring) > 0) { + (void)rte_ring_dequeue(ring, (void **)&desc); + (void)rte_ring_enqueue(hw->desc_empty, (void *)desc); + } +} + +static int 
+skeldma_start(struct rte_dmadev *dev) +{ + struct skeldma_hw *hw = dev->dev_private; + rte_cpuset_t cpuset; + int ret; + + if (hw->desc_mem == NULL) { + SKELDMA_ERR("Vchan was not setup, start fail!\n"); + return -EINVAL; + } + + /* Reset the dmadev to a known state, include: + * 1) fflush pending/running/completed ring to empty ring. + * 2) init ring idx to zero. + * 3) init running statistics. + * 4) mark cpucopy task exit_flag to false. + */ + fflush_ring(hw, hw->desc_pending); + fflush_ring(hw, hw->desc_running); + fflush_ring(hw, hw->desc_completed); + hw->ridx = 0; + hw->submitted_count = 0; + hw->zero_req_count = 0; + hw->completed_count = 0; + hw->exit_flag = false; + + rte_mb(); + + ret = rte_ctrl_thread_create(&hw->thread, "dma_skeleton", NULL, + cpucopy_thread, dev); + if (ret) { + SKELDMA_ERR("Start cpucopy thread fail!\n"); + return -EINVAL; + } + + if (hw->lcore_id != -1) { + cpuset = rte_lcore_cpuset(hw->lcore_id); + ret = pthread_setaffinity_np(hw->thread, sizeof(cpuset), + &cpuset); + if (ret) + SKELDMA_WARN("Set thread affinity lcore = %u fail!\n", + hw->lcore_id); + } + + return 0; +} + +static int +skeldma_stop(struct rte_dmadev *dev) +{ + struct skeldma_hw *hw = dev->dev_private; + + hw->exit_flag = true; + rte_delay_ms(1); + + pthread_cancel(hw->thread); + pthread_join(hw->thread, NULL); + + return 0; +} + +static int +vchan_setup(struct skeldma_hw *hw, uint16_t nb_desc) +{ + struct skeldma_desc *desc; + struct rte_ring *empty; + struct rte_ring *pending; + struct rte_ring *running; + struct rte_ring *completed; + uint16_t i; + + desc = rte_zmalloc_socket("dma_skelteon_desc", + nb_desc * sizeof(struct skeldma_desc), + RTE_CACHE_LINE_SIZE, hw->socket_id); + if (desc == NULL) { + SKELDMA_ERR("Malloc dma skeleton desc fail!\n"); + return -ENOMEM; + } + + empty = rte_ring_create("dma_skeleton_desc_empty", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + pending = rte_ring_create("dma_skeleton_desc_pending", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + running = rte_ring_create("dma_skeleton_desc_running", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + completed = rte_ring_create("dma_skeleton_desc_completed", nb_desc, + hw->socket_id, RING_F_SP_ENQ | RING_F_SC_DEQ); + if (empty == NULL || pending == NULL || running == NULL || + completed == NULL) { + SKELDMA_ERR("Create dma skeleton desc ring fail!\n"); + rte_ring_free(empty); + rte_ring_free(pending); + rte_ring_free(running); + rte_ring_free(completed); + rte_free(desc); + return -ENOMEM; + } + + /* The real usable ring size is *count-1* instead of *count* to + * differentiate a free ring from an empty ring. 
+ * @see rte_ring_create + */ + for (i = 0; i < nb_desc - 1; i++) + (void)rte_ring_enqueue(empty, (void *)(desc + i)); + + hw->desc_mem = desc; + hw->desc_empty = empty; + hw->desc_pending = pending; + hw->desc_running = running; + hw->desc_completed = completed; + + return 0; +} + +static void +vchan_release(struct skeldma_hw *hw) +{ + if (hw->desc_mem == NULL) + return; + + rte_free(hw->desc_mem); + hw->desc_mem = NULL; + rte_ring_free(hw->desc_empty); + hw->desc_empty = NULL; + rte_ring_free(hw->desc_pending); + hw->desc_pending = NULL; + rte_ring_free(hw->desc_running); + hw->desc_running = NULL; + rte_ring_free(hw->desc_completed); + hw->desc_completed = NULL; +} + +static int +skeldma_close(struct rte_dmadev *dev) +{ + /* The device already stopped */ + vchan_release(dev->dev_private); + return 0; +} + +static int +skeldma_vchan_setup(struct rte_dmadev *dev, uint16_t vchan, + const struct rte_dmadev_vchan_conf *conf) +{ + struct skeldma_hw *hw = dev->dev_private; + + RTE_SET_USED(vchan); + + if (!rte_is_power_of_2(conf->nb_desc)) { + SKELDMA_ERR("Number of desc must be power of 2!\n"); + return -EINVAL; + } + + vchan_release(hw); + return vchan_setup(hw, conf->nb_desc); +} + +static int +skeldma_stats_get(const struct rte_dmadev *dev, uint16_t vchan, + struct rte_dmadev_stats *stats, uint32_t stats_sz) +{ + struct skeldma_hw *hw = dev->dev_private; + + RTE_SET_USED(vchan); + RTE_SET_USED(stats_sz); + + stats->submitted = hw->submitted_count; + stats->completed = hw->completed_count; + stats->errors = 0; + + return 0; +} + +static int +skeldma_stats_reset(struct rte_dmadev *dev, uint16_t vchan) +{ + struct skeldma_hw *hw = dev->dev_private; + + RTE_SET_USED(vchan); + + hw->submitted_count = 0; + hw->completed_count = 0; + + return 0; +} + +static int +skeldma_dump(const struct rte_dmadev *dev, FILE *f) +{ +#define GET_RING_COUNT(ring) ((ring) ? 
(rte_ring_count(ring)) : 0) + + struct skeldma_hw *hw = dev->dev_private; + + fprintf(f, + " lcore_id: %d\n" + " socket_id: %d\n" + " desc_empty_ring_count: %u\n" + " desc_pending_ring_count: %u\n" + " desc_running_ring_count: %u\n" + " desc_completed_ring_count: %u\n", + hw->lcore_id, hw->socket_id, + GET_RING_COUNT(hw->desc_empty), + GET_RING_COUNT(hw->desc_pending), + GET_RING_COUNT(hw->desc_running), + GET_RING_COUNT(hw->desc_completed)); + fprintf(f, + " next_ring_idx: %u\n" + " submitted_count: %" PRIu64 "\n" + " completed_count: %" PRIu64 "\n", + hw->ridx, hw->submitted_count, hw->completed_count); + + return 0; +} + +static void +submit(struct skeldma_hw *hw, struct skeldma_desc *desc) +{ + uint16_t count = rte_ring_count(hw->desc_pending); + struct skeldma_desc *pend_desc = NULL; + + while (count > 0) { + (void)rte_ring_dequeue(hw->desc_pending, (void **)&pend_desc); + (void)rte_ring_enqueue(hw->desc_running, (void *)pend_desc); + count--; + } + + if (desc) + (void)rte_ring_enqueue(hw->desc_running, (void *)desc); +} + +static int +skeldma_copy(struct rte_dmadev *dev, uint16_t vchan, + rte_iova_t src, rte_iova_t dst, + uint32_t length, uint64_t flags) +{ + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc; + int ret; + + RTE_SET_USED(vchan); + RTE_SET_USED(flags); + + ret = rte_ring_dequeue(hw->desc_empty, (void **)&desc); + if (ret) + return -ENOSPC; + desc->src = (void *)src; + desc->dst = (void *)dst; + desc->len = length; + desc->ridx = hw->ridx++; + if (flags & RTE_DMA_OP_FLAG_SUBMIT) + submit(hw, desc); + else + (void)rte_ring_enqueue(hw->desc_pending, (void *)desc); + hw->submitted_count++; + + return 0; +} + +static int +skeldma_submit(struct rte_dmadev *dev, uint16_t vchan) +{ + struct skeldma_hw *hw = dev->dev_private; + RTE_SET_USED(vchan); + submit(hw, NULL); + return 0; +} + +static uint16_t +skeldma_completed(struct rte_dmadev *dev, + uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, bool *has_error) +{ + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc = NULL; + uint16_t index = 0; + uint16_t count; + + RTE_SET_USED(vchan); + RTE_SET_USED(has_error); + + count = RTE_MIN(nb_cpls, rte_ring_count(hw->desc_completed)); + while (index < count) { + (void)rte_ring_dequeue(hw->desc_completed, (void **)&desc); + if (index == count - 1) + *last_idx = desc->ridx; + index++; + (void)rte_ring_enqueue(hw->desc_empty, (void *)desc); + } + + return count; +} + +static uint16_t +skeldma_completed_status(struct rte_dmadev *dev, + uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, enum rte_dma_status_code *status) +{ + struct skeldma_hw *hw = dev->dev_private; + struct skeldma_desc *desc = NULL; + uint16_t index = 0; + uint16_t count; + + RTE_SET_USED(vchan); + + count = RTE_MIN(nb_cpls, rte_ring_count(hw->desc_completed)); + while (index < count) { + (void)rte_ring_dequeue(hw->desc_completed, (void **)&desc); + if (index == count - 1) + *last_idx = desc->ridx; + status[index++] = RTE_DMA_STATUS_SUCCESSFUL; + (void)rte_ring_enqueue(hw->desc_empty, (void *)desc); + } + + return count; +} + +static const struct rte_dmadev_ops skeldma_ops = { + .dev_info_get = skeldma_info_get, + .dev_configure = skeldma_configure, + .dev_start = skeldma_start, + .dev_stop = skeldma_stop, + .dev_close = skeldma_close, + + .vchan_setup = skeldma_vchan_setup, + + .stats_get = skeldma_stats_get, + .stats_reset = skeldma_stats_reset, + + .dev_dump = skeldma_dump, +}; + +static int +skeldma_create(const char *name, struct rte_vdev_device *vdev, 
int lcore_id) +{ + struct rte_dmadev *dev; + struct skeldma_hw *hw; + int socket_id; + + dev = rte_dmadev_pmd_allocate(name); + if (dev == NULL) { + SKELDMA_ERR("Unable to allocate dmadev: %s\n", name); + return -EINVAL; + } + + socket_id = (lcore_id < 0) ? rte_socket_id() : + rte_lcore_to_socket_id(lcore_id); + dev->dev_private = rte_zmalloc_socket("dmadev private", + sizeof(struct skeldma_hw), + RTE_CACHE_LINE_SIZE, + socket_id); + if (!dev->dev_private) { + SKELDMA_ERR("Unable to allocate device private memory\n"); + (void)rte_dmadev_pmd_release(dev); + return -ENOMEM; + } + + dev->copy = skeldma_copy; + dev->submit = skeldma_submit; + dev->completed = skeldma_completed; + dev->completed_status = skeldma_completed_status; + dev->dev_ops = &skeldma_ops; + dev->device = &vdev->device; + + hw = dev->dev_private; + hw->lcore_id = lcore_id; + hw->socket_id = socket_id; + + return dev->data->dev_id; +} + +static int +skeldma_destroy(const char *name) +{ + struct rte_dmadev *dev; + int ret; + + dev = rte_dmadev_get_device_by_name(name); + if (!dev) + return -EINVAL; + + ret = rte_dmadev_close(dev->data->dev_id); + if (ret) + return ret; + + rte_free(dev->dev_private); + dev->dev_private = NULL; + (void)rte_dmadev_pmd_release(dev); + + return 0; +} + +static int +skeldma_parse_lcore(const char *key __rte_unused, + const char *value, + void *opaque) +{ + int lcore_id = atoi(value); + if (lcore_id >= 0 && lcore_id < RTE_MAX_LCORE) + *(int *)opaque = lcore_id; + return 0; +} + +static void +skeldma_parse_vdev_args(struct rte_vdev_device *vdev, int *lcore_id) +{ + static const char *const args[] = { + SKELDMA_ARG_LCORE, + NULL + }; + + struct rte_kvargs *kvlist; + const char *params; + + params = rte_vdev_device_args(vdev); + if (params == NULL || params[0] == '\0') + return; + + kvlist = rte_kvargs_parse(params, args); + if (!kvlist) + return; + + (void)rte_kvargs_process(kvlist, SKELDMA_ARG_LCORE, + skeldma_parse_lcore, lcore_id); + + SKELDMA_INFO("Parse lcore_id = %d\n", *lcore_id); + + rte_kvargs_free(kvlist); +} + +static int +skeldma_probe(struct rte_vdev_device *vdev) +{ + const char *name; + int lcore_id = -1; + int ret; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + SKELDMA_ERR("Multiple process not supported for %s\n", name); + return -EINVAL; + } + + /* More than one instance is not supported */ + if (skeldma_init_once) { + SKELDMA_ERR("Multiple instance not supported for %s\n", name); + return -EINVAL; + } + + skeldma_parse_vdev_args(vdev, &lcore_id); + + ret = skeldma_create(name, vdev, lcore_id); + if (ret >= 0) { + SKELDMA_INFO("Create %s dmadev lcore-id %d\n", name, lcore_id); + /* Device instance created; Second instance not possible */ + skeldma_init_once = 1; + } + + return ret < 0 ? 
ret : 0; +} + +static int +skeldma_remove(struct rte_vdev_device *vdev) +{ + const char *name; + int ret; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -1; + + ret = skeldma_destroy(name); + if (!ret) { + skeldma_init_once = 0; + SKELDMA_INFO("Remove %s dmadev\n", name); + } + + return ret; +} + +static struct rte_vdev_driver skeldma_pmd_drv = { + .probe = skeldma_probe, + .remove = skeldma_remove, + .drv_flags = RTE_VDEV_DRV_NEED_IOVA_AS_VA, +}; + +RTE_LOG_REGISTER_DEFAULT(skeldma_logtype, INFO); +RTE_PMD_REGISTER_VDEV(dma_skeleton, skeldma_pmd_drv); +RTE_PMD_REGISTER_PARAM_STRING(dma_skeleton, + SKELDMA_ARG_LCORE "= "); diff --git a/drivers/dma/skeleton/skeleton_dmadev.h b/drivers/dma/skeleton/skeleton_dmadev.h new file mode 100644 index 0000000..6495653 --- /dev/null +++ b/drivers/dma/skeleton/skeleton_dmadev.h @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited. + */ + +#ifndef __SKELETON_DMADEV_H__ +#define __SKELETON_DMADEV_H__ + +#include + +extern int skeldma_logtype; +#define SKELDMA_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, skeldma_logtype, "%s(): " fmt "", \ + __func__, ##args) + +#define SKELDMA_DEBUG(fmt, args...) \ + SKELDMA_LOG(DEBUG, fmt, ## args) +#define SKELDMA_INFO(fmt, args...) \ + SKELDMA_LOG(INFO, fmt, ## args) +#define SKELDMA_WARN(fmt, args...) \ + SKELDMA_LOG(WARNING, fmt, ## args) +#define SKELDMA_ERR(fmt, args...) \ + SKELDMA_LOG(ERR, fmt, ## args) + +#define SKELDMA_ARG_LCORE "lcore" + +struct skeldma_desc { + void *src; + void *dst; + uint32_t len; + uint16_t ridx; /* ring idx */ +}; + +struct skeldma_hw { + int lcore_id; /* cpucopy task affinity core */ + int socket_id; + pthread_t thread; /* cpucopy task thread */ + volatile int exit_flag; /* cpucopy task exit flag */ + + struct skeldma_desc *desc_mem; + + /* Descriptor ring state machine: + * + * ----------- enqueue without submit ----------- + * | empty |------------------------------->| pending | + * -----------\ ----------- + * ^ \------------ | + * | | |submit doorbell + * | | | + * | |enqueue with submit | + * |get completed |------------------| | + * | | | + * | v v + * ----------- cpucopy thread working ----------- + * |completed|<-------------------------------| running | + * ----------- ----------- + */ + struct rte_ring *desc_empty; + struct rte_ring *desc_pending; + struct rte_ring *desc_running; + struct rte_ring *desc_completed; + + /* Cache delimiter for dataplane API's operation data */ + char cache1 __rte_cache_aligned; + uint16_t ridx; /* ring idx */ + uint64_t submitted_count; + + /* Cache delimiter for cpucopy thread's operation data */ + char cache2 __rte_cache_aligned; + uint32_t zero_req_count; + uint64_t completed_count; +}; + +int test_dma_skeleton(uint16_t dev_id); + +#endif /* __SKELETON_DMADEV_H__ */ diff --git a/drivers/dma/skeleton/version.map b/drivers/dma/skeleton/version.map new file mode 100644 index 0000000..c2e0723 --- /dev/null +++ b/drivers/dma/skeleton/version.map @@ -0,0 +1,3 @@ +DPDK_22 { + local: *; +}; diff --git a/drivers/meson.build b/drivers/meson.build index bc6f4f5..383f648 100644 --- a/drivers/meson.build +++ b/drivers/meson.build @@ -18,6 +18,7 @@ subdirs = [ 'vdpa', # depends on common, bus and mempool. 'event', # depends on common, bus, mempool and net. 'baseband', # depends on common and bus. + 'dma', # depends on common and bus. 
] if meson.is_cross_build() From patchwork Mon Aug 23 03:31:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 97184 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4390BA0C54; Mon, 23 Aug 2021 05:35:49 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id ED9C341122; Mon, 23 Aug 2021 05:35:38 +0200 (CEST) Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) by mails.dpdk.org (Postfix) with ESMTP id AB04E40143 for ; Mon, 23 Aug 2021 05:35:34 +0200 (CEST) Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.53]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4GtHtZ5QJvz1CZZ1; Mon, 23 Aug 2021 11:35:02 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Mon, 23 Aug 2021 11:31:32 +0800 Message-ID: <1629689494-55091-8-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v16 7/9] dma/skeleton: add test cases X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Patch introduces dmadev unit testcase for validation against the skeleton dmadev PMD implementation. Test cases are added along with the skeleton driver implementation. It can be enabled by using vdev argument to any DPDK binary: --vdev="dma_skeleton,selftest=1" In case 'selftest=1' is not provided, autotest doesn't execute the test cases but the vdev is still available for application use. 
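Besides the vdev argument, the same self-test can be invoked programmatically
once the device has been created, through the generic API added earlier in
this series (a sketch; a zero return value is assumed to indicate a passing
run):

    int dev_id = rte_dmadev_get_dev_id("dma_skeleton");
    if (dev_id >= 0 && rte_dmadev_selftest(dev_id) == 0)
        printf("dma_skeleton self-test passed\n");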
Signed-off-by: Chengwen Feng --- drivers/dma/skeleton/meson.build | 1 + drivers/dma/skeleton/skeleton_dmadev.c | 34 +- drivers/dma/skeleton/skeleton_dmadev.h | 1 + drivers/dma/skeleton/skeleton_dmadev_test.c | 521 ++++++++++++++++++++++++++++ 4 files changed, 553 insertions(+), 4 deletions(-) create mode 100644 drivers/dma/skeleton/skeleton_dmadev_test.c diff --git a/drivers/dma/skeleton/meson.build b/drivers/dma/skeleton/meson.build index 27509b1..5d47339 100644 --- a/drivers/dma/skeleton/meson.build +++ b/drivers/dma/skeleton/meson.build @@ -4,4 +4,5 @@ deps += ['dmadev', 'kvargs', 'ring', 'bus_vdev'] sources = files( 'skeleton_dmadev.c', + 'skeleton_dmadev_test.c', ) diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c index b3ab4a0..1707e88 100644 --- a/drivers/dma/skeleton/skeleton_dmadev.c +++ b/drivers/dma/skeleton/skeleton_dmadev.c @@ -430,6 +430,7 @@ static const struct rte_dmadev_ops skeldma_ops = { .stats_reset = skeldma_stats_reset, .dev_dump = skeldma_dump, + .dev_selftest = test_dma_skeleton, }; static int @@ -503,11 +504,24 @@ skeldma_parse_lcore(const char *key __rte_unused, return 0; } +static int +skeldma_parse_selftest(const char *key __rte_unused, + const char *value, + void *opaque) +{ + int flag = atoi(value); + if (flag == 0 || flag == 1) + *(int *)opaque = flag; + return 0; +} + static void -skeldma_parse_vdev_args(struct rte_vdev_device *vdev, int *lcore_id) +skeldma_parse_vdev_args(struct rte_vdev_device *vdev, + int *lcore_id, int *selftest) { static const char *const args[] = { SKELDMA_ARG_LCORE, + SKELDMA_ARG_SELFTEST, NULL }; @@ -524,8 +538,11 @@ skeldma_parse_vdev_args(struct rte_vdev_device *vdev, int *lcore_id) (void)rte_kvargs_process(kvlist, SKELDMA_ARG_LCORE, skeldma_parse_lcore, lcore_id); + (void)rte_kvargs_process(kvlist, SKELDMA_ARG_SELFTEST, + skeldma_parse_selftest, selftest); - SKELDMA_INFO("Parse lcore_id = %d\n", *lcore_id); + SKELDMA_INFO("Parse lcore_id = %d selftest = %d\n", + *lcore_id, *selftest); rte_kvargs_free(kvlist); } @@ -535,6 +552,7 @@ skeldma_probe(struct rte_vdev_device *vdev) { const char *name; int lcore_id = -1; + int selftest = 0; int ret; name = rte_vdev_device_name(vdev); @@ -552,10 +570,17 @@ skeldma_probe(struct rte_vdev_device *vdev) return -EINVAL; } - skeldma_parse_vdev_args(vdev, &lcore_id); + skeldma_parse_vdev_args(vdev, &lcore_id, &selftest); ret = skeldma_create(name, vdev, lcore_id); if (ret >= 0) { + /* In case command line argument for 'selftest' was passed; + * if invalid arguments were passed, execution continues but + * without selftest. 
+ */ + if (selftest) + (void)test_dma_skeleton(ret); + SKELDMA_INFO("Create %s dmadev lcore-id %d\n", name, lcore_id); /* Device instance created; Second instance not possible */ skeldma_init_once = 1; @@ -592,4 +617,5 @@ static struct rte_vdev_driver skeldma_pmd_drv = { RTE_LOG_REGISTER_DEFAULT(skeldma_logtype, INFO); RTE_PMD_REGISTER_VDEV(dma_skeleton, skeldma_pmd_drv); RTE_PMD_REGISTER_PARAM_STRING(dma_skeleton, - SKELDMA_ARG_LCORE "= "); + SKELDMA_ARG_LCORE "= " + SKELDMA_ARG_SELFTEST "=<0|1> "); diff --git a/drivers/dma/skeleton/skeleton_dmadev.h b/drivers/dma/skeleton/skeleton_dmadev.h index 6495653..e8a310d 100644 --- a/drivers/dma/skeleton/skeleton_dmadev.h +++ b/drivers/dma/skeleton/skeleton_dmadev.h @@ -22,6 +22,7 @@ extern int skeldma_logtype; SKELDMA_LOG(ERR, fmt, ## args) #define SKELDMA_ARG_LCORE "lcore" +#define SKELDMA_ARG_SELFTEST "selftest" struct skeldma_desc { void *src; diff --git a/drivers/dma/skeleton/skeleton_dmadev_test.c b/drivers/dma/skeleton/skeleton_dmadev_test.c new file mode 100644 index 0000000..be56f07 --- /dev/null +++ b/drivers/dma/skeleton/skeleton_dmadev_test.c @@ -0,0 +1,521 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited. + */ + +#include + +#include +#include +#include +#include + +/* Using relative path as skeleton_dmadev is not part of exported headers */ +#include "skeleton_dmadev.h" + +#define SKELDMA_TEST_DEBUG(fmt, args...) \ + SKELDMA_LOG(DEBUG, fmt, ## args) +#define SKELDMA_TEST_INFO(fmt, args...) \ + SKELDMA_LOG(INFO, fmt, ## args) + +#define SKELDMA_TEST_RUN(test) \ + testsuite_run_test(test, #test) + +#define TEST_MEMCPY_SIZE 1024 +#define TEST_WAIT_US_VAL 50000 + +#define TEST_SUCCESS 0 +#define TEST_FAILED -1 + +static uint16_t test_dev_id; +static uint16_t invalid_dev_id; + +static int total; +static int passed; +static int failed; +static char *src; +static char *dst; + +static int +testsuite_setup(uint16_t dev_id) +{ + test_dev_id = dev_id; + invalid_dev_id = RTE_DMADEV_MAX_DEVS; + + src = rte_malloc("dmadev_test_src", TEST_MEMCPY_SIZE, 0); + if (src == NULL) + return -ENOMEM; + dst = rte_malloc("dmadev_test_dst", TEST_MEMCPY_SIZE, 0); + if (dst == NULL) + return -ENOMEM; + + total = 0; + passed = 0; + failed = 0; + + return 0; +} + +static void +testsuite_teardown(void) +{ + rte_free(src); + rte_free(dst); + /* Ensure the dmadev is stopped. 
*/ + rte_dmadev_stop(test_dev_id); +} + +static void +testsuite_run_test(int (*test)(void), const char *name) +{ + int ret = 0; + + if (test) { + ret = test(); + if (ret < 0) { + failed++; + SKELDMA_TEST_INFO("%s Failed", name); + } else { + passed++; + SKELDMA_TEST_DEBUG("%s Passed", name); + } + } + + total++; +} + +static int +test_dmadev_get_dev_id(void) +{ + int ret = rte_dmadev_get_dev_id("invalid_dmadev_device"); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + return TEST_SUCCESS; +} + +static int +test_dmadev_is_valid_dev(void) +{ + int ret; + ret = rte_dmadev_is_valid_dev(invalid_dev_id); + RTE_TEST_ASSERT(ret == false, "Expected false for invalid dev id"); + ret = rte_dmadev_is_valid_dev(test_dev_id); + RTE_TEST_ASSERT(ret == true, "Expected true for valid dev id"); + return TEST_SUCCESS; +} + +static int +test_dmadev_count(void) +{ + uint16_t count = rte_dmadev_count(); + RTE_TEST_ASSERT(count > 0, "Invalid dmadev count %u", count); + return TEST_SUCCESS; +} + +static int +test_dmadev_info_get(void) +{ + struct rte_dmadev_info info = { 0 }; + int ret; + + ret = rte_dmadev_info_get(invalid_dev_id, &info); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_info_get(test_dev_id, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_info_get(test_dev_id, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + + return TEST_SUCCESS; +} + +static int +test_dmadev_configure(void) +{ + struct rte_dmadev_conf conf = { 0 }; + struct rte_dmadev_info info = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dmadev_configure(invalid_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_configure(test_dev_id, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for nb_vchans == 0 */ + memset(&conf, 0, sizeof(conf)); + ret = rte_dmadev_configure(test_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for conf.nb_vchans > info.max_vchans */ + ret = rte_dmadev_info_get(test_dev_id, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + memset(&conf, 0, sizeof(conf)); + conf.nb_vchans = info.max_vchans + 1; + ret = rte_dmadev_configure(test_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check enable silent mode */ + memset(&conf, 0, sizeof(conf)); + conf.nb_vchans = info.max_vchans; + conf.enable_silent = true; + ret = rte_dmadev_configure(test_dev_id, &conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Configure success */ + memset(&conf, 0, sizeof(conf)); + conf.nb_vchans = info.max_vchans; + ret = rte_dmadev_configure(test_dev_id, &conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure dmadev, %d", ret); + + /* Check configure success */ + ret = rte_dmadev_info_get(test_dev_id, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + RTE_TEST_ASSERT_EQUAL(conf.nb_vchans, info.nb_vchans, + "Configure nb_vchans not match"); + + return TEST_SUCCESS; +} + +static int +test_dmadev_vchan_setup(void) +{ + struct rte_dmadev_vchan_conf vchan_conf = { 0 }; + struct rte_dmadev_conf dev_conf = { 0 }; + struct rte_dmadev_info dev_info = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dmadev_vchan_setup(invalid_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_vchan_setup(test_dev_id, 0, 
NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Make sure configure success */ + ret = rte_dmadev_info_get(test_dev_id, &dev_info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info"); + dev_conf.nb_vchans = dev_info.max_vchans; + ret = rte_dmadev_configure(test_dev_id, &dev_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure dmadev, %d", ret); + + /* Check for invalid vchan */ + ret = rte_dmadev_vchan_setup(test_dev_id, dev_conf.nb_vchans, + &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for direction */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_DEV_TO_DEV + 1; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM - 1; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for direction and dev_capa combination */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_DEV; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_DEV_TO_MEM; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_DEV_TO_DEV; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for nb_desc validation */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc - 1; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + vchan_conf.nb_desc = dev_info.max_desc + 1; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check src port type validation */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc; + vchan_conf.src_port.port_type = RTE_DMADEV_PORT_PCIE; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check dst port type validation */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc; + vchan_conf.dst_port.port_type = RTE_DMADEV_PORT_PCIE; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check vchan setup success */ + memset(&vchan_conf, 0, sizeof(vchan_conf)); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup vchan, %d", ret); + + return TEST_SUCCESS; +} + +static int +setup_one_vchan(void) +{ + struct rte_dmadev_vchan_conf vchan_conf = { 0 }; + struct rte_dmadev_info dev_info = { 0 }; + struct rte_dmadev_conf dev_conf = { 0 }; + int ret; + + ret = rte_dmadev_info_get(test_dev_id, &dev_info); + RTE_TEST_ASSERT_SUCCESS(ret, 
"Failed to obtain device info, %d", ret); + dev_conf.nb_vchans = dev_info.max_vchans; + ret = rte_dmadev_configure(test_dev_id, &dev_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure, %d", ret); + vchan_conf.direction = RTE_DMA_DIR_MEM_TO_MEM; + vchan_conf.nb_desc = dev_info.min_desc; + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup vchan, %d", ret); + + return TEST_SUCCESS; +} + +static int +test_dmadev_start_stop(void) +{ + struct rte_dmadev_vchan_conf vchan_conf = { 0 }; + struct rte_dmadev_conf dev_conf = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dmadev_start(invalid_dev_id); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_stop(invalid_dev_id); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + ret = rte_dmadev_start(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret); + + /* Check reconfigure and vchan setup when device started */ + ret = rte_dmadev_configure(test_dev_id, &dev_conf); + RTE_TEST_ASSERT(ret == -EBUSY, "Failed to configure, %d", ret); + ret = rte_dmadev_vchan_setup(test_dev_id, 0, &vchan_conf); + RTE_TEST_ASSERT(ret == -EBUSY, "Failed to setup vchan, %d", ret); + + ret = rte_dmadev_stop(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret); + + return TEST_SUCCESS; +} + +static int +test_dmadev_stats(void) +{ + struct rte_dmadev_info dev_info = { 0 }; + struct rte_dmadev_stats stats = { 0 }; + int ret; + + /* Check for invalid parameters */ + ret = rte_dmadev_stats_get(invalid_dev_id, 0, &stats); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_stats_get(invalid_dev_id, 0, NULL); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_stats_reset(invalid_dev_id, 0); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + /* Check for invalid vchan */ + ret = rte_dmadev_info_get(test_dev_id, &dev_info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret); + ret = rte_dmadev_stats_get(test_dev_id, dev_info.max_vchans, &stats); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + ret = rte_dmadev_stats_reset(test_dev_id, dev_info.max_vchans); + RTE_TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret); + + /* Check for valid vchan */ + ret = rte_dmadev_stats_get(test_dev_id, 0, &stats); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get stats, %d", ret); + ret = rte_dmadev_stats_get(test_dev_id, RTE_DMADEV_ALL_VCHAN, &stats); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get all stats, %d", ret); + ret = rte_dmadev_stats_reset(test_dev_id, 0); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to reset stats, %d", ret); + ret = rte_dmadev_stats_reset(test_dev_id, RTE_DMADEV_ALL_VCHAN); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to reset all stats, %d", ret); + + return TEST_SUCCESS; +} + +static int +test_dmadev_completed(void) +{ + uint16_t last_idx = 1; + bool has_error = true; + uint16_t cpl_ret; + int ret, i; + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + ret = rte_dmadev_start(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, 
%d", ret); + + /* Setup test memory */ + for (i = 0; i < TEST_MEMCPY_SIZE; i++) + src[i] = (char)i; + memset(dst, 0, TEST_MEMCPY_SIZE); + + /* Check enqueue without submit */ + ret = rte_dmadev_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, 0); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to enqueue copy, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dmadev_completed(test_dev_id, 0, 1, &last_idx, + &has_error); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 0, "Failed to get completed"); + + /* Check add submit */ + ret = rte_dmadev_submit(test_dev_id, 0); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to submit, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dmadev_completed(test_dev_id, 0, 1, &last_idx, + &has_error); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed"); + RTE_TEST_ASSERT_EQUAL(last_idx, 0, "Last idx should be zero, %u", + last_idx); + RTE_TEST_ASSERT_EQUAL(has_error, false, "Should have no error"); + for (i = 0; i < TEST_MEMCPY_SIZE; i++) { + if (src[i] != dst[i]) { + RTE_TEST_ASSERT_EQUAL(src[i], dst[i], + "Failed to copy memory, %d %d", src[i], dst[i]); + break; + } + } + + /* Setup test memory */ + for (i = 0; i < TEST_MEMCPY_SIZE; i++) + src[i] = (char)i; + memset(dst, 0, TEST_MEMCPY_SIZE); + + /* Check for enqueue with submit */ + ret = rte_dmadev_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, RTE_DMA_OP_FLAG_SUBMIT); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to enqueue copy, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dmadev_completed(test_dev_id, 0, 1, &last_idx, + &has_error); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed"); + RTE_TEST_ASSERT_EQUAL(last_idx, 1, "Last idx should be 1, %u", + last_idx); + RTE_TEST_ASSERT_EQUAL(has_error, false, "Should have no error"); + for (i = 0; i < TEST_MEMCPY_SIZE; i++) { + if (src[i] != dst[i]) { + RTE_TEST_ASSERT_EQUAL(src[i], dst[i], + "Failed to copy memory, %d %d", src[i], dst[i]); + break; + } + } + + /* Stop dmadev to make sure dmadev to a known state */ + ret = rte_dmadev_stop(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret); + + return TEST_SUCCESS; +} + +static int +test_dmadev_completed_status(void) +{ + enum rte_dma_status_code status[1] = { 1 }; + uint16_t last_idx = 1; + uint16_t cpl_ret, i; + int ret; + + /* Setup one vchan for later test */ + ret = setup_one_vchan(); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret); + + ret = rte_dmadev_start(test_dev_id); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret); + + /* Check for enqueue with submit */ + ret = rte_dmadev_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, RTE_DMA_OP_FLAG_SUBMIT); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to enqueue copy, %d", ret); + rte_delay_us_sleep(TEST_WAIT_US_VAL); + cpl_ret = rte_dmadev_completed_status(test_dev_id, 0, 1, &last_idx, + status); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to completed status"); + RTE_TEST_ASSERT_EQUAL(last_idx, 0, "Last idx should be zero, %u", + last_idx); + for (i = 0; i < RTE_DIM(status); i++) + RTE_TEST_ASSERT_EQUAL(status[i], 0, + "Failed to completed status, %d", status[i]); + + /* Check do completed status again */ + cpl_ret = rte_dmadev_completed_status(test_dev_id, 0, 1, &last_idx, + status); + RTE_TEST_ASSERT_EQUAL(cpl_ret, 0, "Failed to completed status"); + + /* Check for enqueue with submit again */ + ret = rte_dmadev_copy(test_dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst, + TEST_MEMCPY_SIZE, 
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to enqueue copy, %d", ret);
+	rte_delay_us_sleep(TEST_WAIT_US_VAL);
+	cpl_ret = rte_dmadev_completed_status(test_dev_id, 0, 1, &last_idx,
+					      status);
+	RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed status");
+	RTE_TEST_ASSERT_EQUAL(last_idx, 1, "Last idx should be 1, %u",
+			      last_idx);
+	for (i = 0; i < RTE_DIM(status); i++)
+		RTE_TEST_ASSERT_EQUAL(status[i], 0,
+			"Unexpected status code, %d", status[i]);
+
+	/* Stop the dmadev to return it to a known state */
+	ret = rte_dmadev_stop(test_dev_id);
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret);
+
+	return TEST_SUCCESS;
+}
+
+int
+test_dma_skeleton(uint16_t dev_id)
+{
+	int ret = testsuite_setup(dev_id);
+	if (ret) {
+		SKELDMA_TEST_INFO("testsuite setup failed!");
+		return -1;
+	}
+
+	/* If a testcase exits successfully, it must ensure that the test
+	 * dmadev exists and that the dmadev is in the stopped state.
+	 */
+	SKELDMA_TEST_RUN(test_dmadev_get_dev_id);
+	SKELDMA_TEST_RUN(test_dmadev_is_valid_dev);
+	SKELDMA_TEST_RUN(test_dmadev_count);
+	SKELDMA_TEST_RUN(test_dmadev_info_get);
+	SKELDMA_TEST_RUN(test_dmadev_configure);
+	SKELDMA_TEST_RUN(test_dmadev_vchan_setup);
+	SKELDMA_TEST_RUN(test_dmadev_start_stop);
+	SKELDMA_TEST_RUN(test_dmadev_stats);
+	SKELDMA_TEST_RUN(test_dmadev_completed);
+	SKELDMA_TEST_RUN(test_dmadev_completed_status);
+
+	testsuite_teardown();
+
+	SKELDMA_TEST_INFO("Total tests : %d\n", total);
+	SKELDMA_TEST_INFO("Passed : %d\n", passed);
+	SKELDMA_TEST_INFO("Failed : %d\n", failed);
+
+	if (failed)
+		return -1;
+
+	return 0;
+}
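
[Editor's note] The SKELDMA_TEST_RUN() calls above rely on a helper macro
defined in an earlier, unquoted part of this skeleton test file. Given the
signature of testsuite_run_test(), which takes both a test function pointer
and its printable name, a minimal sketch of what that macro presumably looks
like (an assumption, not the patch's actual text) is:

/* Hypothetical reconstruction of the macro used by test_dma_skeleton():
 * run the test function and pass its stringified name for reporting.
 */
#define SKELDMA_TEST_RUN(test) \
	testsuite_run_test(test, #test)
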
From patchwork Mon Aug 23 03:31:33 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 97189
X-Patchwork-Delegate: thomas@monjalon.net
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C0E4BA0C54; Mon, 23 Aug 2021 05:36:20 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DDABD41161; Mon, 23 Aug 2021 05:35:44 +0200 (CEST)
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 84E16410FE for ; Mon, 23 Aug 2021 05:35:35 +0200 (CEST)
Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.54]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4GtHpn4Tdjzbh9X; Mon, 23 Aug 2021 11:31:45 +0800 (CST)
Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:33 +0800
Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800
From: Chengwen Feng
To: , , , , ,
CC: , , , , , , , , ,
Date: Mon, 23 Aug 2021 11:31:33 +0800
Message-ID: <1629689494-55091-9-git-send-email-fengchengwen@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
MIME-Version: 1.0
X-Originating-IP: [10.67.165.24]
X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10)
X-CFilter-Loop: Reflected
Subject: [dpdk-dev] [PATCH v16 8/9] test: enable dmadev skeleton test
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Skeleton dmadevice test cases are part of the driver layer. This patch
allows the test cases to be executed using the 'dmadev_autotest' command
in the test framework.

Signed-off-by: Chengwen Feng
---
 app/test/meson.build   |  3 +++
 app/test/test_dmadev.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)
 create mode 100644 app/test/test_dmadev.c

diff --git a/app/test/meson.build b/app/test/meson.build
index a761168..881cb4f 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -43,6 +43,7 @@ test_sources = files(
         'test_debug.c',
         'test_distributor.c',
         'test_distributor_perf.c',
+        'test_dmadev.c',
         'test_eal_flags.c',
         'test_eal_fs.c',
         'test_efd.c',
@@ -162,6 +163,7 @@ test_deps = [
         'cmdline',
         'cryptodev',
         'distributor',
+        'dmadev',
         'efd',
         'ethdev',
         'eventdev',
@@ -333,6 +335,7 @@ driver_test_names = [
         'cryptodev_sw_mvsam_autotest',
         'cryptodev_sw_snow3g_autotest',
         'cryptodev_sw_zuc_autotest',
+        'dmadev_autotest',
         'eventdev_selftest_octeontx',
         'eventdev_selftest_sw',
         'rawdev_autotest',
diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
new file mode 100644
index 0000000..90e8faa
--- /dev/null
+++ b/app/test/test_dmadev.c
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 HiSilicon Limited.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_dmadev.h>
+#include <rte_bus_vdev.h>
+
+#include "test.h"
+
+static int
+test_dmadev_selftest_skeleton(void)
+{
+	const char *pmd = "dma_skeleton";
+	int ret;
+
+	printf("\n### Test dmadev infrastructure using skeleton driver\n");
+	rte_vdev_init(pmd, NULL);
+	ret = rte_dmadev_selftest(rte_dmadev_get_dev_id(pmd));
+	rte_vdev_uninit(pmd);
+
+	return ret;
+}
+
+static int
+test_dmadev_selftests(void)
+{
+	const int count = rte_dmadev_count();
+	int ret = 0;
+	int i;
+
+	/* basic sanity on dmadev infrastructure */
+	if (test_dmadev_selftest_skeleton() < 0)
+		return -1;
+
+	/* now run self-test on all dmadevs */
+	if (count > 0)
+		printf("\n### Run selftest on each available dmadev\n");
+	for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {
+		if (rte_dmadevices[i].state != RTE_DMADEV_ATTACHED)
+			continue;
+		int result = rte_dmadev_selftest(i);
+		printf("dmadev %u (%s) selftest: %s\n", i,
+			rte_dmadevices[i].data->dev_name,
"Passed" : "Failed"); + ret |= result; + } + + return ret; +} + +REGISTER_TEST_COMMAND(dmadev_autotest, test_dmadev_selftests); From patchwork Mon Aug 23 03:31:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 97191 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D57C9A0C54; Mon, 23 Aug 2021 05:36:39 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D54CC4116D; Mon, 23 Aug 2021 05:35:49 +0200 (CEST) Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by mails.dpdk.org (Postfix) with ESMTP id C24C64118D for ; Mon, 23 Aug 2021 05:35:48 +0200 (CEST) Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.53]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4GtHph42Vmz8tFs; Mon, 23 Aug 2021 11:31:40 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:33 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Mon, 23 Aug 2021 11:31:34 +0800 Message-ID: <1629689494-55091-10-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1629689494-55091-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v16 9/9] maintainers: add for dmadev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch add myself as dmadev's maintainer and update release notes. Signed-off-by: Chengwen Feng --- MAINTAINERS | 7 +++++++ doc/guides/rel_notes/release_21_11.rst | 6 ++++++ 2 files changed, 13 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 266f5ac..1661428 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -496,6 +496,13 @@ F: drivers/raw/skeleton/ F: app/test/test_rawdev.c F: doc/guides/prog_guide/rawdev.rst +DMA device API - EXPERIMENTAL +M: Chengwen Feng +F: lib/dmadev/ +F: drivers/dma/skeleton/ +F: app/test/test_dmadev.c +F: doc/guides/prog_guide/dmadev.rst + Memory Pool Drivers ------------------- diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index d707a55..0d3c38f 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -55,6 +55,12 @@ New Features Also, make sure to start the actual text at the margin. 
From patchwork Mon Aug 23 03:31:34 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 97191
X-Patchwork-Delegate: thomas@monjalon.net
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D57C9A0C54; Mon, 23 Aug 2021 05:36:39 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D54CC4116D; Mon, 23 Aug 2021 05:35:49 +0200 (CEST)
Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by mails.dpdk.org (Postfix) with ESMTP id C24C64118D for ; Mon, 23 Aug 2021 05:35:48 +0200 (CEST)
Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.53]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4GtHph42Vmz8tFs; Mon, 23 Aug 2021 11:31:40 +0800 (CST)
Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:33 +0800
Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Mon, 23 Aug 2021 11:35:32 +0800
From: Chengwen Feng
To: , , , , ,
CC: , , , , , , , , ,
Date: Mon, 23 Aug 2021 11:31:34 +0800
Message-ID: <1629689494-55091-10-git-send-email-fengchengwen@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1629689494-55091-1-git-send-email-fengchengwen@huawei.com>
MIME-Version: 1.0
X-Originating-IP: [10.67.165.24]
X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml500024.china.huawei.com (7.185.36.10)
X-CFilter-Loop: Reflected
Subject: [dpdk-dev] [PATCH v16 9/9] maintainers: add for dmadev
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

This patch adds myself as maintainer of dmadev and updates the
release notes.

Signed-off-by: Chengwen Feng
---
 MAINTAINERS                            | 7 +++++++
 doc/guides/rel_notes/release_21_11.rst | 6 ++++++
 2 files changed, 13 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac..1661428 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -496,6 +496,13 @@ F: drivers/raw/skeleton/
 F: app/test/test_rawdev.c
 F: doc/guides/prog_guide/rawdev.rst
 
+DMA device API - EXPERIMENTAL
+M: Chengwen Feng
+F: lib/dmadev/
+F: drivers/dma/skeleton/
+F: app/test/test_dmadev.c
+F: doc/guides/prog_guide/dmadev.rst
+
 Memory Pool Drivers
 -------------------
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a55..0d3c38f 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added dmadev library support.**
+
+  The dmadev library provides a DMA device framework for management and
+  provisioning of hardware and software DMA poll mode drivers, defining
+  generic APIs which support a number of different DMA operations.
+
 
 Removed Items
 -------------
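
[Editor's note] The selftests in this series walk through the canonical
dmadev usage pattern: configure, set up a vchan, start, copy with the submit
flag, poll for completion, stop. The sketch below condenses that pattern into
one function, built only from the APIs shown in these patches. It is
illustrative, not code from the series; it assumes an IOVA-as-VA environment
(so virtual addresses can be cast to rte_iova_t, exactly as the skeleton
selftest does) and a device exposing at least one mem-to-mem-capable vchan.

#include <stdbool.h>
#include <string.h>

#include <rte_dmadev.h>

/* Editor's sketch (not from the patch series): perform a single
 * mem-to-mem copy on vchan 0 of 'dev_id' and verify the result.
 */
static int
dma_copy_once(uint16_t dev_id, char *src, char *dst, uint32_t len)
{
	struct rte_dmadev_conf dev_conf = { .nb_vchans = 1 };
	struct rte_dmadev_vchan_conf vconf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
	};
	struct rte_dmadev_info info;
	uint16_t last_idx;
	bool has_error;

	if (rte_dmadev_info_get(dev_id, &info) != 0 ||
	    rte_dmadev_configure(dev_id, &dev_conf) != 0)
		return -1;

	vconf.nb_desc = info.min_desc; /* any value in [min_desc, max_desc] */
	if (rte_dmadev_vchan_setup(dev_id, 0, &vconf) != 0 ||
	    rte_dmadev_start(dev_id) != 0)
		return -1;

	/* Enqueue the copy and ring the doorbell in a single call. */
	if (rte_dmadev_copy(dev_id, 0, (rte_iova_t)src, (rte_iova_t)dst,
			    len, RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;

	/* Busy-poll until the single outstanding operation completes. */
	while (rte_dmadev_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
		;

	rte_dmadev_stop(dev_id);
	return has_error ? -1 : memcmp(src, dst, len);
}

The RTE_DMA_OP_FLAG_SUBMIT flag merges the enqueue and doorbell steps; as
test_dmadev_completed() shows, omitting it lets several operations be
enqueued back to back and then pushed to hardware with a single
rte_dmadev_submit() call.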