From patchwork Fri Jul 2 13:18:11 2021
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 95217
X-Patchwork-Delegate: thomas@monjalon.net
From: Chengwen Feng
Date: Fri, 2 Jul 2021 21:18:11 +0800
Message-ID: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH] dmadev: introduce DMA device library

This patch introduces 'dmadevice', a generic type of DMA device. The APIs of the dmadev library expose generic operations that enable configuration of, and I/O with, DMA devices.
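A minimal sketch of the intended call flow, assembled from the APIs in this patch (illustrative only: the device name, queue sizes and buffers are example values, and error handling is omitted):

	int dev_id = rte_dmadev_get_dev_id("dma0"); /* "dma0" is an example name */
	struct rte_dmadev_conf conf = {
		.addr_type = DMA_ADDRESS_TYPE_VA,
		.nb_hw_queues = 1,
		.max_vqs = 1,
	};
	rte_dmadev_configure(dev_id, &conf);

	struct rte_dmadev_queue_conf qconf = {
		.direction = DMA_MEM_TO_MEM,
		.hw_queue_id = 0,
		.nb_desc = 1024,
	};
	int vq_id = rte_dmadev_queue_setup(dev_id, &qconf);
	rte_dmadev_start(dev_id);

	dma_cookie_t cookie = rte_dmadev_copy(dev_id, vq_id, src, dst, len, 0);
	rte_dmadev_perform(dev_id, vq_id); /* ring the doorbell */

	bool has_error;
	uint16_t n = rte_dmadev_completed(dev_id, vq_id, 8, &cookie, &has_error);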
Signed-off-by: Chengwen Feng --- MAINTAINERS | 4 + config/rte_config.h | 3 + lib/dmadev/meson.build | 6 + lib/dmadev/rte_dmadev.c | 438 +++++++++++++++++++++ lib/dmadev/rte_dmadev.h | 919 +++++++++++++++++++++++++++++++++++++++++++ lib/dmadev/rte_dmadev_core.h | 98 +++++ lib/dmadev/rte_dmadev_pmd.h | 210 ++++++++++ lib/dmadev/version.map | 32 ++ lib/meson.build | 1 + 9 files changed, 1711 insertions(+) create mode 100644 lib/dmadev/meson.build create mode 100644 lib/dmadev/rte_dmadev.c create mode 100644 lib/dmadev/rte_dmadev.h create mode 100644 lib/dmadev/rte_dmadev_core.h create mode 100644 lib/dmadev/rte_dmadev_pmd.h create mode 100644 lib/dmadev/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 4347555..2019783 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -496,6 +496,10 @@ F: drivers/raw/skeleton/ F: app/test/test_rawdev.c F: doc/guides/prog_guide/rawdev.rst +Dma device API +M: Chengwen Feng +F: lib/dmadev/ + Memory Pool Drivers ------------------- diff --git a/config/rte_config.h b/config/rte_config.h index 590903c..331a431 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -81,6 +81,9 @@ /* rawdev defines */ #define RTE_RAWDEV_MAX_DEVS 64 +/* dmadev defines */ +#define RTE_DMADEV_MAX_DEVS 64 + /* ip_fragmentation defines */ #define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4 #undef RTE_LIBRTE_IP_FRAG_TBL_STAT diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build new file mode 100644 index 0000000..c918dae --- /dev/null +++ b/lib/dmadev/meson.build @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2021 HiSilicon Limited. + +sources = files('rte_dmadev.c') +headers = files('rte_dmadev.h', 'rte_dmadev_pmd.h') +indirect_headers += files('rte_dmadev_core.h') diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c new file mode 100644 index 0000000..a94e839 --- /dev/null +++ b/lib/dmadev/rte_dmadev.c @@ -0,0 +1,438 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 HiSilicon Limited. 
+ */ + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "rte_dmadev.h" +#include "rte_dmadev_pmd.h" + +struct rte_dmadev rte_dmadevices[RTE_DMADEV_MAX_DEVS]; + +uint16_t +rte_dmadev_count(void) +{ + uint16_t count = 0; + uint16_t i; + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) { + if (rte_dmadevices[i].attached) + count++; + } + + return count; +} + +int +rte_dmadev_get_dev_id(const char *name) +{ + uint16_t i; + + if (name == NULL) + return -EINVAL; + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) + if ((strcmp(rte_dmadevices[i].name, name) == 0) && + (rte_dmadevices[i].attached == RTE_DMADEV_ATTACHED)) + return i; + + return -ENODEV; +} + +int +rte_dmadev_socket_id(uint16_t dev_id) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + dev = &rte_dmadevices[dev_id]; + + return dev->socket_id; +} + +int +rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info) +{ + struct rte_dmadev *dev; + int diag; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(dev_info, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_info_get, -ENOTSUP); + + memset(dev_info, 0, sizeof(struct rte_dmadev_info)); + diag = (*dev->dev_ops->dev_info_get)(dev, dev_info); + if (diag != 0) + return diag; + + dev_info->device = dev->device; + dev_info->driver_name = dev->driver_name; + dev_info->socket_id = dev->socket_id; + + return 0; +} + +int +rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf) +{ + struct rte_dmadev *dev; + int diag; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(dev_conf, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP); + + if (dev->started) { + RTE_DMADEV_LOG(ERR, + "device %u must be stopped to allow configuration", dev_id); + return -EBUSY; + } + + diag = (*dev->dev_ops->dev_configure)(dev, dev_conf); + if (diag != 0) + RTE_DMADEV_LOG(ERR, "device %u dev_configure failed, ret = %d", + dev_id, diag); + else + dev->attached = 1; + + return diag; +} + +int +rte_dmadev_start(uint16_t dev_id) +{ + struct rte_dmadev *dev; + int diag; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + if (dev->started != 0) { + RTE_DMADEV_LOG(ERR, "device %u already started", dev_id); + return 0; + } + + if (dev->dev_ops->dev_start == NULL) + goto mark_started; + + diag = (*dev->dev_ops->dev_start)(dev); + if (diag != 0) + return diag; + +mark_started: + dev->started = 1; + return 0; +} + +int +rte_dmadev_stop(uint16_t dev_id) +{ + struct rte_dmadev *dev; + int diag; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + if (dev->started == 0) { + RTE_DMADEV_LOG(ERR, "device %u already stopped", dev_id); + return 0; + } + + if (dev->dev_ops->dev_stop == NULL) + goto mark_stopped; + + diag = (*dev->dev_ops->dev_stop)(dev); + if (diag != 0) + return diag; + +mark_stopped: + dev->started = 0; + return 0; +} + +int +rte_dmadev_close(uint16_t dev_id) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP); + + /* Device must be stopped before it can be closed */ + if (dev->started == 1) { + RTE_DMADEV_LOG(ERR, "device %u must be stopped before closing", + dev_id); + return -EBUSY; + } + + return 
(*dev->dev_ops->dev_close)(dev); +} + +int +rte_dmadev_reset(uint16_t dev_id) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP); + + /* Reset is not dependent on state of the device */ + return (*dev->dev_ops->dev_reset)(dev); +} + +int +rte_dmadev_queue_setup(uint16_t dev_id, + const struct rte_dmadev_queue_conf *conf) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(conf, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -ENOTSUP); + + return (*dev->dev_ops->queue_setup)(dev, conf); +} + +int +rte_dmadev_queue_release(uint16_t dev_id, uint16_t vq_id) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP); + + return (*dev->dev_ops->queue_release)(dev, vq_id); +} + +int +rte_dmadev_queue_info_get(uint16_t dev_id, uint16_t vq_id, + struct rte_dmadev_queue_info *info) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(info, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_info_get, -ENOTSUP); + + memset(info, 0, sizeof(struct rte_dmadev_queue_info)); + return (*dev->dev_ops->queue_info_get)(dev, vq_id, info); +} + +int +rte_dmadev_stats_get(uint16_t dev_id, int vq_id, + struct rte_dmadev_stats *stats) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(stats, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP); + + return (*dev->dev_ops->stats_get)(dev, vq_id, stats); +} + +int +rte_dmadev_stats_reset(uint16_t dev_id, int vq_id) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_reset, -ENOTSUP); + + return (*dev->dev_ops->stats_reset)(dev, vq_id); +} + +static int +xstats_get_count(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_get_names, -ENOTSUP); + + return (*dev->dev_ops->xstats_get_names)(dev, NULL, 0); +} + +int +rte_dmadev_xstats_names_get(uint16_t dev_id, + struct rte_dmadev_xstats_name *xstats_names, + uint32_t size) +{ + struct rte_dmadev *dev; + int cnt_expected_entries; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + cnt_expected_entries = xstats_get_count(dev_id); + + if (xstats_names == NULL || cnt_expected_entries < 0 || + (int)size < cnt_expected_entries || size == 0) + return cnt_expected_entries; + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_get_names, -ENOTSUP); + return (*dev->dev_ops->xstats_get_names)(dev, xstats_names, size); +} + +int +rte_dmadev_xstats_get(uint16_t dev_id, const uint32_t ids[], + uint64_t values[], uint32_t n) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(ids, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(values, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_get, -ENOTSUP); + + return (*dev->dev_ops->xstats_get)(dev, ids, values, n); +} + +int +rte_dmadev_xstats_reset(uint16_t dev_id, const uint32_t ids[], uint32_t 
nb_ids) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_reset, -ENOTSUP); + + return (*dev->dev_ops->xstats_reset)(dev, ids, nb_ids); +} + +int +rte_dmadev_selftest(uint16_t dev_id) +{ + struct rte_dmadev *dev; + + RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_dmadevices[dev_id]; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_selftest, -ENOTSUP); + + return (*dev->dev_ops->dev_selftest)(dev_id); +} + +static inline uint16_t +rte_dmadev_find_free_device_index(void) +{ + uint16_t i; + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) { + if (rte_dmadevices[i].attached == RTE_DMADEV_DETACHED) + return i; + } + + return RTE_DMADEV_MAX_DEVS; +} + +struct rte_dmadev * +rte_dmadev_pmd_allocate(const char *name, size_t dev_priv_size, int socket_id) +{ + struct rte_dmadev *dev; + uint16_t dev_id; + + if (rte_dmadev_get_dev_id(name) >= 0) { + RTE_DMADEV_LOG(ERR, + "device with name %s already allocated!", name); + return NULL; + } + + dev_id = rte_dmadev_find_free_device_index(); + if (dev_id == RTE_DMADEV_MAX_DEVS) { + RTE_DMADEV_LOG(ERR, "reached maximum number of DMA devices"); + return NULL; + } + + dev = &rte_dmadevices[dev_id]; + + if (dev_priv_size > 0) { + dev->dev_private = rte_zmalloc_socket("dmadev private", + dev_priv_size, + RTE_CACHE_LINE_SIZE, + socket_id); + if (dev->dev_private == NULL) { + RTE_DMADEV_LOG(ERR, + "unable to allocate memory for dmadev"); + return NULL; + } + } + + dev->dev_id = dev_id; + dev->socket_id = socket_id; + dev->started = 0; + strlcpy(dev->name, name, RTE_DMADEV_NAME_MAX_LEN); + + dev->attached = RTE_DMADEV_ATTACHED; + + return dev; +} + +int +rte_dmadev_pmd_release(struct rte_dmadev *dev) +{ + int ret; + + if (dev == NULL) + return -EINVAL; + + ret = rte_dmadev_close(dev->dev_id); + if (ret != 0) + return ret; + + if (dev->dev_private != NULL) + rte_free(dev->dev_private); + + memset(dev, 0, sizeof(struct rte_dmadev)); + dev->attached = RTE_DMADEV_DETACHED; + + return 0; +} + +RTE_LOG_REGISTER(libdmadev_logtype, lib.dmadev, INFO); diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h new file mode 100644 index 0000000..f74fc6a --- /dev/null +++ b/lib/dmadev/rte_dmadev.h @@ -0,0 +1,919 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 HiSilicon Limited. + */ + +#ifndef _RTE_DMADEV_H_ +#define _RTE_DMADEV_H_ + +/** + * @file rte_dmadev.h + * + * RTE DMA (Direct Memory Access) device APIs. + * + * The generic DMA device diagram: + * + * ------------ ------------ + * | HW-queue | | HW-queue | + * ------------ ------------ + * \ / + * \ / + * \ / + * ---------------- + * |dma-controller| + * ---------------- + * + * The DMA controller could have multiple HW-queues, and each HW-queue could + * have multiple capabilities, e.g. whether it supports fill operations, which + * DMA transfer directions it supports, etc. + * + * The DMA framework is built on the following abstraction model: + * + * ------------ ------------ + * |virt-queue| |virt-queue| + * ------------ ------------ + * \ / + * \ / + * \ / + * ------------ ------------ + * | HW-queue | | HW-queue | + * ------------ ------------ + * \ / + * \ / + * \ / + * ---------- + * | dmadev | + * ---------- + * + * a) DMA operation requests must be submitted to a virt queue; virt queues + * must be created based on HW queues, and the DMA device could have + * multiple HW queues. + * b) The virt queues on the same HW-queue could represent different contexts, + * e.g.
user could create virt-queue-0 on HW-queue-0 for mem-to-mem + * transfer scenario, and create virt-queue-1 on the same HW-queue for + * mem-to-dev transfer scenario. + * NOTE: user could also create multiple virt queues for mem-to-mem transfer + * scenario as long as the corresponding driver supports it. + * + * The control plane APIs include configure/queue_setup/queue_release/start/ + * stop/reset/close. In order to start device work, the call sequence must be + * as follows: + * - rte_dmadev_configure() + * - rte_dmadev_queue_setup() + * - rte_dmadev_start() + * + * The dataplane APIs include two parts: + * a) The first part is the submission of operation requests: + * - rte_dmadev_copy() + * - rte_dmadev_copy_sg() - scatter-gather form of copy + * - rte_dmadev_fill() + * - rte_dmadev_fill_sg() - scatter-gather form of fill + * - rte_dmadev_fence() - add a fence to force ordering between operations + * - rte_dmadev_perform() - issue doorbell to hardware + * These APIs could work with different virt queues which have different + * contexts. + * The first four APIs are used to submit the operation request to the virt + * queue; if the submission is successful, a cookie (of type + * 'dma_cookie_t') is returned, otherwise a negative number is returned. + * b) The second part is to obtain the result of requests: + * - rte_dmadev_completed() + * - return the number of operation requests completed successfully. + * - rte_dmadev_completed_fails() + * - return the number of operation requests which failed to complete. + * + * The misc APIs include info_get/queue_info_get/stats/xstats/selftest, which + * provide information query and self-test capabilities. + * + * Regarding the MT-safety of the dataplane APIs, there are two dimensions: + * a) For one virt queue, the submit/completion APIs could be MT-safe, + * e.g. one thread does the submit operation while another thread does the + * completion operation. + * If the driver supports it, it declares RTE_DMA_DEV_CAPA_MT_VQ. + * If the driver doesn't support it, it's up to the application to guarantee + * MT-safety. + * b) For multiple virt queues on the same HW queue, e.g. one thread operates + * on virt-queue-0 while another thread operates on virt-queue-1. + * If the driver supports it, it declares RTE_DMA_DEV_CAPA_MT_MVQ. + * If the driver doesn't support it, it's up to the application to guarantee + * MT-safety. + */ + +#ifdef __cplusplus extern "C" { #endif + +#include #include #include #include + +/** + * dma_cookie_t - an opaque DMA cookie + * + * If dma_cookie_t is >=0 it's a DMA operation request cookie, if <0 it's an + * error code. + * When using cookies, comply with the following rules: + * a) Cookies for each virtual queue are independent. + * b) For a virt queue, cookies are monotonically incremented; when a cookie + * reaches INT_MAX, it wraps back to zero. + * c) The initial cookie of a virt queue is zero; after the device is stopped + * or reset, the virt queue's cookie is reset to zero. + * Example: + * step-1: start one dmadev + * step-2: enqueue a copy operation, the cookie returned is 0 + * step-3: enqueue a copy operation again, the cookie returned is 1 + * ... + * step-101: stop the dmadev + * step-102: start the dmadev + * step-103: enqueue a copy operation, the cookie returned is 0 + * ... + */ +typedef int32_t dma_cookie_t; + +/** + * dma_scatterlist - holds a scatter DMA operation request + */ +struct dma_scatterlist { + void *src; + void *dst; + uint32_t length; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice.
+ * + * Get the total number of DMA devices that have been successfully + * initialised. + * + * @return + * The total number of usable DMA devices. + */ +__rte_experimental +uint16_t +rte_dmadev_count(void); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Get the device identifier for the named DMA device. + * + * @param name + * DMA device name to select the DMA device identifier. + * + * @return + * Returns DMA device identifier on success. + * - <0: Failure to find named DMA device. + */ +__rte_experimental +int +rte_dmadev_get_dev_id(const char *name); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Return the NUMA socket to which a device is connected. + * + * @param dev_id + * The identifier of the device. + * + * @return + * The NUMA socket id to which the device is connected or + * a default of zero if the socket could not be determined. + * - -EINVAL: dev_id value is out of range. + */ +__rte_experimental +int +rte_dmadev_socket_id(uint16_t dev_id); + +/** + * The capabilities of a DMA device + */ +#define RTE_DMA_DEV_CAPA_M2M (1ull << 0) /**< Support mem-to-mem transfer */ +#define RTE_DMA_DEV_CAPA_M2D (1ull << 1) /**< Support mem-to-dev transfer */ +#define RTE_DMA_DEV_CAPA_D2M (1ull << 2) /**< Support dev-to-mem transfer */ +#define RTE_DMA_DEV_CAPA_D2D (1ull << 3) /**< Support dev-to-dev transfer */ +#define RTE_DMA_DEV_CAPA_COPY (1ull << 4) /**< Support copy ops */ +#define RTE_DMA_DEV_CAPA_FILL (1ull << 5) /**< Support fill ops */ +#define RTE_DMA_DEV_CAPA_SG (1ull << 6) /**< Support scatter-gather ops */ +#define RTE_DMA_DEV_CAPA_FENCE (1ull << 7) /**< Support fence ops */ +#define RTE_DMA_DEV_CAPA_IOVA (1ull << 8) /**< Support IOVA as DMA address */ +#define RTE_DMA_DEV_CAPA_VA (1ull << 9) /**< Support VA as DMA address */ +#define RTE_DMA_DEV_CAPA_MT_VQ (1ull << 10) /**< Support MT-safe access to one virt queue */ +#define RTE_DMA_DEV_CAPA_MT_MVQ (1ull << 11) /**< Support MT-safe access to multiple virt queues */ + +/** + * A structure used to retrieve the contextual information of + * a DMA device + */ +struct rte_dmadev_info { + /** + * Fields filled by the framework + */ + struct rte_device *device; /**< Generic Device information */ + const char *driver_name; /**< Device driver name */ + int socket_id; /**< Socket ID where memory is allocated */ + + /** + * Specification fields filled by driver + */ + uint64_t dev_capa; /**< Device capabilities (RTE_DMA_DEV_CAPA_) */ + uint16_t max_hw_queues; /**< Maximum number of HW queues. */ + uint16_t max_vqs_per_hw_queue; + /**< Maximum number of virt queues to allocate per HW queue */ + uint16_t max_desc; + /**< Maximum allowed number of virt queue descriptors */ + uint16_t min_desc; + /**< Minimum allowed number of virt queue descriptors */ + + /** + * Status fields filled by driver + */ + uint16_t nb_hw_queues; /**< Number of HW queues configured */ + uint16_t nb_vqs; /**< Number of virt queues configured */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve the contextual information of a DMA device. + * + * @param dev_id + * The identifier of the device. + * + * @param[out] dev_info + * A pointer to a structure of type *rte_dmadev_info* to be filled with the + * contextual information of the device. + * @return + * - =0: Success, driver updates the contextual information of the DMA device + * - <0: Error code returned by the driver info get function.
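+ *
+ * A minimal usage sketch (illustrative; dev_id is assumed valid):
+ * @code
+ * struct rte_dmadev_info info;
+ * int ret = rte_dmadev_info_get(dev_id, &info);
+ * if (ret == 0 && (info.dev_capa & RTE_DMA_DEV_CAPA_COPY) != 0)
+ *     printf("dmadev %u supports copy operations\n", dev_id);
+ * @endcode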
+ * + */ +__rte_experimental +int +rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info); + +/** + * dma_address_type + */ +enum dma_address_type { + DMA_ADDRESS_TYPE_IOVA, /**< Use IOVA as dma address */ + DMA_ADDRESS_TYPE_VA, /**< Use VA as dma address */ +}; + +/** + * A structure used to configure a DMA device. + */ +struct rte_dmadev_conf { + enum dma_address_type addr_type; /**< Address type to use */ + uint16_t nb_hw_queues; /**< Number of HW queues to enable */ + uint16_t max_vqs; /**< Maximum number of virt queues to use */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Configure a DMA device. + * + * This function must be invoked first before any other function in the + * API. This function can also be re-invoked when a device is in the + * stopped state. + * + * The caller may use rte_dmadev_info_get() to get the capabilities of the + * resources available for this DMA device. + * + * @param dev_id + * The identifier of the device to configure. + * @param dev_conf + * The DMA device configuration structure encapsulated into rte_dmadev_conf + * object. + * + * @return + * - =0: Success, device configured. + * - <0: Error code returned by the driver configuration function. + */ +__rte_experimental +int +rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Start a DMA device. + * + * The device start step is the last one and consists of setting the DMA + * to start accepting jobs. + * + * @param dev_id + * The identifier of the device. + * + * @return + * - =0: Success, device started. + * - <0: Error code returned by the driver start function. + */ +__rte_experimental +int +rte_dmadev_start(uint16_t dev_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Stop a DMA device. + * + * The device can be restarted with a call to rte_dmadev_start() + * + * @param dev_id + * The identifier of the device. + * + * @return + * - =0: Success, device stopped. + * - <0: Error code returned by the driver stop function. + */ +__rte_experimental +int +rte_dmadev_stop(uint16_t dev_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Close a DMA device. + * + * The device cannot be restarted after this call. + * + * @param dev_id + * The identifier of the device. + * + * @return + * - =0: Successfully closed device + * - <0: Failure to close device + */ +__rte_experimental +int +rte_dmadev_close(uint16_t dev_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Reset a DMA device. + * + * This is different from the rte_dmadev_start->rte_dmadev_stop cycle; it is + * similar to a hard or soft reset. + * + * @param dev_id + * The identifier of the device. + * + * @return + * - =0: Successfully reset device. + * - <0: Failure to reset device. + * - (-ENOTSUP): If the device doesn't support this function. + */ +__rte_experimental +int +rte_dmadev_reset(uint16_t dev_id); + +/** + * dma_transfer_direction + */ +enum dma_transfer_direction { + DMA_MEM_TO_MEM, + DMA_MEM_TO_DEV, + DMA_DEV_TO_MEM, + DMA_DEV_TO_DEV, +}; + +/** + * A structure used to configure a DMA virt queue.
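+ *
+ * A minimal initialization sketch (the values shown are illustrative, not
+ * defaults):
+ * @code
+ * struct rte_dmadev_queue_conf conf = {
+ *     .direction = DMA_MEM_TO_MEM,
+ *     .hw_queue_id = 0,
+ *     .nb_desc = 1024,
+ * };
+ * int vq_id = rte_dmadev_queue_setup(dev_id, &conf);
+ * @endcode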
+ */ +struct rte_dmadev_queue_conf { + enum dma_transfer_direction direction; + /**< Associated transfer direction */ + uint16_t hw_queue_id; /**< The HW queue on which to create virt queue */ + uint16_t nb_desc; /**< Number of descriptors for this virt queue */ + uint64_t dev_flags; /**< Device specific flags */ + void *dev_ctx; /**< Device specific context */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Allocate and set up a virt queue. + * + * @param dev_id + * The identifier of the device. + * @param conf + * The queue configuration structure encapsulated into rte_dmadev_queue_conf + * object. + * + * @return + * - >=0: Success, the return value is the virt queue id. + * - <0: Error code returned by the driver queue setup function. + */ +__rte_experimental +int +rte_dmadev_queue_setup(uint16_t dev_id, + const struct rte_dmadev_queue_conf *conf); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Release a virt queue. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue returned by queue setup. + * + * @return + * - =0: Successfully released the virt queue. + * - <0: Error code returned by the driver queue release function. + */ +__rte_experimental +int +rte_dmadev_queue_release(uint16_t dev_id, uint16_t vq_id); + +/** + * A structure used to retrieve information of a DMA virt queue. + */ +struct rte_dmadev_queue_info { + enum dma_transfer_direction direction; + /**< Associated transfer direction */ + uint16_t hw_queue_id; /**< The HW queue on which to create virt queue */ + uint16_t nb_desc; /**< Number of descriptors for this virt queue */ + uint64_t dev_flags; /**< Device specific flags */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve information of a DMA virt queue. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue returned by queue setup. + * @param[out] info + * The queue info structure encapsulated into rte_dmadev_queue_info object. + * + * @return + * - =0: Successfully retrieved information. + * - <0: Error code returned by the driver queue info get function. + */ +__rte_experimental +int +rte_dmadev_queue_info_get(uint16_t dev_id, uint16_t vq_id, + struct rte_dmadev_queue_info *info); + +#include "rte_dmadev_core.h" + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a copy operation onto the DMA virt queue. + * + * This queues up a copy operation to be performed by hardware, but does not + * trigger hardware to begin that operation. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue. + * @param src + * The address of the source buffer. + * @param dst + * The address of the destination buffer. + * @param length + * The length of the data to be copied. + * @param flags + * Opaque flags for this operation. + * + * @return + * dma_cookie_t: please refer to the corresponding definition. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation.
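+ *
+ * A usage sketch (illustrative; src, dst and len are example variables):
+ * @code
+ * dma_cookie_t cookie = rte_dmadev_copy(dev_id, vq_id, src, dst, len, 0);
+ * if (cookie >= 0)
+ *     rte_dmadev_perform(dev_id, vq_id);
+ * @endcode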
+ */ +__rte_experimental +static inline dma_cookie_t +rte_dmadev_copy(uint16_t dev_id, uint16_t vq_id, void *src, void *dst, + uint32_t length, uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + return (*dev->copy)(dev, vq_id, src, dst, length, flags); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a scatter list copy operation onto the DMA virt queue. + * + * This queues up a scatter list copy operation to be performed by hardware, + * but does not trigger hardware to begin that operation. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue. + * @param sg + * Pointer to the scatterlist. + * @param sg_len + * The number of scatterlist elements. + * @param flags + * Opaque flags for this operation. + * + * @return + * dma_cookie_t: please refer to the corresponding definition. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation. + */ +__rte_experimental +static inline dma_cookie_t +rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vq_id, + const struct dma_scatterlist *sg, + uint32_t sg_len, uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + return (*dev->copy_sg)(dev, vq_id, sg, sg_len, flags); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a fill operation onto the DMA virt queue. + * + * This queues up a fill operation to be performed by hardware, but does not + * trigger hardware to begin that operation. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue. + * @param pattern + * The pattern to populate the destination buffer with. + * @param dst + * The address of the destination buffer. + * @param length + * The length of the destination buffer. + * @param flags + * Opaque flags for this operation. + * + * @return + * dma_cookie_t: please refer to the corresponding definition. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation. + */ +__rte_experimental +static inline dma_cookie_t +rte_dmadev_fill(uint16_t dev_id, uint16_t vq_id, uint64_t pattern, + void *dst, uint32_t length, uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + return (*dev->fill)(dev, vq_id, pattern, dst, length, flags); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a scatter list fill operation onto the DMA virt queue. + * + * This queues up a scatter list fill operation to be performed by hardware, + * but does not trigger hardware to begin that operation. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue. + * @param pattern + * The pattern to populate the destination buffer with. + * @param sg + * Pointer to the scatterlist. + * @param sg_len + * The number of scatterlist elements. + * @param flags + * Opaque flags for this operation. + * + * @return + * dma_cookie_t: please refer to the corresponding definition. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation.
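+ *
+ * A usage sketch (illustrative; buf0/buf1 and the lengths are example
+ * variables, and this assumes only the dst/length fields of the
+ * scatterlist are consulted for a fill):
+ * @code
+ * struct dma_scatterlist sg[2] = {
+ *     { .dst = buf0, .length = len0 },
+ *     { .dst = buf1, .length = len1 },
+ * };
+ * dma_cookie_t cookie = rte_dmadev_fill_sg(dev_id, vq_id, 0x0, sg, 2, 0);
+ * @endcode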
+ */ +__rte_experimental +static inline dma_cookie_t +rte_dmadev_fill_sg(uint16_t dev_id, uint16_t vq_id, uint64_t pattern, + const struct dma_scatterlist *sg, uint32_t sg_len, + uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + return (*dev->fill_sg)(dev, vq_id, pattern, sg, sg_len, flags); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Add a fence to force ordering between operations + * + * This adds a fence to a sequence of operations to enforce ordering, such that + * all operations enqueued before the fence must be completed before operations + * after the fence. + * NOTE: Since this fence may be added as a flag to the last operation enqueued, + * this API may not function correctly when called immediately after an + * "rte_dmadev_perform" call i.e. before any new operations are enqueued. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue. + * + * @return + * - =0: Successfully added the fence. + * - <0: Failure to add fence. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation. + */ +__rte_experimental +static inline int +rte_dmadev_fence(uint16_t dev_id, uint16_t vq_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + return (*dev->fence)(dev, vq_id); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Trigger hardware to begin performing enqueued operations + * + * This API is used to write the "doorbell" to the hardware to trigger it + * to begin the operations previously enqueued by rte_dmadev_copy/fill() + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue. + * + * @return + * - =0: Successfully triggered hardware. + * - <0: Failure to trigger hardware. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation. + */ +__rte_experimental +static inline int +rte_dmadev_perform(uint16_t dev_id, uint16_t vq_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + return (*dev->perform)(dev, vq_id); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Returns the number of operations that have been successfully completed. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue. + * @param nb_cpls + * The maximum number of completed operations that can be processed. + * @param[out] cookie + * The last completed operation's cookie. + * @param[out] has_error + * Indicates if there was a transfer error. + * + * @return + * The number of operations that successfully completed. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation. + */ +__rte_experimental +static inline uint16_t +rte_dmadev_completed(uint16_t dev_id, uint16_t vq_id, const uint16_t nb_cpls, + dma_cookie_t *cookie, bool *has_error) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + *has_error = false; + return (*dev->completed)(dev, vq_id, nb_cpls, cookie, has_error); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Returns the number of operations that failed to complete. + * NOTE: This API should be used when the rte_dmadev_completed has_error + * output was set. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of the virt queue.
+ * @param nb_status + * Indicates the size of status array. + * @param[out] status + * The error code of operations that failed to complete. + * @param[out] cookie + * The last failed operation's cookie. + * + * @return + * The number of operations that failed to complete. + * + * NOTE: The caller must ensure that the input parameter is valid and the + * corresponding device supports the operation. + */ +__rte_experimental +static inline uint16_t +rte_dmadev_completed_fails(uint16_t dev_id, uint16_t vq_id, + const uint16_t nb_status, uint32_t *status, + dma_cookie_t *cookie) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + return (*dev->completed_fails)(dev, vq_id, nb_status, status, cookie); +} + +struct rte_dmadev_stats { + uint64_t enqueue_fail_count; + /**< Count of all operations which failed to enqueue */ + uint64_t enqueued_count; + /**< Count of all operations which were successfully enqueued */ + uint64_t completed_fail_count; + /**< Count of all operations which failed to complete */ + uint64_t completed_count; + /**< Count of all operations which successfully completed */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve basic statistics of one or all DMA virt queues. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of virt queue, -1 means all virt queues. + * @param[out] stats + * The basic statistics structure encapsulated into rte_dmadev_stats + * object. + * + * @return + * - =0: Successfully retrieved stats. + * - <0: Failure to retrieve stats. + */ +__rte_experimental +int +rte_dmadev_stats_get(uint16_t dev_id, int vq_id, + struct rte_dmadev_stats *stats); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Reset basic statistics of one or all DMA virt queues. + * + * @param dev_id + * The identifier of the device. + * @param vq_id + * The identifier of virt queue, -1 means all virt queues. + * + * @return + * - =0: Successfully reset stats. + * - <0: Failure to reset stats. + */ +__rte_experimental +int +rte_dmadev_stats_reset(uint16_t dev_id, int vq_id); + +/** Maximum name length for extended statistics counters */ +#define RTE_DMA_DEV_XSTATS_NAME_SIZE 64 + +/** + * A name-key lookup element for extended statistics. + * + * This structure is used to map between names and ID numbers + * for extended dmadev statistics. + */ +struct rte_dmadev_xstats_name { + char name[RTE_DMA_DEV_XSTATS_NAME_SIZE]; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve names of extended statistics of a DMA device. + * + * @param dev_id + * The identifier of the device. + * @param[out] xstats_names + * Block of memory to insert names into. Must have capacity for at least + * *size* entries. If set to NULL, function returns required capacity. + * @param size + * Capacity of xstats_names (number of names). + * @return + * - positive value lower or equal to size: success. The return value + * is the number of entries filled in the stats table. + * - positive value higher than size: error, the given statistics table + * is too small. The return value corresponds to the size that should + * be given to succeed. The entries in the table are not valid and + * shall not be used by the caller. + * - negative value on error.
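+ *
+ * A typical two-call usage sketch (allocation checks omitted):
+ * @code
+ * int n = rte_dmadev_xstats_names_get(dev_id, NULL, 0);
+ * struct rte_dmadev_xstats_name *names = malloc(n * sizeof(*names));
+ * n = rte_dmadev_xstats_names_get(dev_id, names, n);
+ * @endcode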
+ */ +__rte_experimental +int +rte_dmadev_xstats_names_get(uint16_t dev_id, + struct rte_dmadev_xstats_name *xstats_names, + uint32_t size); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve extended statistics of a DMA device. + * + * @param dev_id + * The identifier of the device. + * @param ids + * The id numbers of the stats to get. The ids can be obtained from the stat + * position in the stat list from rte_dmadev_xstats_names_get(). + * @param[out] values + * The values for each stat requested by ID. + * @param n + * The number of stats requested. + * + * @return + * - positive value: number of stat entries filled into the values array. + * - negative value on error. + */ +__rte_experimental +int +rte_dmadev_xstats_get(uint16_t dev_id, const uint32_t ids[], + uint64_t values[], uint32_t n); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Reset the values of the xstats of the selected component in the device. + * + * @param dev_id + * The identifier of the device. + * @param ids + * Selects specific statistics to be reset. When NULL, all statistics + * will be reset. If non-NULL, must point to array of at least + * *nb_ids* size. + * @param nb_ids + * The number of ids available from the *ids* array. Ignored when ids is NULL. + * + * @return + * - zero: successfully reset the statistics to zero. + * - negative value on error. + */ +__rte_experimental +int +rte_dmadev_xstats_reset(uint16_t dev_id, const uint32_t ids[], uint32_t nb_ids); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Trigger the dmadev self test. + * + * @param dev_id + * The identifier of the device. + * + * @return + * - 0: Selftest successful. + * - -ENOTSUP if the device doesn't support selftest + * - other values < 0 on failure. + */ +__rte_experimental +int +rte_dmadev_selftest(uint16_t dev_id); + +#ifdef __cplusplus } #endif + +#endif /* _RTE_DMADEV_H_ */ diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h new file mode 100644 index 0000000..a3afea2 --- /dev/null +++ b/lib/dmadev/rte_dmadev_core.h @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 HiSilicon Limited. + */ + +#ifndef _RTE_DMADEV_CORE_H_ +#define _RTE_DMADEV_CORE_H_ + +/** + * @file + * + * RTE DMA Device internal header. + * + * This header contains internal data types. But they are still part of the + * public API because they are used by inline public functions. + */ + +struct rte_dmadev; + +typedef dma_cookie_t (*dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vq_id, + void *src, void *dst, + uint32_t length, uint64_t flags); +/**< @internal Function used to enqueue a copy operation. */ + +typedef dma_cookie_t (*dmadev_copy_sg_t)(struct rte_dmadev *dev, uint16_t vq_id, + const struct dma_scatterlist *sg, + uint32_t sg_len, uint64_t flags); +/**< @internal Function used to enqueue a scatter list copy operation. */ + +typedef dma_cookie_t (*dmadev_fill_t)(struct rte_dmadev *dev, uint16_t vq_id, + uint64_t pattern, void *dst, + uint32_t length, uint64_t flags); +/**< @internal Function used to enqueue a fill operation. */ + +typedef dma_cookie_t (*dmadev_fill_sg_t)(struct rte_dmadev *dev, uint16_t vq_id, + uint64_t pattern, const struct dma_scatterlist *sg, + uint32_t sg_len, uint64_t flags); +/**< @internal Function used to enqueue a scatter list fill operation.
*/ + +typedef int (*dmadev_fence_t)(struct rte_dmadev *dev, uint16_t vq_id); +/**< @internal Function used to add a fence to force ordering between operations. */ + +typedef int (*dmadev_perform_t)(struct rte_dmadev *dev, uint16_t vq_id); +/**< @internal Function used to trigger hardware to begin performing. */ + +typedef uint16_t (*dmadev_completed_t)(struct rte_dmadev *dev, uint16_t vq_id, + const uint16_t nb_cpls, + dma_cookie_t *cookie, bool *has_error); +/**< @internal Function used to return number of successfully completed operations */ + +typedef uint16_t (*dmadev_completed_fails_t)(struct rte_dmadev *dev, + uint16_t vq_id, const uint16_t nb_status, + uint32_t *status, dma_cookie_t *cookie); +/**< @internal Function used to return number of failed completed operations */ + +#define RTE_DMADEV_NAME_MAX_LEN 64 /**< Max length of name of DMA PMD */ + +struct rte_dmadev_ops; + +/** + * The data structure associated with each DMA device. + */ +struct rte_dmadev { + /**< Enqueue a copy operation onto the DMA device. */ + dmadev_copy_t copy; + /**< Enqueue a scatter list copy operation onto the DMA device. */ + dmadev_copy_sg_t copy_sg; + /**< Enqueue a fill operation onto the DMA device. */ + dmadev_fill_t fill; + /**< Enqueue a scatter list fill operation onto the DMA device. */ + dmadev_fill_sg_t fill_sg; + /**< Add a fence to force ordering between operations. */ + dmadev_fence_t fence; + /**< Trigger hardware to begin performing enqueued operations. */ + dmadev_perform_t perform; + /**< Returns the number of operations that successfully completed. */ + dmadev_completed_t completed; + /**< Returns the number of operations that failed to complete. */ + dmadev_completed_fails_t completed_fails; + + void *dev_private; /**< PMD-specific private data */ + const struct rte_dmadev_ops *dev_ops; /**< Functions exported by PMD */ + + uint16_t dev_id; /**< Device ID for this instance */ + int socket_id; /**< Socket ID where memory is allocated */ + struct rte_device *device; + /**< Device info supplied during device initialization */ + const char *driver_name; /**< Driver name supplied by probing */ + char name[RTE_DMADEV_NAME_MAX_LEN]; /**< Device name */ + + RTE_STD_C11 + uint8_t attached : 1; /**< Flag indicating the device is attached */ + uint8_t started : 1; /**< Device state: STARTED(1)/STOPPED(0) */ + +} __rte_cache_aligned; + +extern struct rte_dmadev rte_dmadevices[]; + +#endif /* _RTE_DMADEV_CORE_H_ */ diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h new file mode 100644 index 0000000..ef03cf7 --- /dev/null +++ b/lib/dmadev/rte_dmadev_pmd.h @@ -0,0 +1,210 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2021 HiSilicon Limited. + */ + +#ifndef _RTE_DMADEV_PMD_H_ +#define _RTE_DMADEV_PMD_H_ + +/** @file + * RTE DMA PMD APIs + * + * @note + * Driver facing APIs for a DMA device. These are not to be called directly by + * any application. + */ + +#ifdef __cplusplus extern "C" { #endif + +#include + +#include +#include +#include + +#include "rte_dmadev.h" + +extern int libdmadev_logtype; + +#define RTE_DMADEV_LOG(level, fmt, args...)
\ + rte_log(RTE_LOG_ ## level, libdmadev_logtype, "%s(): " fmt "\n", \ + __func__, ##args) + +/* Macros to check for valid device */ +#define RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \ + if (!rte_dmadev_pmd_is_valid_dev((dev_id))) { \ + RTE_DMADEV_LOG(ERR, "Invalid dev_id=%d", dev_id); \ + return retval; \ + } \ +} while (0) + +#define RTE_DMADEV_VALID_DEVID_OR_RET(dev_id) do { \ + if (!rte_dmadev_pmd_is_valid_dev((dev_id))) { \ + RTE_DMADEV_LOG(ERR, "Invalid dev_id=%d", dev_id); \ + return; \ + } \ +} while (0) + +#define RTE_DMADEV_DETACHED 0 +#define RTE_DMADEV_ATTACHED 1 + +/** + * Validate if the DMA device index is a valid attached DMA device. + * + * @param dev_id + * DMA device index. + * + * @return + * - If the device index is valid (1) or not (0). + */ +static inline unsigned +rte_dmadev_pmd_is_valid_dev(uint16_t dev_id) +{ + struct rte_dmadev *dev; + + if (dev_id >= RTE_DMADEV_MAX_DEVS) + return 0; + + dev = &rte_dmadevices[dev_id]; + if (dev->attached != RTE_DMADEV_ATTACHED) + return 0; + else + return 1; +} + +/** + * Definitions of control-plane functions exported by a driver through the + * generic structure of type *rte_dmadev_ops* supplied in the *rte_dmadev* + * structure associated with a device. + */ + +typedef int (*dmadev_info_get_t)(struct rte_dmadev *dev, + struct rte_dmadev_info *dev_info); +/**< @internal Function used to get device information of a device. */ + +typedef int (*dmadev_configure_t)(struct rte_dmadev *dev, + const struct rte_dmadev_conf *dev_conf); +/**< @internal Function used to configure a device. */ + +typedef int (*dmadev_start_t)(struct rte_dmadev *dev); +/**< @internal Function used to start a configured device. */ + +typedef int (*dmadev_stop_t)(struct rte_dmadev *dev); +/**< @internal Function used to stop a configured device. */ + +typedef int (*dmadev_close_t)(struct rte_dmadev *dev); +/**< @internal Function used to close a configured device. */ + +typedef int (*dmadev_reset_t)(struct rte_dmadev *dev); +/**< @internal Function used to reset a configured device. */ + +typedef int (*dmadev_queue_setup_t)(struct rte_dmadev *dev, + const struct rte_dmadev_queue_conf *conf); +/**< @internal Function used to allocate and set up a virt queue. */ + +typedef int (*dmadev_queue_release_t)(struct rte_dmadev *dev, uint16_t vq_id); +/**< @internal Function used to release a virt queue. */ + +typedef int (*dmadev_queue_info_t)(struct rte_dmadev *dev, uint16_t vq_id, + struct rte_dmadev_queue_info *info); +/**< @internal Function used to retrieve information of a virt queue. */ + +typedef int (*dmadev_stats_get_t)(struct rte_dmadev *dev, int vq_id, + struct rte_dmadev_stats *stats); +/**< @internal Function used to retrieve basic statistics. */ + +typedef int (*dmadev_stats_reset_t)(struct rte_dmadev *dev, int vq_id); +/**< @internal Function used to reset basic statistics. */ + +typedef int (*dmadev_xstats_get_names_t)(const struct rte_dmadev *dev, + struct rte_dmadev_xstats_name *xstats_names, + uint32_t size); +/**< @internal Function used to get names of extended stats. */ + +typedef int (*dmadev_xstats_get_t)(const struct rte_dmadev *dev, + const uint32_t ids[], uint64_t values[], uint32_t n); +/**< @internal Function used to retrieve extended stats. */ + +typedef int (*dmadev_xstats_reset_t)(struct rte_dmadev *dev, + const uint32_t ids[], uint32_t nb_ids); +/**< @internal Function used to reset extended stats. */ + +typedef int (*dmadev_selftest_t)(uint16_t dev_id); +/**< @internal Function used to start dmadev selftest. 
*/ + +/** DMA device operations function pointer table */ +struct rte_dmadev_ops { + /**< Get device info. */ + dmadev_info_get_t dev_info_get; + /**< Configure device. */ + dmadev_configure_t dev_configure; + /**< Start device. */ + dmadev_start_t dev_start; + /**< Stop device. */ + dmadev_stop_t dev_stop; + /**< Close device. */ + dmadev_close_t dev_close; + /**< Reset device. */ + dmadev_reset_t dev_reset; + + /**< Allocate and set up a virt queue. */ + dmadev_queue_setup_t queue_setup; + /**< Release a virt queue. */ + dmadev_queue_release_t queue_release; + /**< Retrieve information of a virt queue */ + dmadev_queue_info_t queue_info_get; + + /**< Get basic statistics. */ + dmadev_stats_get_t stats_get; + /**< Reset basic statistics. */ + dmadev_stats_reset_t stats_reset; + /**< Get names of extended stats. */ + dmadev_xstats_get_names_t xstats_get_names; + /**< Get extended statistics. */ + dmadev_xstats_get_t xstats_get; + /**< Reset extended statistics values. */ + dmadev_xstats_reset_t xstats_reset; + + /**< Device selftest function */ + dmadev_selftest_t dev_selftest; +}; + +/** + * Allocates a new dmadev slot for a DMA device and returns the pointer + * to that slot for the driver to use. + * + * @param name + * Unique identifier name for each device + * @param dev_private_size + * Size of private data memory allocated within rte_dmadev object. + * Set to 0 to disable internal memory allocation and allow for + * self-allocation. + * @param socket_id + * Socket to allocate resources on. + * + * @return + * - NULL: Failure to allocate + * - Other: The rte_dmadev structure pointer for the new device + */ +struct rte_dmadev * +rte_dmadev_pmd_allocate(const char *name, size_t dev_private_size, + int socket_id); + +/** + * Release the specified dmadev device. + * + * @param dev + * The *dmadev* pointer is the address of the *rte_dmadev* structure.
+ * + * @return + * - 0 on success, negative on error + */ +int +rte_dmadev_pmd_release(struct rte_dmadev *dev); + +#ifdef __cplusplus } #endif + +#endif /* _RTE_DMADEV_PMD_H_ */ diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map new file mode 100644 index 0000000..383b3ca --- /dev/null +++ b/lib/dmadev/version.map @@ -0,0 +1,32 @@ +EXPERIMENTAL { + global: + + rte_dmadev_count; + rte_dmadev_get_dev_id; + rte_dmadev_socket_id; + rte_dmadev_info_get; + rte_dmadev_configure; + rte_dmadev_start; + rte_dmadev_stop; + rte_dmadev_close; + rte_dmadev_reset; + rte_dmadev_queue_setup; + rte_dmadev_queue_release; + rte_dmadev_queue_info_get; + rte_dmadev_copy; + rte_dmadev_copy_sg; + rte_dmadev_fill; + rte_dmadev_fill_sg; + rte_dmadev_fence; + rte_dmadev_perform; + rte_dmadev_completed; + rte_dmadev_completed_fails; + rte_dmadev_stats_get; + rte_dmadev_stats_reset; + rte_dmadev_xstats_names_get; + rte_dmadev_xstats_get; + rte_dmadev_xstats_reset; + rte_dmadev_selftest; + + local: *; }; diff --git a/lib/meson.build b/lib/meson.build index 1673ca4..68d239f 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -60,6 +60,7 @@ libraries = [ 'bpf', 'graph', 'node', + 'dmadev', ] if is_windows

From patchwork Tue Aug 3 11:29:45 2021
X-Patchwork-Submitter: fengchengwen
X-Patchwork-Id: 96608
X-Patchwork-Delegate: thomas@monjalon.net
From: Chengwen Feng
Date: Tue, 3 Aug 2021 19:29:45 +0800
Message-ID: <1627990189-36531-3-git-send-email-fengchengwen@huawei.com>
In-Reply-To: <1627990189-36531-1-git-send-email-fengchengwen@huawei.com>
Subject: [dpdk-dev] [PATCH v13 2/6] dmadev: introduce DMA device library internal header
This patch introduces the DMA device library internal header, which contains internal data types that are used by the DMA devices in order to expose their ops to the class. Signed-off-by: Chengwen Feng Acked-by: Bruce Richardson Acked-by: Morten Brørup --- lib/dmadev/meson.build | 1 + lib/dmadev/rte_dmadev_core.h | 180 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 181 insertions(+) create mode 100644 lib/dmadev/rte_dmadev_core.h diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build index 6d5bd85..f421ec1 100644 --- a/lib/dmadev/meson.build +++ b/lib/dmadev/meson.build @@ -2,3 +2,4 @@ # Copyright(c) 2021 HiSilicon Limited. headers = files('rte_dmadev.h') +indirect_headers += files('rte_dmadev_core.h') diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h new file mode 100644 index 0000000..599ab15 --- /dev/null +++ b/lib/dmadev/rte_dmadev_core.h @@ -0,0 +1,180 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited. + * Copyright(c) 2021 Intel Corporation. + */ + +#ifndef _RTE_DMADEV_CORE_H_ +#define _RTE_DMADEV_CORE_H_ + +/** + * @file + * + * RTE DMA Device internal header. + * + * This header contains internal data types that are used by the DMA devices + * in order to expose their ops to the class. + * + * Applications should not use these APIs directly. + * + */ + +struct rte_dmadev; + +typedef int (*rte_dmadev_info_get_t)(const struct rte_dmadev *dev, + struct rte_dmadev_info *dev_info, + uint32_t info_sz); +/**< @internal Used to get device information of a device. */ + +typedef int (*rte_dmadev_configure_t)(struct rte_dmadev *dev, + const struct rte_dmadev_conf *dev_conf); +/**< @internal Used to configure a device. */ + +typedef int (*rte_dmadev_start_t)(struct rte_dmadev *dev); +/**< @internal Used to start a configured device. */ + +typedef int (*rte_dmadev_stop_t)(struct rte_dmadev *dev); +/**< @internal Used to stop a configured device. */ + +typedef int (*rte_dmadev_close_t)(struct rte_dmadev *dev); +/**< @internal Used to close a configured device. */ + +typedef int (*rte_dmadev_vchan_setup_t)(struct rte_dmadev *dev, + const struct rte_dmadev_vchan_conf *conf); +/**< @internal Used to allocate and set up a virtual DMA channel. */ + +typedef int (*rte_dmadev_stats_get_t)(const struct rte_dmadev *dev, + uint16_t vchan, struct rte_dmadev_stats *stats, + uint32_t stats_sz); +/**< @internal Used to retrieve basic statistics. */ + +typedef int (*rte_dmadev_stats_reset_t)(struct rte_dmadev *dev, uint16_t vchan); +/**< @internal Used to reset basic statistics. */ + +typedef int (*rte_dmadev_dump_t)(const struct rte_dmadev *dev, FILE *f); +/**< @internal Used to dump internal information. */ + +typedef int (*rte_dmadev_selftest_t)(uint16_t dev_id); +/**< @internal Used to start dmadev selftest. */ + +typedef int (*rte_dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vchan, + rte_iova_t src, rte_iova_t dst, + uint32_t length, uint64_t flags); +/**< @internal Used to enqueue a copy operation. */ + +typedef int (*rte_dmadev_copy_sg_t)(struct rte_dmadev *dev, uint16_t vchan, + const struct rte_dmadev_sge *src, + const struct rte_dmadev_sge *dst, + uint16_t nb_src, uint16_t nb_dst, + uint64_t flags); +/**< @internal Used to enqueue a scatter-gather list copy operation. */ + +typedef int (*rte_dmadev_fill_t)(struct rte_dmadev *dev, uint16_t vchan, + uint64_t pattern, rte_iova_t dst, + uint32_t length, uint64_t flags); +/**< @internal Used to enqueue a fill operation.
/** + * @internal + * The data part, with no function pointers, associated with each DMA device. + * + * This structure is safe to place in shared memory to be common among + * different processes in a multi-process configuration. + */ +struct rte_dmadev_data { + void *dev_private; + /**< PMD-specific private data. + * This is a copy of the 'dev_private' field of 'struct rte_dmadev' in + * the primary process; it is used by the secondary process to obtain + * the dev_private information. + */ + uint16_t dev_id; /**< Device [external] identifier. */ + char dev_name[RTE_DMADEV_NAME_MAX_LEN]; /**< Unique identifier name. */ + struct rte_dmadev_conf dev_conf; /**< DMA device configuration. */ + uint8_t dev_started : 1; /**< Device state: STARTED(1)/STOPPED(0). */ + uint64_t reserved[2]; /**< Reserved for future fields. */ +} __rte_cache_aligned; + +/** + * @internal + * The generic data structure associated with each DMA device. + * + * The dataplane APIs are located at the beginning of the structure, along + * with the pointer to where all the data elements for the particular device + * are stored in shared memory. This split scheme allows the function pointers + * and driver data to be per-process, while the actual configuration data for + * the device is shared. + * The 'dev_private' field is placed in the first cache line to optimize + * performance, because the PMD mainly depends on this field. + */ +struct rte_dmadev { + rte_dmadev_copy_t copy; + rte_dmadev_copy_sg_t copy_sg; + rte_dmadev_fill_t fill; + rte_dmadev_submit_t submit; + rte_dmadev_completed_t completed; + rte_dmadev_completed_status_t completed_status; + void *reserved_ptr; /**< Reserved for future IO function. */ + void *dev_private; + /**< PMD-specific private data. + * + * - In the primary process, after the dmadev is allocated by + * rte_dmadev_pmd_allocate(), the PCI/SoC device probing should + * initialize this field and copy its value to the 'dev_private' + * field of the 'struct rte_dmadev_data' pointed to by the 'data' + * field. + * + * - In a secondary process, the dmadev framework initializes this + * field by copying it from the 'dev_private' field of + * 'struct rte_dmadev_data', which was initialized by the primary + * process. + * + * @note It is the primary process' responsibility to deinitialize this + * field after invoking rte_dmadev_pmd_release() at the PCI/SoC device + * removal stage. + */ + struct rte_dmadev_data *data; /**< Pointer to device data. */ + const struct rte_dmadev_ops *dev_ops; /**< Functions exported by PMD. */ + struct rte_device *device; + /**< Device information supplied during device initialization. */ + enum rte_dmadev_state state; /**< Flag indicating the device state. */ + uint64_t reserved[2]; /**< Reserved for future fields. */ +} __rte_cache_aligned; + +#endif /* _RTE_DMADEV_CORE_H_ */
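To make the per-process/shared split above concrete, here is a minimal sketch of a fast-path callback matching the rte_dmadev_copy_t typedef; the 'skeleton' driver, its private structure and its ring handling are invented for illustration and are not part of this patch (the usual rte_dmadev_core.h and rte_common.h includes are assumed):

struct skeleton_priv {
	uint16_t ring_head; /* software enqueue index, invented for this sketch */
};

static int
skeleton_copy(struct rte_dmadev *dev, uint16_t vchan,
	      rte_iova_t src, rte_iova_t dst,
	      uint32_t length, uint64_t flags)
{
	/* dev_private is valid in both primary and secondary processes:
	 * the framework copies it from the shared rte_dmadev_data when a
	 * secondary process attaches.
	 */
	struct skeleton_priv *priv = dev->dev_private;

	RTE_SET_USED(vchan);
	RTE_SET_USED(src);
	RTE_SET_USED(dst);
	RTE_SET_USED(length);
	RTE_SET_USED(flags);

	/* a real driver would program a hardware descriptor from
	 * src/dst/length here; this sketch only returns the ring index
	 * identifying the enqueued operation.
	 */
	return priv->ring_head++;
}

Because the fast-path pointers live in the per-process struct rte_dmadev rather than in the shared data, each process that performs I/O must install them in its own probe path, e.g. 'dev->copy = skeleton_copy;'.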
From patchwork Tue Aug 3 11:29:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 96607 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C219FA0A0C; Tue, 3 Aug 2021 13:33:44 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 68A42411CA; Tue, 3 Aug 2021 13:33:39 +0200 (CEST) Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by mails.dpdk.org (Postfix) with ESMTP id E597C40E3C for ; Tue, 3 Aug 2021 13:33:36 +0200 (CEST) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.55]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4GfCLP38msz82JZ; Tue, 3 Aug 2021 19:28:45 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Tue, 3 Aug 2021 19:33:34 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Tue, 3 Aug 2021 19:33:34 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Tue, 3 Aug 2021 19:29:46 +0800 Message-ID: <1627990189-36531-4-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1627990189-36531-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1627990189-36531-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v13 3/6] dmadev: introduce DMA device library PMD header X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch introduces the DMA device library PMD header, which provides the driver-facing APIs for a DMA device.
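As a hedged sketch of the intended lifecycle, a driver would pair the allocate/release calls declared below roughly as follows; the 'skeleton_*' names (reusing the hypothetical skeleton_ops table and skeleton_copy callback from the earlier sketches) are invented, and the probe/remove signatures are simplified bus-agnostic stand-ins rather than the real PCI/SoC bus prototypes:

#include <errno.h>
#include <rte_dmadev_pmd.h>

static int
skeleton_probe(struct rte_device *rte_dev)
{
	struct rte_dmadev *dev;

	dev = rte_dmadev_pmd_allocate(rte_dev->name);
	if (dev == NULL)
		return -ENOMEM;

	dev->device = rte_dev;
	dev->dev_ops = &skeleton_ops; /* control-path callbacks */
	dev->copy = skeleton_copy;    /* fast-path callbacks */
	/* a real driver would also allocate its private data and set
	 * dev->dev_private (and, in the primary process, mirror it into
	 * dev->data->dev_private) before returning.
	 */
	return 0;
}

static int
skeleton_remove(struct rte_device *rte_dev)
{
	struct rte_dmadev *dev;

	dev = rte_dmadev_get_device_by_name(rte_dev->name);
	if (dev == NULL)
		return -ENODEV;
	return rte_dmadev_pmd_release(dev);
}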
Signed-off-by: Chengwen Feng Acked-by: Bruce Richardson Acked-by: Morten Brørup --- lib/dmadev/meson.build | 1 + lib/dmadev/rte_dmadev.h | 2 ++ lib/dmadev/rte_dmadev_pmd.h | 72 +++++++++++++++++++++++++++++++++++++++++++++ lib/dmadev/version.map | 10 +++++++ 4 files changed, 85 insertions(+) create mode 100644 lib/dmadev/rte_dmadev_pmd.h diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build index f421ec1..833baf7 100644 --- a/lib/dmadev/meson.build +++ b/lib/dmadev/meson.build @@ -3,3 +3,4 @@ headers = files('rte_dmadev.h') indirect_headers += files('rte_dmadev_core.h') +driver_sdk_headers += files('rte_dmadev_pmd.h') diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 1090b06..439ad95 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -743,6 +743,8 @@ struct rte_dmadev_sge { uint32_t length; /**< The DMA operation length. */ }; +#include "rte_dmadev_core.h" + /* DMA flags to augment operation preparation. */ #define RTE_DMA_OP_FLAG_FENCE (1ull << 0) /**< DMA fence flag. diff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h new file mode 100644 index 0000000..45141f9 --- /dev/null +++ b/lib/dmadev/rte_dmadev_pmd.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited. + */ + +#ifndef _RTE_DMADEV_PMD_H_ +#define _RTE_DMADEV_PMD_H_ + +/** + * @file + * + * RTE DMA Device PMD APIs + * + * Driver facing APIs for a DMA device. These are not to be called directly by + * any application. + */ + +#include "rte_dmadev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @internal + * Allocates a new dmadev slot for a DMA device and returns the pointer + * to that slot for the driver to use. + * + * @param name + * DMA device name. + * + * @return + * A pointer to the DMA device slot in case of success, + * NULL otherwise. + */ +__rte_internal +struct rte_dmadev * +rte_dmadev_pmd_allocate(const char *name); + +/** + * @internal + * Release the specified dmadev. + * + * @param dev + * Device to be released. + * + * @return + * - 0 on success, negative on error + */ +__rte_internal +int +rte_dmadev_pmd_release(struct rte_dmadev *dev); + +/** + * @internal + * Return the DMA device based on the device name. + * + * @param name + * DMA device name. + * + * @return + * A pointer to the DMA device slot in case of success, + * NULL otherwise.
+ */ +__rte_internal +struct rte_dmadev * +rte_dmadev_get_device_by_name(const char *name); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_DMADEV_PMD_H_ */ diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map index 02fffe3..408b93c 100644 --- a/lib/dmadev/version.map +++ b/lib/dmadev/version.map @@ -23,3 +23,13 @@ EXPERIMENTAL { local: *; }; + +INTERNAL { + global: + + rte_dmadev_get_device_by_name; + rte_dmadev_pmd_allocate; + rte_dmadev_pmd_release; + + local: *; +}; From patchwork Thu Jul 29 13:06:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 96397 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F1CBBA034F; Thu, 29 Jul 2021 15:11:13 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7537140E01; Thu, 29 Jul 2021 15:11:13 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 67E8640687 for ; Thu, 29 Jul 2021 15:11:11 +0200 (CEST) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.55]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4Gb9jz1rk2zYfJ9; Thu, 29 Jul 2021 21:05:11 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Thu, 29 Jul 2021 21:11:08 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Thu, 29 Jul 2021 21:11:08 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Thu, 29 Jul 2021 21:06:57 +0800 Message-ID: <1627564019-10649-5-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1627564019-10649-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1627564019-10649-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v12 4/6] dmadev: introduce DMA device library implementation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch introduces the DMA device library implementation, which includes configuration and I/O with the DMA devices.
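Before the implementation itself, a minimal application-side sketch of the intended call flow on one device and one virtual channel; the structure and function names follow this patch series, while the descriptor count and the zero-initialized port types are assumptions of the example, and the source/destination buffers and their IOVAs are presumed to be prepared by the caller:

#include <rte_dmadev.h>

int
dma_copy_sync(uint16_t dev_id, rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	struct rte_dmadev_conf conf = { .max_vchans = 1 };
	struct rte_dmadev_vchan_conf vconf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024, /* assumed to lie within [min_desc, max_desc] */
		/* src_port/dst_port stay zeroed, assuming RTE_DMADEV_PORT_NONE
		 * is the zero value, as mem2mem transfers require.
		 */
	};
	uint16_t last_idx;
	bool has_error;

	if (rte_dmadev_configure(dev_id, &conf) != 0 ||
	    rte_dmadev_vchan_setup(dev_id, &vconf) != 0 ||
	    rte_dmadev_start(dev_id) != 0)
		return -1;

	if (rte_dmadev_copy(dev_id, 0, src, dst, len, 0) < 0)
		return -1;
	(void)rte_dmadev_submit(dev_id, 0); /* kick the hardware */

	/* busy-poll until the single enqueued operation completes */
	while (rte_dmadev_completed(dev_id, 0, 1, &last_idx, &has_error) == 0)
		;

	return has_error ? -1 : 0;
}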
Signed-off-by: Chengwen Feng Acked-by: Bruce Richardson Acked-by: Morten Brørup --- config/rte_config.h | 3 + lib/dmadev/meson.build | 1 + lib/dmadev/rte_dmadev.c | 563 +++++++++++++++++++++++++++++++++++++++++++ lib/dmadev/rte_dmadev.h | 118 ++++++++- lib/dmadev/rte_dmadev_core.h | 2 + lib/dmadev/version.map | 1 + 6 files changed, 676 insertions(+), 12 deletions(-) create mode 100644 lib/dmadev/rte_dmadev.c diff --git a/config/rte_config.h b/config/rte_config.h index 590903c..331a431 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -81,6 +81,9 @@ /* rawdev defines */ #define RTE_RAWDEV_MAX_DEVS 64 +/* dmadev defines */ +#define RTE_DMADEV_MAX_DEVS 64 + /* ip_fragmentation defines */ #define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4 #undef RTE_LIBRTE_IP_FRAG_TBL_STAT diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build index 833baf7..d2fc85e 100644 --- a/lib/dmadev/meson.build +++ b/lib/dmadev/meson.build @@ -1,6 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2021 HiSilicon Limited. +sources = files('rte_dmadev.c') headers = files('rte_dmadev.h') indirect_headers += files('rte_dmadev_core.h') driver_sdk_headers += files('rte_dmadev_pmd.h') diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c new file mode 100644 index 0000000..b4f5498 --- /dev/null +++ b/lib/dmadev/rte_dmadev.c @@ -0,0 +1,563 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2021 HiSilicon Limited. + * Copyright(c) 2021 Intel Corporation. + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "rte_dmadev.h" +#include "rte_dmadev_pmd.h" + +struct rte_dmadev rte_dmadevices[RTE_DMADEV_MAX_DEVS]; + +static const char *mz_rte_dmadev_data = "rte_dmadev_data"; +/* Shared memory between primary and secondary processes. */ +static struct { + struct rte_dmadev_data data[RTE_DMADEV_MAX_DEVS]; +} *dmadev_shared_data; + +RTE_LOG_REGISTER_DEFAULT(rte_dmadev_logtype, INFO); +#define RTE_DMADEV_LOG(level, ...) 
\ + rte_log(RTE_LOG_ ## level, rte_dmadev_logtype, "" __VA_ARGS__) + +/* Macros to check for valid device id */ +#define RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ + if (!rte_dmadev_is_valid_dev(dev_id)) { \ + RTE_DMADEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + return retval; \ + } \ +} while (0) + +static int +dmadev_check_name(const char *name) +{ + size_t name_len; + + if (name == NULL) { + RTE_DMADEV_LOG(ERR, "Name can't be NULL\n"); + return -EINVAL; + } + + name_len = strnlen(name, RTE_DMADEV_NAME_MAX_LEN); + if (name_len == 0) { + RTE_DMADEV_LOG(ERR, "Zero length DMA device name\n"); + return -EINVAL; + } + if (name_len >= RTE_DMADEV_NAME_MAX_LEN) { + RTE_DMADEV_LOG(ERR, "DMA device name is too long\n"); + return -EINVAL; + } + + return 0; +} + +static uint16_t +dmadev_find_free_dev(void) +{ + uint16_t i; + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) { + if (dmadev_shared_data->data[i].dev_name[0] == '\0') + return i; + } + + return RTE_DMADEV_MAX_DEVS; +} + +static struct rte_dmadev* +dmadev_find(const char *name) +{ + uint16_t i; + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) { + if ((rte_dmadevices[i].state == RTE_DMADEV_ATTACHED) && + (!strcmp(name, rte_dmadevices[i].data->dev_name))) + return &rte_dmadevices[i]; + } + + return NULL; +} + +static int +dmadev_shared_data_prepare(void) +{ + const struct rte_memzone *mz; + + if (dmadev_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate port data and ownership shared memory. */ + mz = rte_memzone_reserve(mz_rte_dmadev_data, + sizeof(*dmadev_shared_data), + rte_socket_id(), 0); + } else + mz = rte_memzone_lookup(mz_rte_dmadev_data); + if (mz == NULL) + return -ENOMEM; + + dmadev_shared_data = mz->addr; + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + memset(dmadev_shared_data->data, 0, + sizeof(dmadev_shared_data->data)); + } + + return 0; +} + +static struct rte_dmadev * +dmadev_allocate(const char *name) +{ + struct rte_dmadev *dev; + uint16_t dev_id; + + dev = dmadev_find(name); + if (dev != NULL) { + RTE_DMADEV_LOG(ERR, "DMA device already allocated\n"); + return NULL; + } + + if (dmadev_shared_data_prepare() != 0) { + RTE_DMADEV_LOG(ERR, "Cannot allocate DMA shared data\n"); + return NULL; + } + + dev_id = dmadev_find_free_dev(); + if (dev_id == RTE_DMADEV_MAX_DEVS) { + RTE_DMADEV_LOG(ERR, "Reached maximum number of DMA devices\n"); + return NULL; + } + + dev = &rte_dmadevices[dev_id]; + dev->data = &dmadev_shared_data->data[dev_id]; + dev->data->dev_id = dev_id; + rte_strscpy(dev->data->dev_name, name, sizeof(dev->data->dev_name)); + + return dev; +} + +static struct rte_dmadev * +dmadev_attach_secondary(const char *name) +{ + struct rte_dmadev *dev; + uint16_t i; + + if (dmadev_shared_data_prepare() != 0) { + RTE_DMADEV_LOG(ERR, "Cannot allocate DMA shared data\n"); + return NULL; + } + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) { + if (!strcmp(dmadev_shared_data->data[i].dev_name, name)) + break; + } + if (i == RTE_DMADEV_MAX_DEVS) { + RTE_DMADEV_LOG(ERR, + "Device %s is not driven by the primary process\n", + name); + return NULL; + } + + dev = &rte_dmadevices[i]; + dev->data = &dmadev_shared_data->data[i]; + dev->dev_private = dev->data->dev_private; + + return dev; +} + +struct rte_dmadev * +rte_dmadev_pmd_allocate(const char *name) +{ + struct rte_dmadev *dev; + + if (dmadev_check_name(name) != 0) + return NULL; + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + dev = dmadev_allocate(name); + else + dev = dmadev_attach_secondary(name); + + if (dev == NULL) + 
return NULL; + dev->state = RTE_DMADEV_ATTACHED; + + return dev; +} + +int +rte_dmadev_pmd_release(struct rte_dmadev *dev) +{ + void *dev_private_tmp; + + if (dev == NULL) + return -EINVAL; + + if (dev->state == RTE_DMADEV_UNUSED) + return 0; + + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + memset(dev->data, 0, sizeof(struct rte_dmadev_data)); + + dev_private_tmp = dev->dev_private; + memset(dev, 0, sizeof(struct rte_dmadev)); + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + dev->dev_private = dev_private_tmp; + dev->state = RTE_DMADEV_UNUSED; + + return 0; +} + +struct rte_dmadev * +rte_dmadev_get_device_by_name(const char *name) +{ + if (dmadev_check_name(name) != 0) + return NULL; + return dmadev_find(name); +} + +int +rte_dmadev_get_dev_id(const char *name) +{ + struct rte_dmadev *dev = rte_dmadev_get_device_by_name(name); + if (dev != NULL) + return dev->data->dev_id; + return -EINVAL; +} + +bool +rte_dmadev_is_valid_dev(uint16_t dev_id) +{ + return (dev_id < RTE_DMADEV_MAX_DEVS) && + rte_dmadevices[dev_id].state == RTE_DMADEV_ATTACHED; +} + +uint16_t +rte_dmadev_count(void) +{ + uint16_t count = 0; + uint16_t i; + + for (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) { + if (rte_dmadevices[i].state == RTE_DMADEV_ATTACHED) + count++; + } + + return count; +} + +int +rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info) +{ + const struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (dev_info == NULL) + return -EINVAL; + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_info_get, -ENOTSUP); + memset(dev_info, 0, sizeof(struct rte_dmadev_info)); + ret = (*dev->dev_ops->dev_info_get)(dev, dev_info, + sizeof(struct rte_dmadev_info)); + if (ret != 0) + return ret; + + dev_info->device = dev->device; + dev_info->nb_vchans = dev->data->dev_conf.max_vchans; + + return 0; +} + +int +rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + struct rte_dmadev_info info; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (dev_conf == NULL) + return -EINVAL; + + if (dev->data->dev_started != 0) { + RTE_DMADEV_LOG(ERR, + "Device %u must be stopped to allow configuration\n", + dev_id); + return -EBUSY; + } + + ret = rte_dmadev_info_get(dev_id, &info); + if (ret != 0) { + RTE_DMADEV_LOG(ERR, "Device %u get device info fail\n", dev_id); + return -EINVAL; + } + if (dev_conf->max_vchans == 0) { + RTE_DMADEV_LOG(ERR, + "Device %u configure zero vchans\n", dev_id); + return -EINVAL; + } + if (dev_conf->max_vchans > info.max_vchans) { + RTE_DMADEV_LOG(ERR, + "Device %u configure too many vchans\n", dev_id); + return -EINVAL; + } + if (dev_conf->enable_silent && + !(info.dev_capa & RTE_DMADEV_CAPA_SILENT)) { + RTE_DMADEV_LOG(ERR, "Device %u don't support silent\n", dev_id); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP); + ret = (*dev->dev_ops->dev_configure)(dev, dev_conf); + if (ret == 0) + memcpy(&dev->data->dev_conf, dev_conf, sizeof(*dev_conf)); + + return ret; +} + +int +rte_dmadev_start(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + if (dev->data->dev_started != 0) { + RTE_DMADEV_LOG(WARNING, "Device %u already started\n", dev_id); + return 0; + } + + if (dev->dev_ops->dev_start == NULL) + goto mark_started; + + ret = (*dev->dev_ops->dev_start)(dev); + if (ret != 0) + return ret; + 
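+	/* Reached directly when the driver provides no dev_start callback,
+	 * or by falling through after a successful driver start; in either
+	 * case only the started flag remains to be set.
+	 */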
+mark_started: + dev->data->dev_started = 1; + return 0; +} + +int +rte_dmadev_stop(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + if (dev->data->dev_started == 0) { + RTE_DMADEV_LOG(WARNING, "Device %u already stopped\n", dev_id); + return 0; + } + + if (dev->dev_ops->dev_stop == NULL) + goto mark_stopped; + + ret = (*dev->dev_ops->dev_stop)(dev); + if (ret != 0) + return ret; + +mark_stopped: + dev->data->dev_started = 0; + return 0; +} + +int +rte_dmadev_close(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + + /* Device must be stopped before it can be closed */ + if (dev->data->dev_started == 1) { + RTE_DMADEV_LOG(ERR, + "Device %u must be stopped before closing\n", dev_id); + return -EBUSY; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP); + return (*dev->dev_ops->dev_close)(dev); +} + +int +rte_dmadev_vchan_setup(uint16_t dev_id, + const struct rte_dmadev_vchan_conf *conf) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + struct rte_dmadev_info info; + bool src_is_dev, dst_is_dev; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (conf == NULL) + return -EINVAL; + + if (dev->data->dev_started != 0) { + RTE_DMADEV_LOG(ERR, + "Device %u must be stopped to allow configuration\n", + dev_id); + return -EBUSY; + } + + ret = rte_dmadev_info_get(dev_id, &info); + if (ret != 0) { + RTE_DMADEV_LOG(ERR, "Device %u get device info fail\n", dev_id); + return -EINVAL; + } + if (conf->direction != RTE_DMA_DIR_MEM_TO_MEM && + conf->direction != RTE_DMA_DIR_MEM_TO_DEV && + conf->direction != RTE_DMA_DIR_DEV_TO_MEM && + conf->direction != RTE_DMA_DIR_DEV_TO_DEV) { + RTE_DMADEV_LOG(ERR, "Device %u direction invalid!\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_MEM_TO_MEM && + !(info.dev_capa & RTE_DMADEV_CAPA_MEM_TO_MEM)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support mem2mem transfer\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_MEM_TO_DEV && + !(info.dev_capa & RTE_DMADEV_CAPA_MEM_TO_DEV)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support mem2dev transfer\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_DEV_TO_MEM && + !(info.dev_capa & RTE_DMADEV_CAPA_DEV_TO_MEM)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support dev2mem transfer\n", dev_id); + return -EINVAL; + } + if (conf->direction == RTE_DMA_DIR_DEV_TO_DEV && + !(info.dev_capa & RTE_DMADEV_CAPA_DEV_TO_DEV)) { + RTE_DMADEV_LOG(ERR, + "Device %u don't support dev2dev transfer\n", dev_id); + return -EINVAL; + } + if (conf->nb_desc < info.min_desc || conf->nb_desc > info.max_desc) { + RTE_DMADEV_LOG(ERR, + "Device %u number of descriptors invalid\n", dev_id); + return -EINVAL; + } + src_is_dev = conf->direction == RTE_DMA_DIR_DEV_TO_MEM || + conf->direction == RTE_DMA_DIR_DEV_TO_DEV; + if ((conf->src_port.port_type == RTE_DMADEV_PORT_NONE && src_is_dev) || + (conf->src_port.port_type != RTE_DMADEV_PORT_NONE && !src_is_dev)) { + RTE_DMADEV_LOG(ERR, + "Device %u source port type invalid\n", dev_id); + return -EINVAL; + } + dst_is_dev = conf->direction == RTE_DMA_DIR_MEM_TO_DEV || + conf->direction == RTE_DMA_DIR_DEV_TO_DEV; + if ((conf->dst_port.port_type == RTE_DMADEV_PORT_NONE && dst_is_dev) || + (conf->dst_port.port_type != RTE_DMADEV_PORT_NONE && !dst_is_dev)) { + RTE_DMADEV_LOG(ERR, + "Device %u destination port type invalid\n", dev_id); + 
return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vchan_setup, -ENOTSUP); + return (*dev->dev_ops->vchan_setup)(dev, conf); +} + +int +rte_dmadev_stats_get(uint16_t dev_id, uint16_t vchan, + struct rte_dmadev_stats *stats) +{ + const struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (stats == NULL) + return -EINVAL; + if (vchan >= dev->data->dev_conf.max_vchans && + vchan != RTE_DMADEV_ALL_VCHAN) { + RTE_DMADEV_LOG(ERR, + "Device %u vchan %u out of range\n", dev_id, vchan); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP); + memset(stats, 0, sizeof(struct rte_dmadev_stats)); + return (*dev->dev_ops->stats_get)(dev, vchan, stats, + sizeof(struct rte_dmadev_stats)); +} + +int +rte_dmadev_stats_reset(uint16_t dev_id, uint16_t vchan) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (vchan >= dev->data->dev_conf.max_vchans && + vchan != RTE_DMADEV_ALL_VCHAN) { + RTE_DMADEV_LOG(ERR, + "Device %u vchan %u out of range\n", dev_id, vchan); + return -EINVAL; + } + + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_reset, -ENOTSUP); + return (*dev->dev_ops->stats_reset)(dev, vchan); +} + +int +rte_dmadev_dump(uint16_t dev_id, FILE *f) +{ + const struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + struct rte_dmadev_info info; + int ret; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + if (f == NULL) + return -EINVAL; + + ret = rte_dmadev_info_get(dev_id, &info); + if (ret != 0) { + RTE_DMADEV_LOG(ERR, "Device %u get device info fail\n", dev_id); + return -EINVAL; + } + + fprintf(f, "DMA Dev %u, '%s' [%s]\n", + dev->data->dev_id, + dev->data->dev_name, + dev->data->dev_started ? "started" : "stopped"); + fprintf(f, " dev_capa: 0x%" PRIx64 "\n", info.dev_capa); + fprintf(f, " max_vchans_supported: %u\n", info.max_vchans); + fprintf(f, " max_vchans_configured: %u\n", info.nb_vchans); + fprintf(f, " silent_mode: %s\n", + dev->data->dev_conf.enable_silent ? "on" : "off"); + + if (dev->dev_ops->dev_dump != NULL) + return (*dev->dev_ops->dev_dump)(dev, f); + + return 0; +} + +int +rte_dmadev_selftest(uint16_t dev_id) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + + RTE_DMADEV_VALID_DEV_ID_OR_ERR_RET(dev_id, -EINVAL); + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_selftest, -ENOTSUP); + return (*dev->dev_ops->dev_selftest)(dev_id); +} diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 329e3a3..f732b4c 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -800,9 +800,21 @@ struct rte_dmadev_sge { * - <0: Error code returned by the driver copy function. */ __rte_experimental -int +static inline int rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, - uint32_t length, uint64_t flags); + uint32_t length, uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.max_vchans || length == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy, -ENOTSUP); +#endif + + return (*dev->copy)(dev, vchan, src, dst, length, flags); +} /** * @warning @@ -837,10 +849,23 @@ rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, * - <0: Error code returned by the driver copy scatter-gather list function. 
*/ __rte_experimental -int +static inline int rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, struct rte_dmadev_sge *src, struct rte_dmadev_sge *dst, uint16_t nb_src, uint16_t nb_dst, - uint64_t flags); + uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.max_vchans || + src == NULL || dst == NULL || nb_src == 0 || nb_dst == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->copy_sg, -ENOTSUP); +#endif + + return (*dev->copy_sg)(dev, vchan, src, dst, nb_src, nb_dst, flags); +} /** * @warning @@ -871,9 +896,21 @@ rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, struct rte_dmadev_sge *src, * - <0: Error code returned by the driver fill function. */ __rte_experimental -int +static inline int rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern, - rte_iova_t dst, uint32_t length, uint64_t flags); + rte_iova_t dst, uint32_t length, uint64_t flags) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.max_vchans || length == 0) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP); +#endif + + return (*dev->fill)(dev, vchan, pattern, dst, length, flags); +} /** * @warning @@ -894,8 +931,20 @@ rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern, * - <0: Failure to trigger hardware. */ __rte_experimental -int -rte_dmadev_submit(uint16_t dev_id, uint16_t vchan); +static inline int +rte_dmadev_submit(uint16_t dev_id, uint16_t vchan) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.max_vchans) + return -EINVAL; + RTE_FUNC_PTR_OR_ERR_RET(*dev->submit, -ENOTSUP); +#endif + + return (*dev->submit)(dev, vchan); +} /** * @warning @@ -921,9 +970,37 @@ rte_dmadev_submit(uint16_t dev_id, uint16_t vchan); * must be less than or equal to the value of nb_cpls. */ __rte_experimental -uint16_t +static inline uint16_t rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, - uint16_t *last_idx, bool *has_error); + uint16_t *last_idx, bool *has_error) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + uint16_t idx; + bool err; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.max_vchans || nb_cpls == 0) + return 0; + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed, 0); +#endif + + /* Ensure the pointer values are non-null to simplify drivers. + * In most cases these should be compile time evaluated, since this is + * an inline function. + * - If NULL is explicitly passed as parameter, then compiler knows the + * value is NULL + * - If address of local variable is passed as parameter, then compiler + * can know it's non-NULL. + */ + if (last_idx == NULL) + last_idx = &idx; + if (has_error == NULL) + has_error = &err; + + *has_error = false; + return (*dev->completed)(dev, vchan, nb_cpls, last_idx, has_error); +} /** * @warning @@ -953,10 +1030,27 @@ rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, * status array are also set. 
*/ __rte_experimental -uint16_t +static inline uint16_t rte_dmadev_completed_status(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx, - enum rte_dma_status_code *status); + enum rte_dma_status_code *status) +{ + struct rte_dmadev *dev = &rte_dmadevices[dev_id]; + uint16_t idx; + +#ifdef RTE_DMADEV_DEBUG + if (!rte_dmadev_is_valid_dev(dev_id) || + vchan >= dev->data->dev_conf.max_vchans || + nb_cpls == 0 || status == NULL) + return 0; + RTE_FUNC_PTR_OR_ERR_RET(*dev->completed_status, 0); +#endif + + if (last_idx == NULL) + last_idx = &idx; + + return (*dev->completed_status)(dev, vchan, nb_cpls, last_idx, status); +} #ifdef __cplusplus } diff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h index 599ab15..9272725 100644 --- a/lib/dmadev/rte_dmadev_core.h +++ b/lib/dmadev/rte_dmadev_core.h @@ -177,4 +177,6 @@ struct rte_dmadev { uint64_t reserved[2]; /**< Reserved for future fields. */ } __rte_cache_aligned; +extern struct rte_dmadev rte_dmadevices[]; + #endif /* _RTE_DMADEV_CORE_H_ */ diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map index 408b93c..86c5e75 100644 --- a/lib/dmadev/version.map +++ b/lib/dmadev/version.map @@ -27,6 +27,7 @@ EXPERIMENTAL { INTERNAL { global: + rte_dmadevices; rte_dmadev_get_device_by_name; rte_dmadev_pmd_allocate; rte_dmadev_pmd_release; From patchwork Tue Aug 3 11:29:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 96606 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E6877A0A0C; Tue, 3 Aug 2021 13:33:38 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 75D3F411BE; Tue, 3 Aug 2021 13:33:38 +0200 (CEST) Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [45.249.212.189]) by mails.dpdk.org (Postfix) with ESMTP id EB10C411A7 for ; Tue, 3 Aug 2021 13:33:36 +0200 (CEST) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.55]) by szxga03-in.huawei.com (SkyGuard) with ESMTP id 4GfCLP3W97z82Js; Tue, 3 Aug 2021 19:28:45 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Tue, 3 Aug 2021 19:33:35 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Tue, 3 Aug 2021 19:33:34 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Tue, 3 Aug 2021 19:29:49 +0800 Message-ID: <1627990189-36531-7-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1627990189-36531-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1627990189-36531-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v13 6/6] maintainers: add for dmadev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds Chengwen Feng as dmadev's maintainer. Signed-off-by: Chengwen Feng --- MAINTAINERS | 5 +++++ doc/guides/rel_notes/release_21_08.rst | 6 ++++++ 2 files changed, 11 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 8013ba1..84cfb1a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -496,6 +496,11 @@ F: drivers/raw/skeleton/ F: app/test/test_rawdev.c F: doc/guides/prog_guide/rawdev.rst +DMA device API - EXPERIMENTAL +M: Chengwen Feng +F: lib/dmadev/ +F: doc/guides/prog_guide/dmadev.rst + Memory Pool Drivers ------------------- diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index 16bb9ce..93068a2 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -175,6 +175,12 @@ New Features Updated testpmd application to log errors and warnings to stderr instead of stdout used before. +* **Added dmadev library support.** + + The dmadev library provides a DMA device framework for management and + provisioning of hardware and software DMA poll mode drivers, defining generic + APIs which support a number of different DMA operations. + Removed Items ------------- From patchwork Sat Aug 28 07:30:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 97493 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 889F6A0C5C; Sat, 28 Aug 2021 09:34:09 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1B92E40150; Sat, 28 Aug 2021 09:34:09 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id EFEA0410D8 for ; Sat, 28 Aug 2021 09:34:07 +0200 (CEST) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.57]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4GxSsf2QYyzbjw9; Sat, 28 Aug 2021 15:30:14 +0800 (CST) Received: from dggpeml500024.china.huawei.com (7.185.36.10) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Sat, 28 Aug 2021 15:34:06 +0800 Received: from localhost.localdomain (10.67.165.24) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Sat, 28 Aug 2021 15:34:06 +0800 From: Chengwen Feng To: , , , , , CC: , , , , , , , , , Date: Sat, 28 Aug 2021 15:30:06 +0800 Message-ID: <1630135806-21931-9-git-send-email-fengchengwen@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: <1630135806-21931-1-git-send-email-fengchengwen@huawei.com> References: <1625231891-2963-1-git-send-email-fengchengwen@huawei.com> <1630135806-21931-1-git-send-email-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.24] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To dggpeml500024.china.huawei.com (7.185.36.10) X-CFilter-Loop: Reflected Subject: [dpdk-dev] [PATCH v17 8/8] maintainers: add for dmadev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: ,
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This patch adds myself as dmadev's maintainer and updates the release notes. Signed-off-by: Chengwen Feng --- MAINTAINERS | 7 +++++++ doc/guides/rel_notes/release_21_11.rst | 5 +++++ 2 files changed, 12 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 266f5ac..c057a09 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -496,6 +496,13 @@ F: drivers/raw/skeleton/ F: app/test/test_rawdev.c F: doc/guides/prog_guide/rawdev.rst +DMA device API - EXPERIMENTAL +M: Chengwen Feng +F: lib/dmadev/ +F: drivers/dma/skeleton/ +F: app/test/test_dmadev* +F: doc/guides/prog_guide/dmadev.rst + Memory Pool Drivers ------------------- diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index d707a55..78b9691 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -55,6 +55,11 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added dmadev library support.** + + The dmadev library provides a DMA device framework for management and + provision of hardware and software DMA devices. + Removed Items -------------