Patch Detail
GET /api/patches/95217/?format=api
https://patches.dpdk.org/api/patches/95217/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/patch/1625231891-2963-1-git-send-email-fengchengwen@huawei.com/", "project": { "id": 1, "url": "https://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<1625231891-2963-1-git-send-email-fengchengwen@huawei.com>", "list_archive_url": "https://inbox.dpdk.org/dev/1625231891-2963-1-git-send-email-fengchengwen@huawei.com", "date": "2021-07-02T13:18:11", "name": "dmadev: introduce DMA device library", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "be386caa4ad93feac6c0c85b6b6b30b4ae9fdac8", "submitter": { "id": 2146, "url": "https://patches.dpdk.org/api/people/2146/?format=api", "name": "fengchengwen", "email": "fengchengwen@huawei.com" }, "delegate": { "id": 1, "url": "https://patches.dpdk.org/api/users/1/?format=api", "username": "tmonjalo", "first_name": "Thomas", "last_name": "Monjalon", "email": "thomas@monjalon.net" }, "mbox": "https://patches.dpdk.org/project/dpdk/patch/1625231891-2963-1-git-send-email-fengchengwen@huawei.com/mbox/", "series": [ { "id": 17598, "url": "https://patches.dpdk.org/api/series/17598/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=17598", "date": "2021-07-02T13:18:11", "name": "dmadev: introduce DMA device library", "version": 1, "mbox": "https://patches.dpdk.org/series/17598/mbox/" } ], "comments": "https://patches.dpdk.org/api/patches/95217/comments/", "check": "fail", "checks": "https://patches.dpdk.org/api/patches/95217/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", 
"X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id B81E1A0A0C;\n\tFri, 2 Jul 2021 15:21:57 +0200 (CEST)", "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 6C1B041353;\n\tFri, 2 Jul 2021 15:21:57 +0200 (CEST)", "from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188])\n by mails.dpdk.org (Postfix) with ESMTP id 455E640686\n for <dev@dpdk.org>; Fri, 2 Jul 2021 15:21:54 +0200 (CEST)", "from dggemv703-chm.china.huawei.com (unknown [172.30.72.55])\n by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4GGbJ15bqyzZpp4;\n Fri, 2 Jul 2021 21:18:41 +0800 (CST)", "from dggpeml500024.china.huawei.com (7.185.36.10) by\n dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server\n (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id\n 15.1.2176.2; Fri, 2 Jul 2021 21:21:49 +0800", "from localhost.localdomain (10.67.165.24) by\n dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server\n (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id\n 15.1.2176.2; Fri, 2 Jul 2021 21:21:49 +0800" ], "From": "Chengwen Feng <fengchengwen@huawei.com>", "To": "<thomas@monjalon.net>, <ferruh.yigit@intel.com>,\n <bruce.richardson@intel.com>, <jerinj@marvell.com>, <jerinjacobk@gmail.com>", "CC": "<dev@dpdk.org>, <mb@smartsharesystems.com>, <nipun.gupta@nxp.com>,\n <hemant.agrawal@nxp.com>, <maxime.coquelin@redhat.com>,\n <honnappa.nagarahalli@arm.com>, <david.marchand@redhat.com>,\n <sburla@marvell.com>, <pkapoor@marvell.com>, <konstantin.ananyev@intel.com>,\n <liangma@liangbit.com>", "Date": "Fri, 2 Jul 2021 21:18:11 +0800", "Message-ID": "<1625231891-2963-1-git-send-email-fengchengwen@huawei.com>", "X-Mailer": "git-send-email 2.8.1", "MIME-Version": "1.0", "Content-Type": "text/plain", "X-Originating-IP": "[10.67.165.24]", "X-ClientProxiedBy": 
"dggems706-chm.china.huawei.com (10.3.19.183) To\n dggpeml500024.china.huawei.com (7.185.36.10)", "X-CFilter-Loop": "Reflected", "Subject": "[dpdk-dev] [PATCH] dmadev: introduce DMA device library", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "This patch introduces 'dmadevice' which is a generic type of DMA\ndevice.\n\nThe APIs of dmadev library exposes some generic operations which can\nenable configuration and I/O with the DMA devices.\n\nSigned-off-by: Chengwen Feng <fengchengwen@huawei.com>\n---\n MAINTAINERS | 4 +\n config/rte_config.h | 3 +\n lib/dmadev/meson.build | 6 +\n lib/dmadev/rte_dmadev.c | 438 +++++++++++++++++++++\n lib/dmadev/rte_dmadev.h | 919 +++++++++++++++++++++++++++++++++++++++++++\n lib/dmadev/rte_dmadev_core.h | 98 +++++\n lib/dmadev/rte_dmadev_pmd.h | 210 ++++++++++\n lib/dmadev/version.map | 32 ++\n lib/meson.build | 1 +\n 9 files changed, 1711 insertions(+)\n create mode 100644 lib/dmadev/meson.build\n create mode 100644 lib/dmadev/rte_dmadev.c\n create mode 100644 lib/dmadev/rte_dmadev.h\n create mode 100644 lib/dmadev/rte_dmadev_core.h\n create mode 100644 lib/dmadev/rte_dmadev_pmd.h\n create mode 100644 lib/dmadev/version.map", "diff": "diff --git a/MAINTAINERS b/MAINTAINERS\nindex 4347555..2019783 100644\n--- a/MAINTAINERS\n+++ b/MAINTAINERS\n@@ -496,6 +496,10 @@ F: drivers/raw/skeleton/\n F: app/test/test_rawdev.c\n F: doc/guides/prog_guide/rawdev.rst\n \n+Dma device 
API\n+M: Chengwen Feng <fengchengwen@huawei.com>\n+F: lib/dmadev/\n+\n \n Memory Pool Drivers\n -------------------\ndiff --git a/config/rte_config.h b/config/rte_config.h\nindex 590903c..331a431 100644\n--- a/config/rte_config.h\n+++ b/config/rte_config.h\n@@ -81,6 +81,9 @@\n /* rawdev defines */\n #define RTE_RAWDEV_MAX_DEVS 64\n \n+/* dmadev defines */\n+#define RTE_DMADEV_MAX_DEVS 64\n+\n /* ip_fragmentation defines */\n #define RTE_LIBRTE_IP_FRAG_MAX_FRAG 4\n #undef RTE_LIBRTE_IP_FRAG_TBL_STAT\ndiff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build\nnew file mode 100644\nindex 0000000..c918dae\n--- /dev/null\n+++ b/lib/dmadev/meson.build\n@@ -0,0 +1,6 @@\n+# SPDX-License-Identifier: BSD-3-Clause\n+# Copyright(c) 2021 HiSilicon Limited.\n+\n+sources = files('rte_dmadev.c')\n+headers = files('rte_dmadev.h', 'rte_dmadev_pmd.h')\n+indirect_headers += files('rte_dmadev_core.h')\ndiff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c\nnew file mode 100644\nindex 0000000..a94e839\n--- /dev/null\n+++ b/lib/dmadev/rte_dmadev.c\n@@ -0,0 +1,438 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright 2021 HiSilicon Limited.\n+ */\n+\n+#include <ctype.h>\n+#include <stdlib.h>\n+#include <string.h>\n+#include <stdint.h>\n+\n+#include <rte_log.h>\n+#include <rte_debug.h>\n+#include <rte_dev.h>\n+#include <rte_memory.h>\n+#include <rte_memzone.h>\n+#include <rte_malloc.h>\n+#include <rte_errno.h>\n+#include <rte_string_fns.h>\n+\n+#include \"rte_dmadev.h\"\n+#include \"rte_dmadev_pmd.h\"\n+\n+struct rte_dmadev rte_dmadevices[RTE_DMADEV_MAX_DEVS];\n+\n+uint16_t\n+rte_dmadev_count(void)\n+{\n+\tuint16_t count = 0;\n+\tuint16_t i;\n+\n+\tfor (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {\n+\t\tif (rte_dmadevices[i].attached)\n+\t\t\tcount++;\n+\t}\n+\n+\treturn count;\n+}\n+\n+int\n+rte_dmadev_get_dev_id(const char *name)\n+{\n+\tuint16_t i;\n+\n+\tif (name == NULL)\n+\t\treturn -EINVAL;\n+\n+\tfor (i = 0; i < RTE_DMADEV_MAX_DEVS; i++)\n+\t\tif 
((strcmp(rte_dmadevices[i].name, name) == 0) &&\n+\t\t (rte_dmadevices[i].attached == RTE_DMADEV_ATTACHED))\n+\t\t\treturn i;\n+\n+\treturn -ENODEV;\n+}\n+\n+int\n+rte_dmadev_socket_id(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\treturn dev->socket_id;\n+}\n+\n+int\n+rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info)\n+{\n+\tstruct rte_dmadev *dev;\n+\tint diag;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\tRTE_FUNC_PTR_OR_ERR_RET(dev_info, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_info_get, -ENOTSUP);\n+\n+\tmemset(dev_info, 0, sizeof(struct rte_dmadev_info));\n+\tdiag = (*dev->dev_ops->dev_info_get)(dev, dev_info);\n+\tif (diag != 0)\n+\t\treturn diag;\n+\n+\tdev_info->device = dev->device;\n+\tdev_info->driver_name = dev->driver_name;\n+\tdev_info->socket_id = dev->socket_id;\n+\n+\treturn 0;\n+}\n+\n+int\n+rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf)\n+{\n+\tstruct rte_dmadev *dev;\n+\tint diag;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\tRTE_FUNC_PTR_OR_ERR_RET(dev_conf, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);\n+\n+\tif (dev->started) {\n+\t\tRTE_DMADEV_LOG(ERR,\n+\t\t \"device %u must be stopped to allow configuration\", dev_id);\n+\t\treturn -EBUSY;\n+\t}\n+\n+\tdiag = (*dev->dev_ops->dev_configure)(dev, dev_conf);\n+\tif (diag != 0)\n+\t\tRTE_DMADEV_LOG(ERR, \"device %u dev_configure failed, ret = %d\",\n+\t\t\t dev_id, diag);\n+\telse\n+\t\tdev->attached = 1;\n+\n+\treturn diag;\n+}\n+\n+int\n+rte_dmadev_start(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\tint diag;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\tif (dev->started != 0) {\n+\t\tRTE_DMADEV_LOG(ERR, \"device %u 
already started\", dev_id);\n+\t\treturn 0;\n+\t}\n+\n+\tif (dev->dev_ops->dev_start == NULL)\n+\t\tgoto mark_started;\n+\n+\tdiag = (*dev->dev_ops->dev_start)(dev);\n+\tif (diag != 0)\n+\t\treturn diag;\n+\n+mark_started:\n+\tdev->started = 1;\n+\treturn 0;\n+}\n+\n+int\n+rte_dmadev_stop(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\tint diag;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tif (dev->started == 0) {\n+\t\tRTE_DMADEV_LOG(ERR, \"device %u already stopped\", dev_id);\n+\t\treturn 0;\n+\t}\n+\n+\tif (dev->dev_ops->dev_stop == NULL)\n+\t\tgoto mark_stopped;\n+\n+\tdiag = (*dev->dev_ops->dev_stop)(dev);\n+\tif (diag != 0)\n+\t\treturn diag;\n+\n+mark_stopped:\n+\tdev->started = 0;\n+\treturn 0;\n+}\n+\n+int\n+rte_dmadev_close(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);\n+\n+\t/* Device must be stopped before it can be closed */\n+\tif (dev->started == 1) {\n+\t\tRTE_DMADEV_LOG(ERR, \"device %u must be stopped before closing\",\n+\t\t\t dev_id);\n+\t\treturn -EBUSY;\n+\t}\n+\n+\treturn (*dev->dev_ops->dev_close)(dev);\n+}\n+\n+int\n+rte_dmadev_reset(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);\n+\n+\t/* Reset is not dependent on state of the device */\n+\treturn (*dev->dev_ops->dev_reset)(dev);\n+}\n+\n+int\n+rte_dmadev_queue_setup(uint16_t dev_id,\n+\t\t const struct rte_dmadev_queue_conf *conf)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\tRTE_FUNC_PTR_OR_ERR_RET(conf, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -ENOTSUP);\n+\n+\treturn 
(*dev->dev_ops->queue_setup)(dev, conf);\n+}\n+\n+int\n+rte_dmadev_queue_release(uint16_t dev_id, uint16_t vq_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->queue_release)(dev, vq_id);\n+}\n+\n+int\n+rte_dmadev_queue_info_get(uint16_t dev_id, uint16_t vq_id,\n+\t\t\t struct rte_dmadev_queue_info *info)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\tRTE_FUNC_PTR_OR_ERR_RET(info, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_info_get, -ENOTSUP);\n+\n+\tmemset(info, 0, sizeof(struct rte_dmadev_queue_info));\n+\treturn (*dev->dev_ops->queue_info_get)(dev, vq_id, info);\n+}\n+\n+int\n+rte_dmadev_stats_get(uint16_t dev_id, int vq_id,\n+\t\t struct rte_dmadev_stats *stats)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\tRTE_FUNC_PTR_OR_ERR_RET(stats, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->stats_get)(dev, vq_id, stats);\n+}\n+\n+int\n+rte_dmadev_stats_reset(uint16_t dev_id, int vq_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_reset, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->stats_reset)(dev, vq_id);\n+}\n+\n+static int\n+xstats_get_count(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_get_names, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->xstats_get_names)(dev, NULL, 0);\n+}\n+\n+int\n+rte_dmadev_xstats_names_get(uint16_t dev_id,\n+\t\t\t struct rte_dmadev_xstats_name *xstats_names,\n+\t\t\t uint32_t size)\n+{\n+\tstruct 
rte_dmadev *dev;\n+\tint cnt_expected_entries;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tcnt_expected_entries = xstats_get_count(dev_id);\n+\n+\tif (xstats_names == NULL || cnt_expected_entries < 0 ||\n+\t (int)size < cnt_expected_entries || size == 0)\n+\t\treturn cnt_expected_entries;\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_get_names, -ENOTSUP);\n+\treturn (*dev->dev_ops->xstats_get_names)(dev, xstats_names, size);\n+}\n+\n+int\n+rte_dmadev_xstats_get(uint16_t dev_id, const uint32_t ids[],\n+\t\t uint64_t values[], uint32_t n)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\tRTE_FUNC_PTR_OR_ERR_RET(ids, -EINVAL);\n+\tRTE_FUNC_PTR_OR_ERR_RET(values, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_get, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->xstats_get)(dev, ids, values, n);\n+}\n+\n+int\n+rte_dmadev_xstats_reset(uint16_t dev_id, const uint32_t ids[], uint32_t nb_ids)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->xstats_reset, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->xstats_reset)(dev, ids, nb_ids);\n+}\n+\n+int\n+rte_dmadev_selftest(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tRTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tRTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_selftest, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->dev_selftest)(dev_id);\n+}\n+\n+static inline uint16_t\n+rte_dmadev_find_free_device_index(void)\n+{\n+\tuint16_t i;\n+\n+\tfor (i = 0; i < RTE_DMADEV_MAX_DEVS; i++) {\n+\t\tif (rte_dmadevices[i].attached == RTE_DMADEV_DETACHED)\n+\t\t\treturn i;\n+\t}\n+\n+\treturn RTE_DMADEV_MAX_DEVS;\n+}\n+\n+struct rte_dmadev *\n+rte_dmadev_pmd_allocate(const char *name, size_t dev_priv_size, int 
socket_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\tuint16_t dev_id;\n+\n+\tif (rte_dmadev_get_dev_id(name) >= 0) {\n+\t\tRTE_DMADEV_LOG(ERR,\n+\t\t\t\"device with name %s already allocated!\", name);\n+\t\treturn NULL;\n+\t}\n+\n+\tdev_id = rte_dmadev_find_free_device_index();\n+\tif (dev_id == RTE_DMADEV_MAX_DEVS) {\n+\t\tRTE_DMADEV_LOG(ERR, \"reached maximum number of DMA devices\");\n+\t\treturn NULL;\n+\t}\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\n+\tif (dev_priv_size > 0) {\n+\t\tdev->dev_private = rte_zmalloc_socket(\"dmadev private\",\n+\t\t\t\t dev_priv_size,\n+\t\t\t\t RTE_CACHE_LINE_SIZE,\n+\t\t\t\t socket_id);\n+\t\tif (dev->dev_private == NULL) {\n+\t\t\tRTE_DMADEV_LOG(ERR,\n+\t\t\t\t\"unable to allocate memory for dmadev\");\n+\t\t\treturn NULL;\n+\t\t}\n+\t}\n+\n+\tdev->dev_id = dev_id;\n+\tdev->socket_id = socket_id;\n+\tdev->started = 0;\n+\tstrlcpy(dev->name, name, RTE_DMADEV_NAME_MAX_LEN);\n+\n+\tdev->attached = RTE_DMADEV_ATTACHED;\n+\n+\treturn dev;\n+}\n+\n+int\n+rte_dmadev_pmd_release(struct rte_dmadev *dev)\n+{\n+\tint ret;\n+\n+\tif (dev == NULL)\n+\t\treturn -EINVAL;\n+\n+\tret = rte_dmadev_close(dev->dev_id);\n+\tif (ret != 0)\n+\t\treturn ret;\n+\n+\tif (dev->dev_private != NULL)\n+\t\trte_free(dev->dev_private);\n+\n+\tmemset(dev, 0, sizeof(struct rte_dmadev));\n+\tdev->attached = RTE_DMADEV_DETACHED;\n+\n+\treturn 0;\n+}\n+\n+RTE_LOG_REGISTER(libdmadev_logtype, lib.dmadev, INFO);\ndiff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h\nnew file mode 100644\nindex 0000000..f74fc6a\n--- /dev/null\n+++ b/lib/dmadev/rte_dmadev.h\n@@ -0,0 +1,919 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright 2021 HiSilicon Limited.\n+ */\n+\n+#ifndef _RTE_DMADEV_H_\n+#define _RTE_DMADEV_H_\n+\n+/**\n+ * @file rte_dmadev.h\n+ *\n+ * RTE DMA (Direct Memory Access) device APIs.\n+ *\n+ * The generic DMA device diagram:\n+ *\n+ * ------------ ------------\n+ * | HW-queue | | HW-queue |\n+ * ------------ ------------\n+ * \\ /\n+ * \\ /\n+ * \\ 
/\n+ * ----------------\n+ * |dma-controller|\n+ * ----------------\n+ *\n+ * The DMA could have multiple HW-queues, each HW-queue could have multiple\n+ * capabilities, e.g. whether to support fill operation, supported DMA\n+ * transfter direction and etc.\n+ *\n+ * The DMA framework is built on the following abstraction model:\n+ *\n+ * ------------ ------------\n+ * |virt-queue| |virt-queue|\n+ * ------------ ------------\n+ * \\ /\n+ * \\ /\n+ * \\ /\n+ * ------------ ------------\n+ * | HW-queue | | HW-queue |\n+ * ------------ ------------\n+ * \\ /\n+ * \\ /\n+ * \\ /\n+ * ----------\n+ * | dmadev |\n+ * ----------\n+ *\n+ * a) The DMA operation request must be submitted to the virt queue, virt\n+ * queues must be created based on HW queues, the DMA device could have\n+ * multiple HW queues.\n+ * b) The virt queues on the same HW-queue could represent different contexts,\n+ * e.g. user could create virt-queue-0 on HW-queue-0 for mem-to-mem\n+ * transfer scenario, and create virt-queue-1 on the same HW-queue for\n+ * mem-to-dev transfer scenario.\n+ * NOTE: user could also create multiple virt queues for mem-to-mem transfer\n+ * scenario as long as the corresponding driver supports.\n+ *\n+ * The control plane APIs include configure/queue_setup/queue_release/start/\n+ * stop/reset/close, in order to start device work, the call sequence must be\n+ * as follows:\n+ * - rte_dmadev_configure()\n+ * - rte_dmadev_queue_setup()\n+ * - rte_dmadev_start()\n+ *\n+ * The dataplane APIs include two parts:\n+ * a) The first part is the submission of operation requests:\n+ * - rte_dmadev_copy()\n+ * - rte_dmadev_copy_sg() - scatter-gather form of copy\n+ * - rte_dmadev_fill()\n+ * - rte_dmadev_fill_sg() - scatter-gather form of fill\n+ * - rte_dmadev_fence() - add a fence force ordering between operations\n+ * - rte_dmadev_perform() - issue doorbell to hardware\n+ * These APIs could work with different virt queues which have different\n+ * contexts.\n+ * The first four 
APIs are used to submit the operation request to the virt\n+ * queue, if the submission is successful, a cookie (as type\n+ * 'dma_cookie_t') is returned, otherwise a negative number is returned.\n+ * b) The second part is to obtain the result of requests:\n+ * - rte_dmadev_completed()\n+ * - return the number of operation requests completed successfully.\n+ * - rte_dmadev_completed_fails()\n+ * - return the number of operation requests failed to complete.\n+ *\n+ * The misc APIs include info_get/queue_info_get/stats/xstats/selftest, provide\n+ * information query and self-test capabilities.\n+ *\n+ * About the dataplane APIs MT-safe, there are two dimensions:\n+ * a) For one virt queue, the submit/completion API could be MT-safe,\n+ * e.g. one thread do submit operation, another thread do completion\n+ * operation.\n+ * If driver support it, then declare RTE_DMA_DEV_CAPA_MT_VQ.\n+ * If driver don't support it, it's up to the application to guarantee\n+ * MT-safe.\n+ * b) For multiple virt queues on the same HW queue, e.g. 
one thread do\n+ * operation on virt-queue-0, another thread do operation on virt-queue-1.\n+ * If driver support it, then declare RTE_DMA_DEV_CAPA_MT_MVQ.\n+ * If driver don't support it, it's up to the application to guarantee\n+ * MT-safe.\n+ */\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#include <rte_common.h>\n+#include <rte_memory.h>\n+#include <rte_errno.h>\n+#include <rte_compat.h>\n+\n+/**\n+ * dma_cookie_t - an opaque DMA cookie\n+ *\n+ * If dma_cookie_t is >=0 it's a DMA operation request cookie, <0 it's a error\n+ * code.\n+ * When using cookies, comply with the following rules:\n+ * a) Cookies for each virtual queue are independent.\n+ * b) For a virt queue, the cookie are monotonically incremented, when it reach\n+ * the INT_MAX, it wraps back to zero.\n+ * c) The initial cookie of a virt queue is zero, after the device is stopped or\n+ * reset, the virt queue's cookie needs to be reset to zero.\n+ * Example:\n+ * step-1: start one dmadev\n+ * step-2: enqueue a copy operation, the cookie return is 0\n+ * step-3: enqueue a copy operation again, the cookie return is 1\n+ * ...\n+ * step-101: stop the dmadev\n+ * step-102: start the dmadev\n+ * step-103: enqueue a copy operation, the cookie return is 0\n+ * ...\n+ */\n+typedef int32_t dma_cookie_t;\n+\n+/**\n+ * dma_scatterlist - can hold scatter DMA operation request\n+ */\n+struct dma_scatterlist {\n+\tvoid *src;\n+\tvoid *dst;\n+\tuint32_t length;\n+};\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Get the total number of DMA devices that have been successfully\n+ * initialised.\n+ *\n+ * @return\n+ * The total number of usable DMA devices.\n+ */\n+__rte_experimental\n+uint16_t\n+rte_dmadev_count(void);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Get the device identifier for the named DMA device.\n+ *\n+ * @param name\n+ * DMA device name to select the DMA device identifier.\n+ *\n+ * 
@return\n+ * Returns DMA device identifier on success.\n+ * - <0: Failure to find named DMA device.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_get_dev_id(const char *name);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Return the NUMA socket to which a device is connected.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ *\n+ * @return\n+ * The NUMA socket id to which the device is connected or\n+ * a default of zero if the socket could not be determined.\n+ * - -EINVAL: dev_id value is out of range.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_socket_id(uint16_t dev_id);\n+\n+/**\n+ * The capabilities of a DMA device\n+ */\n+#define RTE_DMA_DEV_CAPA_M2M\t(1ull << 0) /**< Support mem-to-mem transfer */\n+#define RTE_DMA_DEV_CAPA_M2D\t(1ull << 1) /**< Support mem-to-dev transfer */\n+#define RTE_DMA_DEV_CAPA_D2M\t(1ull << 2) /**< Support dev-to-mem transfer */\n+#define RTE_DMA_DEV_CAPA_D2D\t(1ull << 3) /**< Support dev-to-dev transfer */\n+#define RTE_DMA_DEV_CAPA_COPY\t(1ull << 4) /**< Support copy ops */\n+#define RTE_DMA_DEV_CAPA_FILL\t(1ull << 5) /**< Support fill ops */\n+#define RTE_DMA_DEV_CAPA_SG\t(1ull << 6) /**< Support scatter-gather ops */\n+#define RTE_DMA_DEV_CAPA_FENCE\t(1ull << 7) /**< Support fence ops */\n+#define RTE_DMA_DEV_CAPA_IOVA\t(1ull << 8) /**< Support IOVA as DMA address */\n+#define RTE_DMA_DEV_CAPA_VA\t(1ull << 9) /**< Support VA as DMA address */\n+#define RTE_DMA_DEV_CAPA_MT_VQ\t(1ull << 10) /**< Support MT-safe of one virt queue */\n+#define RTE_DMA_DEV_CAPA_MT_MVQ\t(1ull << 11) /**< Support MT-safe of multiple virt queues */\n+\n+/**\n+ * A structure used to retrieve the contextual information of\n+ * an DMA device\n+ */\n+struct rte_dmadev_info {\n+\t/**\n+\t * Fields filled by framewok\n+\t */\n+\tstruct rte_device *device; /**< Generic Device information */\n+\tconst char *driver_name; /**< Device driver name */\n+\tint socket_id; /**< Socket ID where memory is 
allocated */\n+\n+\t/**\n+\t * Specification fields filled by driver\n+\t */\n+\tuint64_t dev_capa; /**< Device capabilities (RTE_DMA_DEV_CAPA_) */\n+\tuint16_t max_hw_queues; /**< Maximum number of HW queues. */\n+\tuint16_t max_vqs_per_hw_queue;\n+\t/**< Maximum number of virt queues to allocate per HW queue */\n+\tuint16_t max_desc;\n+\t/**< Maximum allowed number of virt queue descriptors */\n+\tuint16_t min_desc;\n+\t/**< Minimum allowed number of virt queue descriptors */\n+\n+\t/**\n+\t * Status fields filled by driver\n+\t */\n+\tuint16_t nb_hw_queues; /**< Number of HW queues configured */\n+\tuint16_t nb_vqs; /**< Number of virt queues configured */\n+};\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Retrieve the contextual information of a DMA device.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ *\n+ * @param[out] dev_info\n+ * A pointer to a structure of type *rte_dmadev_info* to be filled with the\n+ * contextual information of the device.\n+ * @return\n+ * - =0: Success, driver updates the contextual information of the DMA device\n+ * - <0: Error code returned by the driver info get function.\n+ *\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info);\n+\n+/**\n+ * dma_address_type\n+ */\n+enum dma_address_type {\n+\tDMA_ADDRESS_TYPE_IOVA, /**< Use IOVA as dma address */\n+\tDMA_ADDRESS_TYPE_VA, /**< Use VA as dma address */\n+};\n+\n+/**\n+ * A structure used to configure a DMA device.\n+ */\n+struct rte_dmadev_conf {\n+\tenum dma_address_type addr_type; /**< Address type to used */\n+\tuint16_t nb_hw_queues; /**< Number of HW-queues enable to use */\n+\tuint16_t max_vqs; /**< Maximum number of virt queues to use */\n+};\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Configure a DMA device.\n+ *\n+ * This function must be invoked first before any other function in the\n+ * API. 
This function can also be re-invoked when a device is in the\n+ * stopped state.\n+ *\n+ * The caller may use rte_dmadev_info_get() to get the capability of each\n+ * resources available for this DMA device.\n+ *\n+ * @param dev_id\n+ * The identifier of the device to configure.\n+ * @param dev_conf\n+ * The DMA device configuration structure encapsulated into rte_dmadev_conf\n+ * object.\n+ *\n+ * @return\n+ * - =0: Success, device configured.\n+ * - <0: Error code returned by the driver configuration function.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Start a DMA device.\n+ *\n+ * The device start step is the last one and consists of setting the DMA\n+ * to start accepting jobs.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ *\n+ * @return\n+ * - =0: Success, device started.\n+ * - <0: Error code returned by the driver start function.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_start(uint16_t dev_id);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Stop a DMA device.\n+ *\n+ * The device can be restarted with a call to rte_dmadev_start()\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ *\n+ * @return\n+ * - =0: Success, device stopped.\n+ * - <0: Error code returned by the driver stop function.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_stop(uint16_t dev_id);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Close a DMA device.\n+ *\n+ * The device cannot be restarted after this call.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ *\n+ * @return\n+ * - =0: Successfully closing device\n+ * - <0: Failure to close device\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_close(uint16_t dev_id);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change 
without prior notice.\n+ *\n+ * Reset a DMA device.\n+ *\n+ * This is different from cycle of rte_dmadev_start->rte_dmadev_stop in the\n+ * sense similar to hard or soft reset.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ *\n+ * @return\n+ * - =0: Successful reset device.\n+ * - <0: Failure to reset device.\n+ * - (-ENOTSUP): If the device doesn't support this function.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_reset(uint16_t dev_id);\n+\n+/**\n+ * dma_transfer_direction\n+ */\n+enum dma_transfer_direction {\n+\tDMA_MEM_TO_MEM,\n+\tDMA_MEM_TO_DEV,\n+\tDMA_DEV_TO_MEM,\n+\tDMA_DEV_TO_DEV,\n+};\n+\n+/**\n+ * A structure used to configure a DMA virt queue.\n+ */\n+struct rte_dmadev_queue_conf {\n+\tenum dma_transfer_direction direction;\n+\t/**< Associated transfer direction */\n+\tuint16_t hw_queue_id; /**< The HW queue on which to create virt queue */\n+\tuint16_t nb_desc; /**< Number of descriptor for this virt queue */\n+\tuint64_t dev_flags; /**< Device specific flags */\n+\tvoid *dev_ctx; /**< Device specific context */\n+};\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Allocate and set up a virt queue.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param conf\n+ * The queue configuration structure encapsulated into rte_dmadev_queue_conf\n+ * object.\n+ *\n+ * @return\n+ * - >=0: Allocate virt queue success, it is virt queue id.\n+ * - <0: Error code returned by the driver queue setup function.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_queue_setup(uint16_t dev_id,\n+\t\t const struct rte_dmadev_queue_conf *conf);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Release a virt queue.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of virt queue which return by queue setup.\n+ *\n+ * @return\n+ * - =0: Successful release the virt queue.\n+ * - <0: Error code returned by the 
driver queue release function.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_queue_release(uint16_t dev_id, uint16_t vq_id);\n+\n+/**\n+ * A structure used to retrieve information of a DMA virt queue.\n+ */\n+struct rte_dmadev_queue_info {\n+\tenum dma_transfer_direction direction;\n+\t/**< Associated transfer direction */\n+\tuint16_t hw_queue_id; /**< The HW queue on which to create virt queue */\n+\tuint16_t nb_desc; /**< Number of descriptor for this virt queue */\n+\tuint64_t dev_flags; /**< Device specific flags */\n+};\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Retrieve information of a DMA virt queue.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of virt queue which return by queue setup.\n+ * @param[out] info\n+ * The queue info structure encapsulated into rte_dmadev_queue_info object.\n+ *\n+ * @return\n+ * - =0: Successful retrieve information.\n+ * - <0: Error code returned by the driver queue release function.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_queue_info_get(uint16_t dev_id, uint16_t vq_id,\n+\t\t\t struct rte_dmadev_queue_info *info);\n+\n+#include \"rte_dmadev_core.h\"\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Enqueue a copy operation onto the DMA virt queue.\n+ *\n+ * This queues up a copy operation to be performed by hardware, but does not\n+ * trigger hardware to begin that operation.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of virt queue.\n+ * @param src\n+ * The address of the source buffer.\n+ * @param dst\n+ * The address of the destination buffer.\n+ * @param length\n+ * The length of the data to be copied.\n+ * @param flags\n+ * An opaque flags for this operation.\n+ *\n+ * @return\n+ * dma_cookie_t: please refer to the corresponding definition.\n+ *\n+ * NOTE: The caller must ensure that the input parameter is valid 
and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline dma_cookie_t\n+rte_dmadev_copy(uint16_t dev_id, uint16_t vq_id, void *src, void *dst,\n+\t\tuint32_t length, uint64_t flags)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\treturn (*dev->copy)(dev, vq_id, src, dst, length, flags);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Enqueue a scatter list copy operation onto the DMA virt queue.\n+ *\n+ * This queues up a scatter list copy operation to be performed by hardware,\n+ * but does not trigger hardware to begin that operation.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue.\n+ * @param sg\n+ * The pointer to the scatterlist.\n+ * @param sg_len\n+ * The number of scatterlist elements.\n+ * @param flags\n+ * Opaque flags for this operation.\n+ *\n+ * @return\n+ * dma_cookie_t: please refer to the corresponding definition.\n+ *\n+ * NOTE: The caller must ensure that the input parameters are valid and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline dma_cookie_t\n+rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vq_id,\n+\t\t const struct dma_scatterlist *sg,\n+\t\t uint32_t sg_len, uint64_t flags)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\treturn (*dev->copy_sg)(dev, vq_id, sg, sg_len, flags);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Enqueue a fill operation onto the DMA virt queue.\n+ *\n+ * This queues up a fill operation to be performed by hardware, but does not\n+ * trigger hardware to begin that operation.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue.\n+ * @param pattern\n+ * The pattern to populate the destination buffer with.\n+ * @param dst\n+ * The address of the destination buffer.\n+ * @param 
length\n+ * The length of the destination buffer.\n+ * @param flags\n+ * Opaque flags for this operation.\n+ *\n+ * @return\n+ * dma_cookie_t: please refer to the corresponding definition.\n+ *\n+ * NOTE: The caller must ensure that the input parameters are valid and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline dma_cookie_t\n+rte_dmadev_fill(uint16_t dev_id, uint16_t vq_id, uint64_t pattern,\n+\t\tvoid *dst, uint32_t length, uint64_t flags)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\treturn (*dev->fill)(dev, vq_id, pattern, dst, length, flags);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Enqueue a scatter list fill operation onto the DMA virt queue.\n+ *\n+ * This queues up a scatter list fill operation to be performed by hardware,\n+ * but does not trigger hardware to begin that operation.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue.\n+ * @param pattern\n+ * The pattern to populate the destination buffer with.\n+ * @param sg\n+ * The pointer to the scatterlist.\n+ * @param sg_len\n+ * The number of scatterlist elements.\n+ * @param flags\n+ * Opaque flags for this operation.\n+ *\n+ * @return\n+ * dma_cookie_t: please refer to the corresponding definition.\n+ *\n+ * NOTE: The caller must ensure that the input parameters are valid and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline dma_cookie_t\n+rte_dmadev_fill_sg(uint16_t dev_id, uint16_t vq_id, uint64_t pattern,\n+\t\t const struct dma_scatterlist *sg, uint32_t sg_len,\n+\t\t uint64_t flags)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\treturn (*dev->fill_sg)(dev, vq_id, pattern, sg, sg_len, flags);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Add a fence to force ordering between operations.\n+ *\n+ * This 
adds a fence to a sequence of operations to enforce ordering, such that\n+ * all operations enqueued before the fence must be completed before operations\n+ * after the fence.\n+ * NOTE: Since this fence may be added as a flag to the last operation enqueued,\n+ * this API may not function correctly when called immediately after an\n+ * \"rte_dmadev_perform\" call i.e. before any new operations are enqueued.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue.\n+ *\n+ * @return\n+ * - =0: Fence successfully added.\n+ * - <0: Failure to add fence.\n+ *\n+ * NOTE: The caller must ensure that the input parameters are valid and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline int\n+rte_dmadev_fence(uint16_t dev_id, uint16_t vq_id)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\treturn (*dev->fence)(dev, vq_id);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Trigger hardware to begin performing enqueued operations.\n+ *\n+ * This API is used to write the \"doorbell\" to the hardware to trigger it\n+ * to begin the operations previously enqueued by rte_dmadev_copy/fill().\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue.\n+ *\n+ * @return\n+ * - =0: Hardware successfully triggered.\n+ * - <0: Failure to trigger hardware.\n+ *\n+ * NOTE: The caller must ensure that the input parameters are valid and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline int\n+rte_dmadev_perform(uint16_t dev_id, uint16_t vq_id)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\treturn (*dev->perform)(dev, vq_id);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Returns the number of operations that have been successfully completed.\n+ *\n+ * @param dev_id\n+ * The 
identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue.\n+ * @param nb_cpls\n+ * The maximum number of completed operations that can be processed.\n+ * @param[out] cookie\n+ * The last completed operation's cookie.\n+ * @param[out] has_error\n+ * Indicates if there are transfer errors.\n+ *\n+ * @return\n+ * The number of operations that successfully completed.\n+ *\n+ * NOTE: The caller must ensure that the input parameters are valid and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline uint16_t\n+rte_dmadev_completed(uint16_t dev_id, uint16_t vq_id, const uint16_t nb_cpls,\n+\t\t dma_cookie_t *cookie, bool *has_error)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\t*has_error = false;\n+\treturn (*dev->completed)(dev, vq_id, nb_cpls, cookie, has_error);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Returns the number of operations that failed to complete.\n+ * NOTE: This API should be called when rte_dmadev_completed has_error is set.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue.\n+ * @param nb_status\n+ * Indicates the size of the status array.\n+ * @param[out] status\n+ * The error code of operations that failed to complete.\n+ * @param[out] cookie\n+ * The last failed operation's cookie.\n+ *\n+ * @return\n+ * The number of operations that failed to complete.\n+ *\n+ * NOTE: The caller must ensure that the input parameters are valid and the\n+ * corresponding device supports the operation.\n+ */\n+__rte_experimental\n+static inline uint16_t\n+rte_dmadev_completed_fails(uint16_t dev_id, uint16_t vq_id,\n+\t\t\t const uint16_t nb_status, uint32_t *status,\n+\t\t\t dma_cookie_t *cookie)\n+{\n+\tstruct rte_dmadev *dev = &rte_dmadevices[dev_id];\n+\treturn (*dev->completed_fails)(dev, vq_id, nb_status, status, cookie);\n+}\n+\n+struct rte_dmadev_stats {\n+\tuint64_t 
enqueue_fail_count;\n+\t/**< Count of all operations which failed to enqueue */\n+\tuint64_t enqueued_count;\n+\t/**< Count of all operations which were successfully enqueued */\n+\tuint64_t completed_fail_count;\n+\t/**< Count of all operations which failed to complete */\n+\tuint64_t completed_count;\n+\t/**< Count of all operations which completed successfully */\n+};\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Retrieve basic statistics of one or all DMA virt queues.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue, -1 means all virt queues.\n+ * @param[out] stats\n+ * The basic statistics structure encapsulated into rte_dmadev_stats\n+ * object.\n+ *\n+ * @return\n+ * - =0: Successfully retrieved stats.\n+ * - <0: Failure to retrieve stats.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_stats_get(uint16_t dev_id, int vq_id,\n+\t\t struct rte_dmadev_stats *stats);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Reset basic statistics of one or all DMA virt queues.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param vq_id\n+ * The identifier of the virt queue, -1 means all virt queues.\n+ *\n+ * @return\n+ * - =0: Successfully reset stats.\n+ * - <0: Failure to reset stats.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_stats_reset(uint16_t dev_id, int vq_id);\n+\n+/** Maximum name length for extended statistics counters */\n+#define RTE_DMA_DEV_XSTATS_NAME_SIZE 64\n+\n+/**\n+ * A name-key lookup element for extended statistics.\n+ *\n+ * This structure is used to map between names and ID numbers\n+ * for extended dmadev statistics.\n+ */\n+struct rte_dmadev_xstats_name {\n+\tchar name[RTE_DMA_DEV_XSTATS_NAME_SIZE];\n+};\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Retrieve names of extended statistics of a DMA device.\n+ *\n+ * @param dev_id\n+ * 
The identifier of the device.\n+ * @param[out] xstats_names\n+ * Block of memory to insert names into. Must be at least size in capacity.\n+ * If set to NULL, function returns required capacity.\n+ * @param size\n+ * Capacity of xstats_names (number of names).\n+ * @return\n+ * - positive value lower or equal to size: success. The return value\n+ * is the number of entries filled in the stats table.\n+ * - positive value higher than size: error, the given statistics table\n+ * is too small. The return value corresponds to the size that should\n+ * be given to succeed. The entries in the table are not valid and\n+ * shall not be used by the caller.\n+ * - negative value on error.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_xstats_names_get(uint16_t dev_id,\n+\t\t\t struct rte_dmadev_xstats_name *xstats_names,\n+\t\t\t uint32_t size);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Retrieve extended statistics of a DMA device.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param ids\n+ * The id numbers of the stats to get. The ids can be obtained from the stat\n+ * position in the stat list from rte_dmadev_get_xstats_names().\n+ * @param[out] values\n+ * The values for each stat requested by ID.\n+ * @param n\n+ * The number of stats requested.\n+ *\n+ * @return\n+ * - positive value: number of stat entries filled into the values array.\n+ * - negative value on error.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_xstats_get(uint16_t dev_id, const uint32_t ids[],\n+\t\t uint64_t values[], uint32_t n);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Reset the values of the xstats of the selected component in the device.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ * @param ids\n+ * Selects specific statistics to be reset. When NULL, all statistics\n+ * will be reset. 
If non-NULL, must point to array of at least\n+ * *nb_ids* size.\n+ * @param nb_ids\n+ * The number of ids available from the *ids* array. Ignored when ids is NULL.\n+ *\n+ * @return\n+ * - zero: successfully reset the statistics to zero.\n+ * - negative value on error.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_xstats_reset(uint16_t dev_id, const uint32_t ids[], uint32_t nb_ids);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Trigger the dmadev self test.\n+ *\n+ * @param dev_id\n+ * The identifier of the device.\n+ *\n+ * @return\n+ * - 0: Selftest successful.\n+ * - -ENOTSUP if the device doesn't support selftest\n+ * - other values < 0 on failure.\n+ */\n+__rte_experimental\n+int\n+rte_dmadev_selftest(uint16_t dev_id);\n+\n+#ifdef __cplusplus\n+}\n+#endif\n+\n+#endif /* _RTE_DMADEV_H_ */\ndiff --git a/lib/dmadev/rte_dmadev_core.h b/lib/dmadev/rte_dmadev_core.h\nnew file mode 100644\nindex 0000000..a3afea2\n--- /dev/null\n+++ b/lib/dmadev/rte_dmadev_core.h\n@@ -0,0 +1,98 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright 2021 HiSilicon Limited.\n+ */\n+\n+#ifndef _RTE_DMADEV_CORE_H_\n+#define _RTE_DMADEV_CORE_H_\n+\n+/**\n+ * @file\n+ *\n+ * RTE DMA Device internal header.\n+ *\n+ * This header contains internal data types. But they are still part of the\n+ * public API because they are used by inline public functions.\n+ */\n+\n+struct rte_dmadev;\n+\n+typedef dma_cookie_t (*dmadev_copy_t)(struct rte_dmadev *dev, uint16_t vq_id,\n+\t\t\t\t void *src, void *dst,\n+\t\t\t\t uint32_t length, uint64_t flags);\n+/**< @internal Function used to enqueue a copy operation. */\n+\n+typedef dma_cookie_t (*dmadev_copy_sg_t)(struct rte_dmadev *dev, uint16_t vq_id,\n+\t\t\t\t\t const struct dma_scatterlist *sg,\n+\t\t\t\t\t uint32_t sg_len, uint64_t flags);\n+/**< @internal Function used to enqueue a scatter list copy operation. 
*/\n+\n+typedef dma_cookie_t (*dmadev_fill_t)(struct rte_dmadev *dev, uint16_t vq_id,\n+\t\t\t\t uint64_t pattern, void *dst,\n+\t\t\t\t uint32_t length, uint64_t flags);\n+/**< @internal Function used to enqueue a fill operation. */\n+\n+typedef dma_cookie_t (*dmadev_fill_sg_t)(struct rte_dmadev *dev, uint16_t vq_id,\n+\t\t\tuint64_t pattern, const struct dma_scatterlist *sg,\n+\t\t\tuint32_t sg_len, uint64_t flags);\n+/**< @internal Function used to enqueue a scatter list fill operation. */\n+\n+typedef int (*dmadev_fence_t)(struct rte_dmadev *dev, uint16_t vq_id);\n+/**< @internal Function used to add a fence ordering between operations. */\n+\n+typedef int (*dmadev_perform_t)(struct rte_dmadev *dev, uint16_t vq_id);\n+/**< @internal Function used to trigger hardware to begin performing. */\n+\n+typedef uint16_t (*dmadev_completed_t)(struct rte_dmadev *dev, uint16_t vq_id,\n+\t\t\t\t const uint16_t nb_cpls,\n+\t\t\t\t dma_cookie_t *cookie, bool *has_error);\n+/**< @internal Function used to return number of successfully completed operations. */\n+\n+typedef uint16_t (*dmadev_completed_fails_t)(struct rte_dmadev *dev,\n+\t\t\tuint16_t vq_id, const uint16_t nb_status,\n+\t\t\tuint32_t *status, dma_cookie_t *cookie);\n+/**< @internal Function used to return number of operations that failed to complete. */\n+\n+#define RTE_DMADEV_NAME_MAX_LEN\t64 /**< Max length of name of DMA PMD */\n+\n+struct rte_dmadev_ops;\n+\n+/**\n+ * The data structure associated with each DMA device.\n+ */\n+struct rte_dmadev {\n+\t/**< Enqueue a copy operation onto the DMA device. */\n+\tdmadev_copy_t copy;\n+\t/**< Enqueue a scatter list copy operation onto the DMA device. */\n+\tdmadev_copy_sg_t copy_sg;\n+\t/**< Enqueue a fill operation onto the DMA device. */\n+\tdmadev_fill_t fill;\n+\t/**< Enqueue a scatter list fill operation onto the DMA device. */\n+\tdmadev_fill_sg_t fill_sg;\n+\t/**< Add a fence to force ordering between operations. 
*/\n+\tdmadev_fence_t fence;\n+\t/**< Trigger hardware to begin performing enqueued operations. */\n+\tdmadev_perform_t perform;\n+\t/**< Returns the number of operations that successfully completed. */\n+\tdmadev_completed_t completed;\n+\t/**< Returns the number of operations that failed to complete. */\n+\tdmadev_completed_fails_t completed_fails;\n+\n+\tvoid *dev_private; /**< PMD-specific private data */\n+\tconst struct rte_dmadev_ops *dev_ops; /**< Functions exported by PMD */\n+\n+\tuint16_t dev_id; /**< Device ID for this instance */\n+\tint socket_id; /**< Socket ID where memory is allocated */\n+\tstruct rte_device *device;\n+\t/**< Device info. supplied during device initialization */\n+\tconst char *driver_name; /**< Driver info. supplied by probing */\n+\tchar name[RTE_DMADEV_NAME_MAX_LEN]; /**< Device name */\n+\n+\tRTE_STD_C11\n+\tuint8_t attached : 1; /**< Flag indicating the device is attached */\n+\tuint8_t started : 1; /**< Device state: STARTED(1)/STOPPED(0) */\n+\n+} __rte_cache_aligned;\n+\n+extern struct rte_dmadev rte_dmadevices[];\n+\n+#endif /* _RTE_DMADEV_CORE_H_ */\ndiff --git a/lib/dmadev/rte_dmadev_pmd.h b/lib/dmadev/rte_dmadev_pmd.h\nnew file mode 100644\nindex 0000000..ef03cf7\n--- /dev/null\n+++ b/lib/dmadev/rte_dmadev_pmd.h\n@@ -0,0 +1,210 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright 2021 HiSilicon Limited.\n+ */\n+\n+#ifndef _RTE_DMADEV_PMD_H_\n+#define _RTE_DMADEV_PMD_H_\n+\n+/** @file\n+ * RTE DMA PMD APIs\n+ *\n+ * @note\n+ * Driver facing APIs for a DMA device. These are not to be called directly by\n+ * any application.\n+ */\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#include <string.h>\n+\n+#include <rte_dev.h>\n+#include <rte_log.h>\n+#include <rte_common.h>\n+\n+#include \"rte_dmadev.h\"\n+\n+extern int libdmadev_logtype;\n+\n+#define RTE_DMADEV_LOG(level, fmt, args...) 
\\\n+\trte_log(RTE_LOG_ ## level, libdmadev_logtype, \"%s(): \" fmt \"\\n\", \\\n+\t\t__func__, ##args)\n+\n+/* Macros to check for valid device */\n+#define RTE_DMADEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \\\n+\tif (!rte_dmadev_pmd_is_valid_dev((dev_id))) { \\\n+\t\tRTE_DMADEV_LOG(ERR, \"Invalid dev_id=%d\", dev_id); \\\n+\t\treturn retval; \\\n+\t} \\\n+} while (0)\n+\n+#define RTE_DMADEV_VALID_DEVID_OR_RET(dev_id) do { \\\n+\tif (!rte_dmadev_pmd_is_valid_dev((dev_id))) { \\\n+\t\tRTE_DMADEV_LOG(ERR, \"Invalid dev_id=%d\", dev_id); \\\n+\t\treturn; \\\n+\t} \\\n+} while (0)\n+\n+#define RTE_DMADEV_DETACHED 0\n+#define RTE_DMADEV_ATTACHED 1\n+\n+/**\n+ * Validate if the DMA device index is a valid attached DMA device.\n+ *\n+ * @param dev_id\n+ * DMA device index.\n+ *\n+ * @return\n+ * - If the device index is valid (1) or not (0).\n+ */\n+static inline unsigned\n+rte_dmadev_pmd_is_valid_dev(uint16_t dev_id)\n+{\n+\tstruct rte_dmadev *dev;\n+\n+\tif (dev_id >= RTE_DMADEV_MAX_DEVS)\n+\t\treturn 0;\n+\n+\tdev = &rte_dmadevices[dev_id];\n+\tif (dev->attached != RTE_DMADEV_ATTACHED)\n+\t\treturn 0;\n+\telse\n+\t\treturn 1;\n+}\n+\n+/**\n+ * Definitions of control-plane functions exported by a driver through the\n+ * generic structure of type *rte_dmadev_ops* supplied in the *rte_dmadev*\n+ * structure associated with a device.\n+ */\n+\n+typedef int (*dmadev_info_get_t)(struct rte_dmadev *dev,\n+\t\t\t\t struct rte_dmadev_info *dev_info);\n+/**< @internal Function used to get device information of a device. */\n+\n+typedef int (*dmadev_configure_t)(struct rte_dmadev *dev,\n+\t\t\t\t const struct rte_dmadev_conf *dev_conf);\n+/**< @internal Function used to configure a device. */\n+\n+typedef int (*dmadev_start_t)(struct rte_dmadev *dev);\n+/**< @internal Function used to start a configured device. */\n+\n+typedef int (*dmadev_stop_t)(struct rte_dmadev *dev);\n+/**< @internal Function used to stop a configured device. 
*/\n+\n+typedef int (*dmadev_close_t)(struct rte_dmadev *dev);\n+/**< @internal Function used to close a configured device. */\n+\n+typedef int (*dmadev_reset_t)(struct rte_dmadev *dev);\n+/**< @internal Function used to reset a configured device. */\n+\n+typedef int (*dmadev_queue_setup_t)(struct rte_dmadev *dev,\n+\t\t\t\t const struct rte_dmadev_queue_conf *conf);\n+/**< @internal Function used to allocate and set up a virt queue. */\n+\n+typedef int (*dmadev_queue_release_t)(struct rte_dmadev *dev, uint16_t vq_id);\n+/**< @internal Function used to release a virt queue. */\n+\n+typedef int (*dmadev_queue_info_t)(struct rte_dmadev *dev, uint16_t vq_id,\n+\t\t\t\t struct rte_dmadev_queue_info *info);\n+/**< @internal Function used to retrieve information of a virt queue. */\n+\n+typedef int (*dmadev_stats_get_t)(struct rte_dmadev *dev, int vq_id,\n+\t\t\t\t struct rte_dmadev_stats *stats);\n+/**< @internal Function used to retrieve basic statistics. */\n+\n+typedef int (*dmadev_stats_reset_t)(struct rte_dmadev *dev, int vq_id);\n+/**< @internal Function used to reset basic statistics. */\n+\n+typedef int (*dmadev_xstats_get_names_t)(const struct rte_dmadev *dev,\n+\t\tstruct rte_dmadev_xstats_name *xstats_names,\n+\t\tuint32_t size);\n+/**< @internal Function used to get names of extended stats. */\n+\n+typedef int (*dmadev_xstats_get_t)(const struct rte_dmadev *dev,\n+\t\tconst uint32_t ids[], uint64_t values[], uint32_t n);\n+/**< @internal Function used to retrieve extended stats. */\n+\n+typedef int (*dmadev_xstats_reset_t)(struct rte_dmadev *dev,\n+\t\t\t\t const uint32_t ids[], uint32_t nb_ids);\n+/**< @internal Function used to reset extended stats. */\n+\n+typedef int (*dmadev_selftest_t)(uint16_t dev_id);\n+/**< @internal Function used to start dmadev selftest. */\n+\n+/** DMA device operations function pointer table */\n+struct rte_dmadev_ops {\n+\t/**< Get device info. */\n+\tdmadev_info_get_t dev_info_get;\n+\t/**< Configure device. 
*/\n+\tdmadev_configure_t dev_configure;\n+\t/**< Start device. */\n+\tdmadev_start_t dev_start;\n+\t/**< Stop device. */\n+\tdmadev_stop_t dev_stop;\n+\t/**< Close device. */\n+\tdmadev_close_t dev_close;\n+\t/**< Reset device. */\n+\tdmadev_reset_t dev_reset;\n+\n+\t/**< Allocate and set up a virt queue. */\n+\tdmadev_queue_setup_t queue_setup;\n+\t/**< Release a virt queue. */\n+\tdmadev_queue_release_t queue_release;\n+\t/**< Retrieve information of a virt queue */\n+\tdmadev_queue_info_t queue_info_get;\n+\n+\t/**< Get basic statistics. */\n+\tdmadev_stats_get_t stats_get;\n+\t/**< Reset basic statistics. */\n+\tdmadev_stats_reset_t stats_reset;\n+\t/**< Get names of extended stats. */\n+\tdmadev_xstats_get_names_t xstats_get_names;\n+\t/**< Get extended statistics. */\n+\tdmadev_xstats_get_t xstats_get;\n+\t/**< Reset extended statistics values. */\n+\tdmadev_xstats_reset_t xstats_reset;\n+\n+\t/**< Device selftest function */\n+\tdmadev_selftest_t dev_selftest;\n+};\n+\n+/**\n+ * Allocates a new dmadev slot for a DMA device and returns the pointer\n+ * to that slot for the driver to use.\n+ *\n+ * @param name\n+ * Unique identifier name for each device.\n+ * @param dev_private_size\n+ * Size of private data memory allocated within rte_dmadev object.\n+ * Set to 0 to disable internal memory allocation and allow for\n+ * self-allocation.\n+ * @param socket_id\n+ * Socket to allocate resources on.\n+ *\n+ * @return\n+ * - NULL: Failure to allocate\n+ * - Other: The rte_dmadev structure pointer for the new device\n+ */\n+struct rte_dmadev *\n+rte_dmadev_pmd_allocate(const char *name, size_t dev_private_size,\n+\t\t\tint socket_id);\n+\n+/**\n+ * Release the specified dmadev device.\n+ *\n+ * @param dev\n+ * The *dmadev* pointer is the address of the *rte_dmadev* structure.\n+ *\n+ * @return\n+ * - 0 on success, negative on error\n+ */\n+int\n+rte_dmadev_pmd_release(struct rte_dmadev *dev);\n+\n+#ifdef __cplusplus\n+}\n+#endif\n+\n+#endif /* _RTE_DMADEV_PMD_H_ 
*/\ndiff --git a/lib/dmadev/version.map b/lib/dmadev/version.map\nnew file mode 100644\nindex 0000000..383b3ca\n--- /dev/null\n+++ b/lib/dmadev/version.map\n@@ -0,0 +1,32 @@\n+EXPERIMENTAL {\n+\tglobal:\n+\n+\trte_dmadev_count;\n+\trte_dmadev_get_dev_id;\n+\trte_dmadev_socket_id;\n+\trte_dmadev_info_get;\n+\trte_dmadev_configure;\n+\trte_dmadev_start;\n+\trte_dmadev_stop;\n+\trte_dmadev_close;\n+\trte_dmadev_reset;\n+\trte_dmadev_queue_setup;\n+\trte_dmadev_queue_release;\n+\trte_dmadev_queue_info_get;\n+\trte_dmadev_copy;\n+\trte_dmadev_copy_sg;\n+\trte_dmadev_fill;\n+\trte_dmadev_fill_sg;\n+\trte_dmadev_fence;\n+\trte_dmadev_perform;\n+\trte_dmadev_completed;\n+\trte_dmadev_completed_fails;\n+\trte_dmadev_stats_get;\n+\trte_dmadev_stats_reset;\n+\trte_dmadev_xstats_names_get;\n+\trte_dmadev_xstats_get;\n+\trte_dmadev_xstats_reset;\n+\trte_dmadev_selftest;\n+\n+\tlocal: *;\n+};\ndiff --git a/lib/meson.build b/lib/meson.build\nindex 1673ca4..68d239f 100644\n--- a/lib/meson.build\n+++ b/lib/meson.build\n@@ -60,6 +60,7 @@ libraries = [\n 'bpf',\n 'graph',\n 'node',\n+ 'dmadev',\n ]\n \n if is_windows\n