From patchwork Tue Aug 27 11:57:08 2019
X-Patchwork-Submitter: "Chaitanya Babu, TalluriX"
X-Patchwork-Id: 58038
From: Chaitanya Babu Talluri
To: dev@dpdk.org
Cc: reshma.pattan@intel.com, jananeex.m.parthasarathy@intel.com, anatoly.burakov@intel.com, Chaitanya Babu Talluri, stable@dpdk.org
Date: Tue, 27 Aug 2019 12:57:08 +0100
Message-Id: <1566907031-2105-2-git-send-email-tallurix.chaitanya.babu@intel.com>
In-Reply-To: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
References: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/4] lib/eal: fix vfio unmap that fails unexpectedly

Unmap fails when there are duplicate entries in user_mem_maps. The fix is
to validate whether the input VA and IOVA already exist in, or overlap
with, user_mem_maps before creating a map.
Fixes: 73a63908 ("vfio: allow to map other memory regions")
Cc: stable@dpdk.org

Signed-off-by: Chaitanya Babu Talluri
---
 lib/librte_eal/linux/eal/eal_vfio.c | 46 +++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index 501c74f23..104912077 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -212,6 +212,41 @@ find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr,
 	return NULL;
 }
 
+static int
+find_user_mem_map_overlap(struct user_mem_maps *user_mem_maps, uint64_t addr,
+		uint64_t iova, uint64_t len)
+{
+	uint64_t va_end = addr + len;
+	uint64_t iova_end = iova + len;
+	int i;
+
+	for (i = 0; i < user_mem_maps->n_maps; i++) {
+		struct user_mem_map *map = &user_mem_maps->maps[i];
+		uint64_t map_va_end = map->addr + map->len;
+		uint64_t map_iova_end = map->iova + map->len;
+
+		bool no_lo_va_overlap = addr < map->addr && va_end <= map->addr;
+		bool no_hi_va_overlap = addr >= map_va_end &&
+				va_end > map_va_end;
+		bool no_lo_iova_overlap = iova < map->iova &&
+				iova_end <= map->iova;
+		bool no_hi_iova_overlap = iova >= map_iova_end &&
+				iova_end > map_iova_end;
+
+		/* check input VA and iova is not within the
+		 * existing map's range
+		 */
+		if ((no_lo_va_overlap || no_hi_va_overlap) &&
+			(no_lo_iova_overlap || no_hi_iova_overlap))
+			continue;
+		else
+			/* map overlaps */
+			return 1;
+	}
+	/* map doesn't overlap */
+	return 0;
+}
+
 /* this will sort all user maps, and merge/compact any adjacent maps */
 static void
 compact_user_maps(struct user_mem_maps *user_mem_maps)
@@ -1732,6 +1767,17 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 		ret = -1;
 		goto out;
 	}
+
+	/* check whether vaddr and iova exists in user_mem_maps */
+	ret = find_user_mem_map_overlap(user_mem_maps, vaddr, iova, len);
+	if (ret) {
+		RTE_LOG(ERR, EAL, "Mapping overlaps with a previously "
+			"existing mapping\n");
+		rte_errno = EEXIST;
+		ret = -1;
+		goto out;
+	}
+
 	/* map the entry */
 	if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
 		/* technically, this will fail if there are currently no devices

From patchwork Tue Aug 27 11:57:09 2019
X-Patchwork-Submitter: "Chaitanya Babu, TalluriX"
X-Patchwork-Id: 58039
From: Chaitanya Babu Talluri
To: dev@dpdk.org
Cc: reshma.pattan@intel.com, jananeex.m.parthasarathy@intel.com, anatoly.burakov@intel.com, Chaitanya Babu Talluri, stable@dpdk.org
Date: Tue, 27 Aug 2019 12:57:09 +0100
Message-Id: <1566907031-2105-3-git-send-email-tallurix.chaitanya.babu@intel.com>
In-Reply-To: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
References: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/4] lib/eal: fix vfio unmap that succeeds unexpectedly

Unmapping a page whose VA is found in the list of current mappings will
succeed even if the IOVA of the chunk being unmapped is mismatched. Fix
this by checking that the IOVA matches the expected IOVA for that VA
exactly.
Fixes: 73a6390859 ("vfio: allow to map other memory regions")
Cc: stable@dpdk.org

Signed-off-by: Chaitanya Babu Talluri
---
 lib/librte_eal/linux/eal/eal_vfio.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index 104912077..04c284cb2 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -184,13 +184,13 @@ find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr,
 		uint64_t iova, uint64_t len)
 {
 	uint64_t va_end = addr + len;
-	uint64_t iova_end = iova + len;
 	int i;
 
 	for (i = 0; i < user_mem_maps->n_maps; i++) {
 		struct user_mem_map *map = &user_mem_maps->maps[i];
 		uint64_t map_va_end = map->addr + map->len;
-		uint64_t map_iova_end = map->iova + map->len;
+		uint64_t diff_addr_len = addr - map->addr;
+		uint64_t expected_iova = map->iova + diff_addr_len;
 
 		/* check start VA */
 		if (addr < map->addr || addr >= map_va_end)
@@ -199,11 +199,10 @@ find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr,
 		if (va_end <= map->addr || va_end > map_va_end)
 			continue;
 
-		/* check start IOVA */
-		if (iova < map->iova || iova >= map_iova_end)
-			continue;
-		/* check if IOVA end is within boundaries */
-		if (iova_end <= map->iova || iova_end > map_iova_end)
+		/* check whether user input iova is in sync with
+		 * user_mem_map entry's iova
+		 */
+		if (expected_iova != iova)
 			continue;
 
 		/* we've found our map */

From patchwork Tue Aug 27 11:57:10 2019
X-Patchwork-Submitter: "Chaitanya Babu, TalluriX"
X-Patchwork-Id: 58040
From: Chaitanya Babu Talluri
To: dev@dpdk.org
Cc: reshma.pattan@intel.com, jananeex.m.parthasarathy@intel.com, anatoly.burakov@intel.com, Chaitanya Babu Talluri
Date: Tue, 27 Aug 2019 12:57:10 +0100
Message-Id: <1566907031-2105-4-git-send-email-tallurix.chaitanya.babu@intel.com>
In-Reply-To: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
References: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
Subject: [dpdk-dev] [PATCH v3 3/4] lib/eal: add API to check iommu type is set

Add rte_vfio_iommu_type_is_set() to check whether an IOMMU type is set
for the default container.
Signed-off-by: Chaitanya Babu Talluri
---
 lib/librte_eal/common/include/rte_vfio.h | 10 ++++++++++
 lib/librte_eal/linux/eal/eal_vfio.c      | 16 ++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/lib/librte_eal/common/include/rte_vfio.h b/lib/librte_eal/common/include/rte_vfio.h
index b360485fa..a62006e5a 100644
--- a/lib/librte_eal/common/include/rte_vfio.h
+++ b/lib/librte_eal/common/include/rte_vfio.h
@@ -397,6 +397,16 @@ int
 rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova,
 		uint64_t len);
 
+/**
+ * Check VFIO IOMMU Type is set for default container.
+ *
+ * @return
+ *   0 if successful
+ *   <0 if failed
+ */
+int
+rte_vfio_iommu_type_is_set(void);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index 04c284cb2..a5bb1cff4 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -2071,6 +2071,17 @@ rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova,
 	return container_dma_unmap(vfio_cfg, vaddr, iova, len);
 }
 
+int
+rte_vfio_iommu_type_is_set(void)
+{
+	if (vfio_get_iommu_type() < 0) {
+		RTE_LOG(ERR, EAL, "VFIO IOMMU Type is not set\n");
+		return -1;
+	}
+
+	return 0;
+}
+
 #else
 
 int
@@ -2191,4 +2202,9 @@ rte_vfio_container_dma_unmap(__rte_unused int container_fd,
 	return -1;
 }
 
+int
+rte_vfio_iommu_type_is_set(void)
+{
+	return -1;
+}
 #endif /* VFIO_PRESENT */

From patchwork Tue Aug 27 11:57:11 2019
X-Patchwork-Submitter: "Chaitanya Babu, TalluriX"
X-Patchwork-Id: 58041
From: Chaitanya Babu Talluri
To: dev@dpdk.org
Cc: reshma.pattan@intel.com, jananeex.m.parthasarathy@intel.com, anatoly.burakov@intel.com, Chaitanya Babu Talluri
Date: Tue, 27 Aug 2019 12:57:11 +0100
Message-Id: <1566907031-2105-5-git-send-email-tallurix.chaitanya.babu@intel.com>
In-Reply-To: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
References: <1566474836-30480-1-git-send-email-tallurix.chaitanya.babu@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/4] app/test: add unit tests for eal vfio

Unit test cases are added for the eal vfio library, and
eal_vfio_autotest is added to the meson build file.
Signed-off-by: Chaitanya Babu Talluri
---
 app/test/Makefile        |   1 +
 app/test/meson.build     |   2 +
 app/test/test_eal_vfio.c | 736 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 739 insertions(+)
 create mode 100644 app/test/test_eal_vfio.c

diff --git a/app/test/Makefile b/app/test/Makefile
index 26ba6fe2b..9b9c78b4e 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -137,6 +137,7 @@ SRCS-y += test_cpuflags.c
 SRCS-y += test_mp_secondary.c
 SRCS-y += test_eal_flags.c
 SRCS-y += test_eal_fs.c
+SRCS-y += test_eal_vfio.c
 SRCS-y += test_alarm.c
 SRCS-y += test_interrupts.c
 SRCS-y += test_version.c
diff --git a/app/test/meson.build b/app/test/meson.build
index ec40943bd..bd96ebb2b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -36,6 +36,7 @@ test_sources = files('commands.c',
 	'test_distributor_perf.c',
 	'test_eal_flags.c',
 	'test_eal_fs.c',
+	'test_eal_vfio.c',
 	'test_efd.c',
 	'test_efd_perf.c',
 	'test_errno.c',
@@ -175,6 +176,7 @@ fast_test_names = [
 	'eal_flags_file_prefix_autotest',
 	'eal_flags_misc_autotest',
 	'eal_fs_autotest',
+	'eal_vfio_autotest',
 	'errno_autotest',
 	'event_ring_autotest',
 	'func_reentrancy_autotest',
diff --git a/app/test/test_eal_vfio.c b/app/test/test_eal_vfio.c
new file mode 100644
index 000000000..ca3efb034
--- /dev/null
+++ b/app/test/test_eal_vfio.c
@@ -0,0 +1,736 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "test.h"
+
+#if !defined(RTE_EXEC_ENV_LINUX) || !defined(RTE_EAL_VFIO)
+static int
+test_eal_vfio(void)
+{
+	printf("VFIO not supported, skipping test\n");
+	return TEST_SKIPPED;
+}
+
+#else
+
+#define PAGESIZE sysconf(_SC_PAGESIZE)
+#define INVALID_CONTAINER_FD -5
+#define THREE_PAGES 3
+#define UNALIGNED_ADDR 0x1500
+
+uint64_t virtaddr_64;
+const char *name = "heap";
+size_t map_length;
+int container_fds[RTE_MAX_VFIO_CONTAINERS];
+
+static int
+check_get_mem(void *addr, rte_iova_t *iova)
+{
+	const struct rte_memseg_list *msl;
+	const struct rte_memseg *ms;
+	rte_iova_t expected_iova;
+
+	msl = rte_mem_virt2memseg_list(addr);
+	if (!msl->external) {
+		printf("%s():%i: Memseg list is not marked as external\n",
+				__func__, __LINE__);
+		return -1;
+	}
+	ms = rte_mem_virt2memseg(addr, msl);
+	if (ms == NULL) {
+		printf("%s():%i: Failed to retrieve memseg for external mem\n",
+				__func__, __LINE__);
+		return -1;
+	}
+	if (ms->addr != addr) {
+		printf("%s():%i: VA mismatch\n", __func__, __LINE__);
+		return -1;
+	}
+	expected_iova = (iova == NULL) ? RTE_BAD_IOVA : iova[0];
+	if (ms->iova != expected_iova) {
+		printf("%s():%i: IOVA mismatch\n", __func__, __LINE__);
+		return -1;
+	}
+	return 0;
+}
+
+static int
+check_vfio_exist_and_initialize(void)
+{
+	int i = 0;
+
+	if (rte_vfio_is_enabled("vfio_pci") == 0) {
+		printf("VFIO is not enabled\n");
+		return TEST_SKIPPED;
+	}
+	if (rte_vfio_iommu_type_is_set() < 0) {
+		printf("VFIO IOMMU Type is not set\n");
+		return TEST_SKIPPED;
+	}
+
+	/* initialize container_fds */
+	for (i = 0; i < RTE_MAX_VFIO_CONTAINERS; i++)
+		container_fds[i] = -1;
+
+	return TEST_SUCCESS;
+}
+
+/* To test vfio container create */
+static int
+test_vfio_container_create(void)
+{
+	int ret = 0, i = 0;
+
+	/* check max containers limit */
+	for (i = 1; i < RTE_MAX_VFIO_CONTAINERS; i++) {
+		container_fds[i] = rte_vfio_container_create();
+		TEST_ASSERT(container_fds[i] > 0, "Test to check rte_vfio_container_create with max containers limit: Failed\n");
+	}
+
+	/* check rte_vfio_container_create when exceeds max containers limit */
+	ret = rte_vfio_container_create();
+	TEST_ASSERT(ret == -1, "Test to check rte_vfio_container_create when exceeds limit: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* To test vfio container destroy */
+static int
+test_vfio_container_destroy(void)
+{
+	int i = 0, ret = 0;
+
+	/* check to destroy max container limit */
+	for (i = 1; i < RTE_MAX_VFIO_CONTAINERS; i++) {
+		ret = rte_vfio_container_destroy(container_fds[i]);
+		TEST_ASSERT(ret == 0, "Test to check rte_vfio_container_destroy: Failed\n");
+		container_fds[i] = -1;
+	}
+
+	/* check rte_vfio_container_destroy with valid but non existing value */
+	ret = rte_vfio_container_destroy(0);
+	TEST_ASSERT(ret == -1, "Test to check rte_vfio_container_destroy with valid but non existing value: Failed\n");
+
+	/* check rte_vfio_container_destroy with invalid value */
+	ret = rte_vfio_container_destroy(-5);
+	TEST_ASSERT(ret == -1, "Test to check rte_vfio_container_destroy with invalid value: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* Test to bind a IOMMU group to a container */
+static int
+test_rte_vfio_container_group_bind(void)
+{
+	int ret = 0;
+
+	/* Test case to bind with invalid container fd */
+	ret = rte_vfio_container_group_bind(INVALID_CONTAINER_FD, 0);
+	TEST_ASSERT(ret == -1, "Test to bind a IOMMU group to a container with invalid fd: Failed\n");
+
+	/* Test case to bind with non-existing container fd */
+	ret = rte_vfio_container_group_bind(0, 0);
+	TEST_ASSERT(ret == -1, "Test to bind a IOMMU group to a container with non existing fd: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* Test to unbind a IOMMU group from a container */
+static int
+test_rte_vfio_container_group_unbind(void)
+{
+	int ret = 0;
+
+	/* Test case to unbind container from invalid group */
+	ret = rte_vfio_container_group_unbind(INVALID_CONTAINER_FD, 0);
+	TEST_ASSERT(ret == -1, "Test to unbind a IOMMU group from a container with invalid fd: Failed\n");
+
+	/* Test case to unbind container from group */
+	ret = rte_vfio_container_group_unbind(0, 0);
+	TEST_ASSERT(ret == -1, "Test to unbind a IOMMU group from a container with non existing fd: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* Test to get IOMMU group number for a device */
+static int
+test_rte_vfio_get_group_num(void)
+{
+	int ret = 0, invalid_group_num = 0;
+
+	/* Test case to get IOMMU group num from invalid group */
+	ret = rte_vfio_get_group_num(NULL, NULL, &invalid_group_num);
+	TEST_ASSERT(ret == 0, "Test to get IOMMU group num: Failed\n");
+
+	/* Test case to get IOMMU group num from invalid device address and
+	 * valid sysfs_base
+	 */
+	ret = rte_vfio_get_group_num("/sys/bus/pci/devices/", NULL, &invalid_group_num);
+	TEST_ASSERT(ret == 0, "Test to get IOMMU group num: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* Test to perform DMA mapping for devices in a container */
+static int
+test_rte_vfio_container_dma_map(void)
+{
+	int ret = 0, container_fd;
+
+	/* Test case to map device for non-existing container_fd, with
+	 * non-zero map_length
+	 */
+	ret = rte_vfio_container_dma_map(0, 0, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to check map device with invalid container: Failed\n");
+
+	container_fd = rte_vfio_container_create();
+	/* Test case to map device for existing fd with no device attached and
+	 * non-zero map_length
+	 */
+	ret = rte_vfio_container_dma_map(container_fd, 0, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to check map device for existing fd with no device attached and non-zero map_length: Failed\n");
+
+	/* Test to destroy container fd */
+	ret = rte_vfio_container_destroy(container_fd);
+	TEST_ASSERT(ret == 0, "Container fd destroy failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* Test to perform DMA unmapping for devices in a container */
+static int
+test_rte_vfio_container_dma_unmap(void)
+{
+	int ret = 0, container_fd;
+
+	/* Test case to unmap device for non-existing container_fd, with
+	 * zero map_length
+	 */
+	ret = rte_vfio_container_dma_unmap(0, 0, 0, 0);
+	TEST_ASSERT(ret == -1, "Test to check map device with non-existing container fd: Failed\n");
+
+	/* Test case to unmap device for non-existing container_fd, with
+	 * non-zero map_length
+	 */
+	ret = rte_vfio_container_dma_unmap(0, 0, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to check map device with non-existing container fd: Failed\n");
+
+	container_fd = rte_vfio_container_create();
+	/* Test case to unmap device for existing fd with no device attached
+	 * and with non-zero map_length
+	 */
+	ret = rte_vfio_container_dma_unmap(container_fd, 0, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to check map device with unmapped container fd: Failed\n");
+
+	/* Test case to unmap device for existing fd with no device attached
+	 * and with zero map_length
+	 */
+	ret = rte_vfio_container_dma_unmap(container_fd, 0, 0, 0);
+	TEST_ASSERT(ret == -1, "Test to check map device with unmapped container fd: Failed\n");
+
+	/* Test to destroy container fd */
+	ret = rte_vfio_container_destroy(container_fd);
+	TEST_ASSERT(ret == 0, "Container fd destroy failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* Function to set up external memory */
+static int
+test_heap_mem_setup(size_t map_length, int n_pages)
+{
+	rte_iova_t iova[map_length / PAGESIZE];
+	void *addr;
+
+	addr = mmap(NULL, map_length, PROT_WRITE | PROT_READ,
+			MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	if (addr == MAP_FAILED) {
+		printf("%s():%i: Failed to create dummy memory area\n",
+				__func__, __LINE__);
+		return -1;
+	}
+	rte_iova_t tmp = 0x100000000 + PAGESIZE;
+	iova[0] = tmp;
+
+	if (rte_malloc_heap_create(name) != 0) {
+		printf("%s():%i: Failed to create heap with valid name\n",
+				__func__, __LINE__);
+		return -1;
+	}
+	if (rte_malloc_heap_memory_add(name, addr, map_length, iova, n_pages,
+			PAGESIZE) != 0) {
+		printf("%s():%i: Failed to add memory to heap\n",
+				__func__, __LINE__);
+		return -1;
+	}
+	if (check_get_mem(addr, iova) != 0) {
+		printf("%s():%i: Failed to verify memory\n",
+				__func__, __LINE__);
+		return -1;
+	}
+	virtaddr_64 = (uint64_t)(uintptr_t)addr;
+
+	return 0;
+}
+
+/* Function to free the external memory */
+static void
+test_heap_mem_free(void)
+{
+	if (rte_malloc_heap_memory_remove(name, (void *)virtaddr_64,
+			map_length) != 0) {
+		printf("%s():%i: Failed to remove memory\n",
+				__func__, __LINE__);
+		return;
+	}
+	rte_malloc_heap_destroy(name);
+
+	munmap((void *)virtaddr_64, map_length);
+}
+
+/* Test to map memory region for use with VFIO */
+static int
+test_rte_vfio_dma_map(void)
+{
+	int ret = 0;
+	const int n_pages = 1;
+
+	map_length = PAGESIZE;
+
+	test_heap_mem_setup(map_length, n_pages);
+
+	/* Test case to map memory for VFIO with zero vaddr, iova addr
+	 * and map_length
+	 */
+	ret = rte_vfio_dma_map(0, 0, 0);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with incorrect inputs: Failed\n");
+
+	/* Test case to map memory for VFIO with zero vaddr, iova addr
+	 * and valid map_length
+	 */
+	ret = rte_vfio_dma_map(0, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with valid map_length: Failed\n");
+
+	/* Test case to map memory for VFIO with valid iova addr, unmapped
+	 * vaddr and valid map_length
+	 */
+	ret = rte_vfio_dma_map(1000000, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with valid map_length and unmapped virtual address: Failed\n");
+
+	/* Test case to map memory for VFIO with valid iova addr, mapped
+	 * vaddr and valid map_length
+	 */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == 0, "Test to map devices within default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	/* Test case to check already mapped virtual address */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	/* Test case to check start virtual address + length range overlaps */
+	ret = rte_vfio_dma_map((virtaddr_64 + UNALIGNED_ADDR), 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with overlapping virtual address: Failed\n");
+
+	/* Test case to check start virtual address before existing map,
+	 * overlaps
+	 */
+	ret = rte_vfio_dma_map((virtaddr_64 - UNALIGNED_ADDR), 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with start virtual address before existing map, overlaps: Failed\n");
+
+	/* Test case to check invalid map length */
+	ret = rte_vfio_dma_map((virtaddr_64 - UNALIGNED_ADDR), 0, 500);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with invalid map length: Failed\n");
+
+	/* Test case to check already mapped iova overlaps */
+	ret = rte_vfio_dma_map((virtaddr_64 + 8192), 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with already mapped iova overlaps: Failed\n");
+
+	/* Test case to check start iova + length range overlaps */
+	ret = rte_vfio_dma_map((virtaddr_64 + 8192), (0 + UNALIGNED_ADDR),
+			map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with start iova + length range overlaps: Failed\n");
+
+	/* Test case to check invalid iova */
+	ret = rte_vfio_dma_map((virtaddr_64 + 8192), (0 + 5000), map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with invalid iova: Failed\n");
+
+	/* Test case to check invalid map length */
+	ret = rte_vfio_dma_map((virtaddr_64 + 8192), (0 + UNALIGNED_ADDR), 100);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with invalid map length: Failed\n");
+
+	/* Test case to map memory for VFIO with invalid vaddr, valid iova addr
+	 * and valid map_length
+	 */
+	uint64_t invalid_addr = virtaddr_64 + 1;
+	ret = rte_vfio_dma_map(invalid_addr, virtaddr_64, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with mapped invalid virtual address: Failed\n");
+
+	/* Test case to map memory for VFIO with invalid iova addr, valid vaddr
+	 * and valid map_length
+	 */
+	ret = rte_vfio_dma_map(virtaddr_64, UNALIGNED_ADDR, map_length);
+	TEST_ASSERT(ret == -1, "Test to map devices within default container with valid map_length and invalid iova address: Failed\n");
+
+	/* Test case to unmap memory region from VFIO with valid iova,
+	 * mapped vaddr and valid map_length
+	 */
+	ret = rte_vfio_dma_unmap(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == 0, "Test to unmap devices in default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* Test to unmap memory region for use with VFIO */
+static int
+test_rte_vfio_dma_unmap(void)
+{
+	int ret = 0;
+	const int n_pages = 1;
+
+	map_length = PAGESIZE;
+
+	test_heap_mem_setup(map_length, n_pages);
+
+	/* Test case to unmap memory region from VFIO with zero vaddr,
+	 * iova addr and map_length
+	 */
+	ret = rte_vfio_dma_unmap(0, 0, 0);
+	TEST_ASSERT(ret == -1, "Test to unmap devices in default container with incorrect input: Failed\n");
+
+	/* Test case to unmap memory region from VFIO with zero vaddr,
+	 * iova addr and valid map_length
+	 */
+	ret = rte_vfio_dma_unmap(0, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to unmap devices in default container with valid map_length: Failed\n");
+
+	/* Test case to unmap memory region from VFIO with zero iova addr,
+	 * unmapped vaddr and valid map_length
+	 */
+	ret = rte_vfio_dma_unmap(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to unmap devices in default container with valid map_length and unmapped addr: Failed\n");
+
+	/* Test case to unmap memory region from VFIO with unmapped vaddr,
+	 * iova and valid map_length
+	 */
+	ret = rte_vfio_dma_unmap(virtaddr_64, virtaddr_64, map_length);
+	TEST_ASSERT(ret == -1, "Test to unmap devices in default container with valid map_length and unmapped addr, iova: Failed\n");
+
+	/* Test case to map memory region from VFIO with valid iova,
+	 * mapped vaddr and valid map_length
+	 */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == 0, "Test to unmap devices in default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	/* Test case to unmap memory region from VFIO with mapped invalid
+	 * vaddr, valid IOVA and valid map_length
+	 */
+	ret = rte_vfio_dma_unmap((virtaddr_64 + 1), 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to unmap devices in default container with valid map_length and mapped invalid virtual address: Failed\n");
+
+	/* Test case to unmap memory region from VFIO with mapped
+	 * valid iova addr, vaddr and valid map_length
+	 */
+	ret = rte_vfio_dma_unmap(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == 0, "Test to unmap devices in default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_rte_vfio_dma_map_overlaps(void)
+{
+	int ret = 0;
+	const int n_pages = THREE_PAGES;
+
+	map_length = PAGESIZE * THREE_PAGES;
+
+	test_heap_mem_setup(map_length, n_pages);
+
+	/* Test case to map 1st page */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to map device in default container with valid address: Failed\n");
+
+	/* Test case to map same start virtual address and
+	 * extend beyond end virtual address
+	 */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, (PAGESIZE * 2));
+	TEST_ASSERT(ret == -1, "Test to map device in default container with same start virtual address and extend beyond end virtual address: Failed\n");
+
+	/* Test case to map same start virtual address and same end address */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, PAGESIZE);
+	TEST_ASSERT(ret == -1, "Test to map device in default container with same start virtual address and same end address: Failed\n");
+
+	/* Test case to unmap 1st page */
+	ret = rte_vfio_dma_unmap(virtaddr_64, 0, PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to unmap device in default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	/* Test case to map different virtual address */
+	ret = rte_vfio_dma_map((virtaddr_64 + PAGESIZE), (0 + PAGESIZE),
+			(PAGESIZE * 2));
+	TEST_ASSERT(ret == 0, "Test to map device in default container with different virtual address: Failed\n");
+
+	/* Test case to map different start virtual address and
+	 * ends with same address
+	 */
+	ret = rte_vfio_dma_map((virtaddr_64 + (PAGESIZE * 2)),
+			(0 + (PAGESIZE * 2)), PAGESIZE);
+	TEST_ASSERT(ret == -1, "Test to map device in default container with different start virtual address and ends with same address: Failed\n");
+
+	/* Test case to map three pages */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map device in default container with overlapping virtual address range: Failed\n");
+
+	/* Test case to map middle overlapping virtual address */
+	ret = rte_vfio_dma_map((virtaddr_64 + PAGESIZE), (0 + PAGESIZE),
+			PAGESIZE);
+	TEST_ASSERT(ret == -1, "Test to map device in default container with overlapping virtual address: Failed\n");
+
+	/* Test case to unmap 1st page */
+	ret = rte_vfio_dma_unmap(virtaddr_64, 0, PAGESIZE);
+	TEST_ASSERT(ret == -1, "Test to unmap 1st page: Failed\n");
+
+	/* Test case to map 1st and 2nd page overlaps */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, (PAGESIZE * 2));
+	TEST_ASSERT(ret == -1, "Test to map device in default container with 1st and 2nd page overlaps: Failed\n");
+
+	/* Test case to map 3rd and 4th pages */
+	ret = rte_vfio_dma_map((virtaddr_64 + (PAGESIZE * 2)),
+			(0 + (PAGESIZE * 2)), (PAGESIZE * 2));
+	TEST_ASSERT(ret == -1, "Test to map device in default container with 3rd and 4th pages: Failed\n");
+
+	/* Test case to unmap 3rd page */
+	ret = rte_vfio_dma_unmap((virtaddr_64 + (PAGESIZE * 2)),
+			(0 + (PAGESIZE * 2)), PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to unmap 3rd page: Failed\n");
+
+	/* Test case to map 1st page with total length
+	 * that overlaps middle page
+	 */
+	ret = rte_vfio_dma_map(virtaddr_64, 0, map_length);
+	TEST_ASSERT(ret == -1, "Test to map device in default container with 1st page with total length that overlaps middle page: Failed\n");
+
+	/* Test case to unmap 2nd page */
+	ret = rte_vfio_dma_unmap((virtaddr_64 + PAGESIZE), (0 + PAGESIZE),
+			PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to unmap 2nd page: Failed\n");
+
+	return TEST_SUCCESS;
+}
+
+/* allocate three pages */
+static int
+test_rte_vfio_dma_map_threepages(void)
+{
+	int ret = 0;
+	const int n_pages = THREE_PAGES;
+	uint64_t page1_va, page2_va, page3_va;
+	rte_iova_t page1_iova, page2_iova, page3_iova;
+
+	map_length = PAGESIZE * THREE_PAGES;
+
+	page1_va = virtaddr_64;
+	page2_va = virtaddr_64 + PAGESIZE;
+	page3_va = virtaddr_64 + (PAGESIZE * 2);
+
+	page1_iova = 0;
+	page2_iova = 0 + PAGESIZE;
+	page3_iova = 0 + (PAGESIZE * 2);
+
+	test_heap_mem_setup(map_length, n_pages);
+
+	/* Test case to map three pages */
+	ret = rte_vfio_dma_map(page1_va, page1_iova, map_length);
+	TEST_ASSERT(ret == 0, "Test to map device in default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	/* Test case to unmap 1st page */
+	ret = rte_vfio_dma_unmap(page1_va, page1_iova, PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to unmap device in default container with valid 1st page map_length and mapped valid virtual address: Failed\n");
+
+	/* Test case to map 1st page */
+	ret = rte_vfio_dma_map(page1_va, page1_iova, PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to map device in default container with valid map_length and mapped valid virtual address: Failed\n");
+
+	/* Test case to unmap 2nd page */
+	ret = rte_vfio_dma_unmap(page2_va, page2_iova, PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to unmap device in default container with valid map_length and mapped valid 2nd page virtual address: Failed\n");
+
+	/* Test case to map 2nd page */
+	ret = rte_vfio_dma_map(page2_va, page2_iova, PAGESIZE);
+	TEST_ASSERT(ret == 0, "Test to map device in
default container " + "with valid map_length and mapped " + "valid 2nd page virtual address: Failed\n"); + + /* Test case to unmap 3rd page */ + ret = rte_vfio_dma_unmap(page3_va, page3_iova, PAGESIZE); + TEST_ASSERT(ret == 0, "Test to unmap device in default container " + "with valid map_length and mapped " + "valid 3rd page virtual address: Failed\n"); + + /* Test case to map 3rd page */ + ret = rte_vfio_dma_map(page3_va, page3_iova, PAGESIZE); + TEST_ASSERT(ret == 0, "Test to map device in default container " + "with valid map_length and " + "mapped 3rd page valid virtual address: Failed\n"); + + /* Test case to unmap 1st page, but used IOVA address of 2nd page */ + ret = rte_vfio_dma_unmap(page1_va, page2_iova, PAGESIZE); + TEST_ASSERT(ret == -1, "Test to unmap devices in default container " + "with valid map_length and mapped " + "valid virtual address: Failed\n"); + + /* Test case to unmap memory region from VFIO with mapped + * valid iova addr, vaddr and valid map_length + */ + ret = rte_vfio_dma_unmap(page1_va, page1_iova, map_length); + TEST_ASSERT(ret == 0, "Test to unmap devices in default container " + "with valid map_length and mapped " + "valid virtual address: Failed\n"); + + return TEST_SUCCESS; +} + +static struct +unit_test_suite eal_vfio_testsuite = { + .suite_name = "EAL VFIO Unit Test Suite", + .setup = check_vfio_exist_and_initialize, + .teardown = NULL, + .unit_test_cases = { + /* Test Case 1: To check vfio container create test cases */ + TEST_CASE(test_vfio_container_create), + + /* Test Case 2: To check vfio container destroy */ + TEST_CASE(test_vfio_container_destroy), + + /* Test Case 3: To bind a IOMMU group to a container.*/ + TEST_CASE(test_rte_vfio_container_group_bind), + + /* Test Case 4: To get IOMMU group number for a device*/ + TEST_CASE(test_rte_vfio_get_group_num), + + /* Test Case 5: To unbind a IOMMU group to a container.*/ + TEST_CASE(test_rte_vfio_container_group_unbind), + + /* Test Case 6: To perform DMA mapping for 
devices in default + * container + */ + TEST_CASE_ST(NULL, test_heap_mem_free, test_rte_vfio_dma_map), + + /* Test Case 7: To perform DMA unmapping for devices in default + * container + */ + TEST_CASE_ST(NULL, test_heap_mem_free, + test_rte_vfio_dma_unmap), + + /* Test Case 8: To perform map devices in specific container */ + TEST_CASE(test_rte_vfio_container_dma_map), + + /* Test Case 9: To perform unmap devices in specific container + */ + TEST_CASE(test_rte_vfio_container_dma_unmap), + + /* Test Case 10: To perform three pages */ + TEST_CASE_ST(NULL, test_heap_mem_free, + test_rte_vfio_dma_map_threepages), + + /* Test Case 11: To check DMA overlaps */ + TEST_CASE_ST(NULL, test_heap_mem_free, + test_rte_vfio_dma_map_overlaps), + + TEST_CASES_END() + } +}; + +static int +test_eal_vfio(void) +{ + return unit_test_suite_runner(&eal_vfio_testsuite); +} + +#endif + +REGISTER_TEST_COMMAND(eal_vfio_autotest, test_eal_vfio);
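The commit message says the fix validates whether the input VA/IOVA already exists in or overlaps an entry of user_mem_maps before a new map is created, so duplicate entries can never be inserted and a later unmap cannot fail on an ambiguous match. The following is a minimal sketch of that interval check; the struct layout and helper names here are illustrative, not DPDK's actual internals:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical, simplified view of a user_mem_maps entry. */
struct user_mem_map {
	uint64_t addr;	/* start VA */
	uint64_t iova;	/* start IOVA */
	uint64_t len;	/* length in bytes */
};

/* Return true if half-open ranges [a, a + a_len) and [b, b + b_len) overlap. */
static bool
ranges_overlap(uint64_t a, uint64_t a_len, uint64_t b, uint64_t b_len)
{
	return a < b + b_len && b < a + a_len;
}

/*
 * Reject a new mapping whose VA range or IOVA range overlaps any
 * existing entry; an exact duplicate is just the degenerate case of
 * a full overlap.
 */
static bool
map_conflicts(const struct user_mem_map *maps, int n_maps,
		uint64_t va, uint64_t iova, uint64_t len)
{
	int i;

	for (i = 0; i < n_maps; i++) {
		if (ranges_overlap(maps[i].addr, maps[i].len, va, len) ||
		    ranges_overlap(maps[i].iova, maps[i].len, iova, len))
			return true;
	}
	return false;
}
```

This is the same predicate the overlap test cases above exercise from the API side: same start address, extending beyond the end, middle-page overlap, and adjacent (non-overlapping) pages.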