get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
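
A minimal sketch of a write request against this endpoint, assuming the caller holds a maintainer API token for the project (token authentication and the writable "state" field are assumptions based on typical Patchwork deployments, not confirmed by this page):

PATCH /api/patches/99647/
Authorization: Token <api-token>
Content-Type: application/json

{
    "state": "accepted"
}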

GET /api/patches/99647/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 99647,
    "url": "https://patches.dpdk.org/api/patches/99647/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/20210925100358.61995-2-xuan.ding@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20210925100358.61995-2-xuan.ding@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20210925100358.61995-2-xuan.ding@intel.com",
    "date": "2021-09-25T10:03:57",
    "name": "[v3,1/2] vfio: allow partially unmapping adjacent memory",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "cfc8ff7b5e31b7bafd6a50da2e09df2bd79c1de5",
    "submitter": {
        "id": 1401,
        "url": "https://patches.dpdk.org/api/people/1401/?format=api",
        "name": "Ding, Xuan",
        "email": "xuan.ding@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/20210925100358.61995-2-xuan.ding@intel.com/mbox/",
    "series": [
        {
            "id": 19153,
            "url": "https://patches.dpdk.org/api/series/19153/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=19153",
            "date": "2021-09-25T10:03:56",
            "name": "support IOMMU for DMA device",
            "version": 3,
            "mbox": "https://patches.dpdk.org/series/19153/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/99647/comments/",
    "check": "success",
    "checks": "https://patches.dpdk.org/api/patches/99647/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 7A5ACA0C4C;\n\tSat, 25 Sep 2021 12:11:38 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 5AFD74014D;\n\tSat, 25 Sep 2021 12:11:38 +0200 (CEST)",
            "from mga17.intel.com (mga17.intel.com [192.55.52.151])\n by mails.dpdk.org (Postfix) with ESMTP id 465C740DDE\n for <dev@dpdk.org>; Sat, 25 Sep 2021 12:11:31 +0200 (CEST)",
            "from orsmga007.jf.intel.com ([10.7.209.58])\n by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 25 Sep 2021 03:11:30 -0700",
            "from dpdk-xuanding-dev2.sh.intel.com ([10.67.119.250])\n by orsmga007.jf.intel.com with ESMTP; 25 Sep 2021 03:11:27 -0700"
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6200,9189,10117\"; a=\"204407015\"",
            "E=Sophos;i=\"5.85,321,1624345200\"; d=\"scan'208\";a=\"204407015\"",
            "E=Sophos;i=\"5.85,321,1624345200\"; d=\"scan'208\";a=\"475429169\""
        ],
        "X-ExtLoop1": "1",
        "From": "Xuan Ding <xuan.ding@intel.com>",
        "To": "dev@dpdk.org, anatoly.burakov@intel.com, maxime.coquelin@redhat.com,\n chenbo.xia@intel.com",
        "Cc": "jiayu.hu@intel.com, cheng1.jiang@intel.com, bruce.richardson@intel.com,\n sunil.pai.g@intel.com, yinan.wang@intel.com, yvonnex.yang@intel.com,\n Xuan Ding <xuan.ding@intel.com>",
        "Date": "Sat, 25 Sep 2021 10:03:57 +0000",
        "Message-Id": "<20210925100358.61995-2-xuan.ding@intel.com>",
        "X-Mailer": "git-send-email 2.17.1",
        "In-Reply-To": "<20210925100358.61995-1-xuan.ding@intel.com>",
        "References": "<20210901053044.109901-1-xuan.ding@intel.com>\n <20210925100358.61995-1-xuan.ding@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v3 1/2] vfio: allow partially unmapping adjacent\n memory",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Currently, if we map a memory area A, then map a separate memory area B\nthat by coincidence happens to be adjacent to A, current implementation\nwill merge these two segments into one, and if partial unmapping is not\nsupported, these segments will then be only allowed to be unmapped in\none go. In other words, given segments A and B that are adjacent, it\nis currently not possible to map A, then map B, then unmap A.\n\nFix this by adding a notion of \"chunk size\", which will allow\nsubdividing segments into equally sized segments whenever we are dealing\nwith an IOMMU that does not support partial unmapping. With this change,\nwe will still be able to merge adjacent segments, but only if they are\nof the same size. If we keep with our above example, adjacent segments A\nand B will be stored as separate segments if they are of different\nsizes.\n\nSigned-off-by: Anatoly Burakov <anatoly.burakov@intel.com>\nSigned-off-by: Xuan Ding <xuan.ding@intel.com>\n---\n lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------\n 1 file changed, 228 insertions(+), 110 deletions(-)",
    "diff": "diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c\nindex 25add2fa5d..657c89ca58 100644\n--- a/lib/eal/linux/eal_vfio.c\n+++ b/lib/eal/linux/eal_vfio.c\n@@ -31,9 +31,10 @@\n  */\n #define VFIO_MAX_USER_MEM_MAPS 256\n struct user_mem_map {\n-\tuint64_t addr;\n-\tuint64_t iova;\n-\tuint64_t len;\n+\tuint64_t addr;  /**< start VA */\n+\tuint64_t iova;  /**< start IOVA */\n+\tuint64_t len;   /**< total length of the mapping */\n+\tuint64_t chunk; /**< this mapping can be split in chunks of this size */\n };\n \n struct user_mem_maps {\n@@ -95,7 +96,8 @@ static const struct vfio_iommu_type iommu_types[] = {\n static int\n is_null_map(const struct user_mem_map *map)\n {\n-\treturn map->addr == 0 && map->iova == 0 && map->len == 0;\n+\treturn map->addr == 0 && map->iova == 0 &&\n+\t\t\tmap->len == 0 && map->chunk == 0;\n }\n \n /* we may need to merge user mem maps together in case of user mapping/unmapping\n@@ -129,41 +131,90 @@ user_mem_map_cmp(const void *a, const void *b)\n \tif (umm_a->len > umm_b->len)\n \t\treturn 1;\n \n+\tif (umm_a->chunk < umm_b->chunk)\n+\t\treturn -1;\n+\tif (umm_a->chunk > umm_b->chunk)\n+\t\treturn 1;\n+\n \treturn 0;\n }\n \n-/* adjust user map entry. this may result in shortening of existing map, or in\n- * splitting existing map in two pieces.\n+/*\n+ * Take in an address range and list of current mappings, and produce a list of\n+ * mappings that will be kept.\n  */\n+static int\n+process_maps(struct user_mem_map *src, size_t src_len,\n+\t\tstruct user_mem_map newmap[2], uint64_t vaddr, uint64_t len)\n+{\n+\tstruct user_mem_map *src_first = &src[0];\n+\tstruct user_mem_map *src_last = &src[src_len - 1];\n+\tstruct user_mem_map *dst_first = &newmap[0];\n+\t/* we can get at most two new segments */\n+\tstruct user_mem_map *dst_last = &newmap[1];\n+\tuint64_t first_off = vaddr - src_first->addr;\n+\tuint64_t last_off = (src_last->addr + src_last->len) - (vaddr + len);\n+\tint newmap_len = 0;\n+\n+\tif (first_off != 0) {\n+\t\tdst_first->addr = src_first->addr;\n+\t\tdst_first->iova = src_first->iova;\n+\t\tdst_first->len = first_off;\n+\t\tdst_first->chunk = src_first->chunk;\n+\n+\t\tnewmap_len++;\n+\t}\n+\tif (last_off != 0) {\n+\t\t/* if we had start offset, we have two segments */\n+\t\tstruct user_mem_map *last =\n+\t\t\t\tfirst_off == 0 ? 
dst_first : dst_last;\n+\t\tlast->addr = (src_last->addr + src_last->len) - last_off;\n+\t\tlast->iova = (src_last->iova + src_last->len) - last_off;\n+\t\tlast->len = last_off;\n+\t\tlast->chunk = src_last->chunk;\n+\n+\t\tnewmap_len++;\n+\t}\n+\treturn newmap_len;\n+}\n+\n+/* erase certain maps from the list */\n static void\n-adjust_map(struct user_mem_map *src, struct user_mem_map *end,\n-\t\tuint64_t remove_va_start, uint64_t remove_len)\n-{\n-\t/* if va start is same as start address, we're simply moving start */\n-\tif (remove_va_start == src->addr) {\n-\t\tsrc->addr += remove_len;\n-\t\tsrc->iova += remove_len;\n-\t\tsrc->len -= remove_len;\n-\t} else if (remove_va_start + remove_len == src->addr + src->len) {\n-\t\t/* we're shrinking mapping from the end */\n-\t\tsrc->len -= remove_len;\n-\t} else {\n-\t\t/* we're blowing a hole in the middle */\n-\t\tstruct user_mem_map tmp;\n-\t\tuint64_t total_len = src->len;\n+delete_maps(struct user_mem_maps *user_mem_maps, struct user_mem_map *del_maps,\n+\t\tsize_t n_del)\n+{\n+\tint i;\n+\tsize_t j;\n+\n+\tfor (i = 0, j = 0; i < VFIO_MAX_USER_MEM_MAPS && j < n_del; i++) {\n+\t\tstruct user_mem_map *left = &user_mem_maps->maps[i];\n+\t\tstruct user_mem_map *right = &del_maps[j];\n \n-\t\t/* adjust source segment length */\n-\t\tsrc->len = remove_va_start - src->addr;\n+\t\tif (user_mem_map_cmp(left, right) == 0) {\n+\t\t\tmemset(left, 0, sizeof(*left));\n+\t\t\tj++;\n+\t\t\tuser_mem_maps->n_maps--;\n+\t\t}\n+\t}\n+}\n+\n+static void\n+copy_maps(struct user_mem_maps *user_mem_maps, struct user_mem_map *add_maps,\n+\t\tsize_t n_add)\n+{\n+\tint i;\n+\tsize_t j;\n \n-\t\t/* create temporary segment in the middle */\n-\t\ttmp.addr = src->addr + src->len;\n-\t\ttmp.iova = src->iova + src->len;\n-\t\ttmp.len = remove_len;\n+\tfor (i = 0, j = 0; i < VFIO_MAX_USER_MEM_MAPS && j < n_add; i++) {\n+\t\tstruct user_mem_map *left = &user_mem_maps->maps[i];\n+\t\tstruct user_mem_map *right = &add_maps[j];\n \n-\t\t/* populate end segment - this one we will be keeping */\n-\t\tend->addr = tmp.addr + tmp.len;\n-\t\tend->iova = tmp.iova + tmp.len;\n-\t\tend->len = total_len - src->len - tmp.len;\n+\t\t/* insert into empty space */\n+\t\tif (is_null_map(left)) {\n+\t\t\tmemcpy(left, right, sizeof(*left));\n+\t\t\tj++;\n+\t\t\tuser_mem_maps->n_maps++;\n+\t\t}\n \t}\n }\n \n@@ -179,7 +230,8 @@ merge_map(struct user_mem_map *left, struct user_mem_map *right)\n \t\treturn 0;\n \tif (left->iova + left->len != right->iova)\n \t\treturn 0;\n-\n+\tif (left->chunk != right->chunk)\n+\t\treturn 0;\n \tleft->len += right->len;\n \n out:\n@@ -188,51 +240,94 @@ merge_map(struct user_mem_map *left, struct user_mem_map *right)\n \treturn 1;\n }\n \n-static struct user_mem_map *\n-find_user_mem_map(struct user_mem_maps *user_mem_maps, uint64_t addr,\n-\t\tuint64_t iova, uint64_t len)\n+static bool\n+addr_is_chunk_aligned(struct user_mem_map *maps, size_t n_maps,\n+\t\tuint64_t vaddr, uint64_t iova)\n+{\n+\tunsigned int i;\n+\n+\tfor (i = 0; i < n_maps; i++) {\n+\t\tstruct user_mem_map *map = &maps[i];\n+\t\tuint64_t map_va_end = map->addr + map->len;\n+\t\tuint64_t map_iova_end = map->iova + map->len;\n+\t\tuint64_t map_va_off = vaddr - map->addr;\n+\t\tuint64_t map_iova_off = iova - map->iova;\n+\n+\t\t/* we include end of the segment in comparison as well */\n+\t\tbool addr_in_map = (vaddr >= map->addr) && (vaddr <= map_va_end);\n+\t\tbool iova_in_map = (iova >= map->iova) && (iova <= map_iova_end);\n+\t\t/* chunk may not be power of two, so use modulo */\n+\t\tbool 
addr_is_aligned = (map_va_off % map->chunk) == 0;\n+\t\tbool iova_is_aligned = (map_iova_off % map->chunk) == 0;\n+\n+\t\tif (addr_in_map && iova_in_map &&\n+\t\t\t\taddr_is_aligned && iova_is_aligned)\n+\t\t\treturn true;\n+\t}\n+\treturn false;\n+}\n+\n+static int\n+find_user_mem_maps(struct user_mem_maps *user_mem_maps, uint64_t addr,\n+\t\tuint64_t iova, uint64_t len, struct user_mem_map *dst,\n+\t\tsize_t dst_len)\n {\n \tuint64_t va_end = addr + len;\n \tuint64_t iova_end = iova + len;\n-\tint i;\n+\tbool found = false;\n+\tsize_t j;\n+\tint i, ret;\n \n-\tfor (i = 0; i < user_mem_maps->n_maps; i++) {\n+\tfor (i = 0, j = 0; i < user_mem_maps->n_maps; i++) {\n \t\tstruct user_mem_map *map = &user_mem_maps->maps[i];\n \t\tuint64_t map_va_end = map->addr + map->len;\n \t\tuint64_t map_iova_end = map->iova + map->len;\n \n-\t\t/* check start VA */\n-\t\tif (addr < map->addr || addr >= map_va_end)\n-\t\t\tcontinue;\n-\t\t/* check if VA end is within boundaries */\n-\t\tif (va_end <= map->addr || va_end > map_va_end)\n-\t\t\tcontinue;\n-\n-\t\t/* check start IOVA */\n-\t\tif (iova < map->iova || iova >= map_iova_end)\n-\t\t\tcontinue;\n-\t\t/* check if IOVA end is within boundaries */\n-\t\tif (iova_end <= map->iova || iova_end > map_iova_end)\n-\t\t\tcontinue;\n-\n-\t\t/* we've found our map */\n-\t\treturn map;\n+\t\tbool start_addr_in_map = (addr >= map->addr) &&\n+\t\t\t\t(addr < map_va_end);\n+\t\tbool end_addr_in_map = (va_end > map->addr) &&\n+\t\t\t\t(va_end <= map_va_end);\n+\t\tbool start_iova_in_map = (iova >= map->iova) &&\n+\t\t\t\t(iova < map_iova_end);\n+\t\tbool end_iova_in_map = (iova_end > map->iova) &&\n+\t\t\t\t(iova_end <= map_iova_end);\n+\n+\t\t/* do we have space in temporary map? */\n+\t\tif (j == dst_len) {\n+\t\t\tret = -ENOSPC;\n+\t\t\tgoto err;\n+\t\t}\n+\t\t/* check if current map is start of our segment */\n+\t\tif (!found && start_addr_in_map && start_iova_in_map)\n+\t\t\tfound = true;\n+\t\t/* if we have previously found a segment, add it to the map */\n+\t\tif (found) {\n+\t\t\t/* copy the segment into our temporary map */\n+\t\t\tmemcpy(&dst[j++], map, sizeof(*map));\n+\n+\t\t\t/* if we match end of segment, quit */\n+\t\t\tif (end_addr_in_map && end_iova_in_map)\n+\t\t\t\treturn j;\n+\t\t}\n \t}\n-\treturn NULL;\n+\t/* we didn't find anything */\n+\tret = -ENOENT;\n+err:\n+\tmemset(dst, 0, sizeof(*dst) * dst_len);\n+\treturn ret;\n }\n \n /* this will sort all user maps, and merge/compact any adjacent maps */\n static void\n compact_user_maps(struct user_mem_maps *user_mem_maps)\n {\n-\tint i, n_merged, cur_idx;\n+\tint i;\n \n-\tqsort(user_mem_maps->maps, user_mem_maps->n_maps,\n+\tqsort(user_mem_maps->maps, VFIO_MAX_USER_MEM_MAPS,\n \t\t\tsizeof(user_mem_maps->maps[0]), user_mem_map_cmp);\n \n \t/* we'll go over the list backwards when merging */\n-\tn_merged = 0;\n-\tfor (i = user_mem_maps->n_maps - 2; i >= 0; i--) {\n+\tfor (i = VFIO_MAX_USER_MEM_MAPS - 2; i >= 0; i--) {\n \t\tstruct user_mem_map *l, *r;\n \n \t\tl = &user_mem_maps->maps[i];\n@@ -241,30 +336,16 @@ compact_user_maps(struct user_mem_maps *user_mem_maps)\n \t\tif (is_null_map(l) || is_null_map(r))\n \t\t\tcontinue;\n \n+\t\t/* try and merge the maps */\n \t\tif (merge_map(l, r))\n-\t\t\tn_merged++;\n+\t\t\tuser_mem_maps->n_maps--;\n \t}\n \n \t/* the entries are still sorted, but now they have holes in them, so\n-\t * walk through the list and remove the holes\n+\t * sort the list again.\n \t */\n-\tif (n_merged > 0) {\n-\t\tcur_idx = 0;\n-\t\tfor (i = 0; i < user_mem_maps->n_maps; i++) 
{\n-\t\t\tif (!is_null_map(&user_mem_maps->maps[i])) {\n-\t\t\t\tstruct user_mem_map *src, *dst;\n-\n-\t\t\t\tsrc = &user_mem_maps->maps[i];\n-\t\t\t\tdst = &user_mem_maps->maps[cur_idx++];\n-\n-\t\t\t\tif (src != dst) {\n-\t\t\t\t\tmemcpy(dst, src, sizeof(*src));\n-\t\t\t\t\tmemset(src, 0, sizeof(*src));\n-\t\t\t\t}\n-\t\t\t}\n-\t\t}\n-\t\tuser_mem_maps->n_maps = cur_idx;\n-\t}\n+\tqsort(user_mem_maps->maps, VFIO_MAX_USER_MEM_MAPS,\n+\t\t\tsizeof(user_mem_maps->maps[0]), user_mem_map_cmp);\n }\n \n static int\n@@ -1795,6 +1876,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,\n {\n \tstruct user_mem_map *new_map;\n \tstruct user_mem_maps *user_mem_maps;\n+\tbool has_partial_unmap;\n \tint ret = 0;\n \n \tuser_mem_maps = &vfio_cfg->mem_maps;\n@@ -1818,11 +1900,16 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,\n \t\tret = -1;\n \t\tgoto out;\n \t}\n+\t/* do we have partial unmap support? */\n+\thas_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;\n+\n \t/* create new user mem map entry */\n \tnew_map = &user_mem_maps->maps[user_mem_maps->n_maps++];\n \tnew_map->addr = vaddr;\n \tnew_map->iova = iova;\n \tnew_map->len = len;\n+\t/* for IOMMU types supporting partial unmap, we don't need chunking */\n+\tnew_map->chunk = has_partial_unmap ? 0 : len;\n \n \tcompact_user_maps(user_mem_maps);\n out:\n@@ -1834,38 +1921,81 @@ static int\n container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,\n \t\tuint64_t len)\n {\n-\tstruct user_mem_map *map, *new_map = NULL;\n+\tstruct user_mem_map orig_maps[VFIO_MAX_USER_MEM_MAPS];\n+\tstruct user_mem_map new_maps[2]; /* can be at most 2 */\n \tstruct user_mem_maps *user_mem_maps;\n-\tint ret = 0;\n+\tint n_orig, n_new, newlen, ret = 0;\n+\tbool has_partial_unmap;\n \n \tuser_mem_maps = &vfio_cfg->mem_maps;\n \trte_spinlock_recursive_lock(&user_mem_maps->lock);\n \n-\t/* find our mapping */\n-\tmap = find_user_mem_map(user_mem_maps, vaddr, iova, len);\n-\tif (!map) {\n+\t/*\n+\t * Previously, we had adjacent mappings entirely contained within one\n+\t * mapping entry. Since we now store original mapping length in some\n+\t * cases, this is no longer the case, so unmapping can potentially go\n+\t * over multiple segments and split them in any number of ways.\n+\t *\n+\t * To complicate things further, some IOMMU types support arbitrary\n+\t * partial unmapping, while others will only support unmapping along the\n+\t * chunk size, so there are a lot of cases we need to handle. To make\n+\t * things easier code wise, instead of trying to adjust existing\n+\t * mappings, let's just rebuild them using information we have.\n+\t */\n+\n+\t/* do we have partial unmap capability? */\n+\thas_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;\n+\n+\t/*\n+\t * first thing to do is check if there exists a mapping that includes\n+\t * the start and the end of our requested unmap. We need to collect all\n+\t * maps that include our unmapped region.\n+\t */\n+\tn_orig = find_user_mem_maps(user_mem_maps, vaddr, iova, len,\n+\t\t\torig_maps, RTE_DIM(orig_maps));\n+\t/* did we find anything? 
*/\n+\tif (n_orig < 0) {\n \t\tRTE_LOG(ERR, EAL, \"Couldn't find previously mapped region\\n\");\n \t\trte_errno = EINVAL;\n \t\tret = -1;\n \t\tgoto out;\n \t}\n-\tif (map->addr != vaddr || map->iova != iova || map->len != len) {\n-\t\t/* we're partially unmapping a previously mapped region, so we\n-\t\t * need to split entry into two.\n-\t\t */\n-\t\tif (!vfio_cfg->vfio_iommu_type->partial_unmap) {\n+\n+\t/*\n+\t * if we don't support partial unmap, we must check if start and end of\n+\t * current unmap region are chunk-aligned.\n+\t */\n+\tif (!has_partial_unmap) {\n+\t\tbool start_aligned, end_aligned;\n+\n+\t\tstart_aligned = addr_is_chunk_aligned(orig_maps, n_orig,\n+\t\t\t\tvaddr, iova);\n+\t\tend_aligned = addr_is_chunk_aligned(orig_maps, n_orig,\n+\t\t\t\tvaddr + len, iova + len);\n+\n+\t\tif (!start_aligned || !end_aligned) {\n \t\t\tRTE_LOG(DEBUG, EAL, \"DMA partial unmap unsupported\\n\");\n \t\t\trte_errno = ENOTSUP;\n \t\t\tret = -1;\n \t\t\tgoto out;\n \t\t}\n-\t\tif (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {\n-\t\t\tRTE_LOG(ERR, EAL, \"Not enough space to store partial mapping\\n\");\n-\t\t\trte_errno = ENOMEM;\n-\t\t\tret = -1;\n-\t\t\tgoto out;\n-\t\t}\n-\t\tnew_map = &user_mem_maps->maps[user_mem_maps->n_maps++];\n+\t}\n+\n+\t/*\n+\t * now we know we can potentially unmap the region, but we still have to\n+\t * figure out if there is enough space in our list to store remaining\n+\t * maps. for this, we will figure out how many segments we are going to\n+\t * remove, and how many new segments we are going to create.\n+\t */\n+\tn_new = process_maps(orig_maps, n_orig, new_maps, vaddr, len);\n+\n+\t/* can we store the new maps in our list? */\n+\tnewlen = (user_mem_maps->n_maps - n_orig) + n_new;\n+\tif (newlen >= VFIO_MAX_USER_MEM_MAPS) {\n+\t\tRTE_LOG(ERR, EAL, \"Not enough space to store partial mapping\\n\");\n+\t\trte_errno = ENOMEM;\n+\t\tret = -1;\n+\t\tgoto out;\n \t}\n \n \t/* unmap the entry */\n@@ -1886,23 +2016,11 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,\n \t\t\tRTE_LOG(DEBUG, EAL, \"DMA unmapping failed, but removing mappings anyway\\n\");\n \t\t}\n \t}\n-\t/* remove map from the list of active mappings */\n-\tif (new_map != NULL) {\n-\t\tadjust_map(map, new_map, vaddr, len);\n-\n-\t\t/* if we've created a new map by splitting, sort everything */\n-\t\tif (!is_null_map(new_map)) {\n-\t\t\tcompact_user_maps(user_mem_maps);\n-\t\t} else {\n-\t\t\t/* we've created a new mapping, but it was unused */\n-\t\t\tuser_mem_maps->n_maps--;\n-\t\t}\n-\t} else {\n-\t\tmemset(map, 0, sizeof(*map));\n-\t\tcompact_user_maps(user_mem_maps);\n-\t\tuser_mem_maps->n_maps--;\n-\t}\n \n+\t/* we have unmapped the region, so now update the maps */\n+\tdelete_maps(user_mem_maps, orig_maps, n_orig);\n+\tcopy_maps(user_mem_maps, new_maps, n_new);\n+\tcompact_user_maps(user_mem_maps);\n out:\n \trte_spinlock_recursive_unlock(&user_mem_maps->lock);\n \treturn ret;\n",
    "prefixes": [
        "v3",
        "1/2"
    ]
}
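
The commit message above describes the case this patch addresses: two separately created mappings that happen to be VA/IOVA adjacent were merged into one entry, so on IOMMU types without partial-unmap support the first region could no longer be unmapped on its own. A minimal sketch of that usage pattern from an application's point of view; the addresses, lengths, and helper function are illustrative assumptions, while rte_vfio_container_dma_map()/rte_vfio_container_dma_unmap() and RTE_VFIO_DEFAULT_CONTAINER_FD are the public DPDK VFIO API:

#include <stdint.h>
#include <rte_vfio.h>

/* Illustrative sketch only; real code would use actual buffer addresses
 * and check the return values of each call.
 */
static void
map_then_unmap_adjacent(uint64_t va_a, uint64_t iova_a,
		uint64_t len_a, uint64_t len_b)
{
	/* Map two separately created regions that happen to be adjacent. */
	rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
			va_a, iova_a, len_a);
	rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
			va_a + len_a, iova_a + len_a, len_b);

	/*
	 * Unmap only the first region. Before this patch the two entries
	 * were merged into one mapping, so IOMMU types without partial-unmap
	 * support rejected this; with the per-entry chunk size the request
	 * stays chunk-aligned and is accepted.
	 */
	rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
			va_a, iova_a, len_a);
}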