get:
Show a patch.

patch:
Partially update a patch; only the fields supplied in the request body are changed.

put:
Update a patch.
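
As a rough illustration, the patch shown below can be read, and (with suitable permissions) updated, through these endpoints. This is a minimal sketch assuming the third-party Python requests library; the "Authorization: Token ..." header and the exact set of writable fields ("state", "archived") are assumptions based on typical Patchwork deployments, not confirmed by this page.

import requests

BASE = "https://patches.dpdk.org/api"
PATCH_ID = 83715

# Read the patch (GET /api/patches/{id}/).
resp = requests.get(f"{BASE}/patches/{PATCH_ID}/")
resp.raise_for_status()
patch = resp.json()
print(patch["name"], patch["state"], patch["check"])

# Update the patch (PATCH /api/patches/{id}/) -- requires authentication.
# The token header and the writable "state"/"archived" fields are assumptions.
token = "YOUR_API_TOKEN"  # hypothetical placeholder
resp = requests.patch(
    f"{BASE}/patches/{PATCH_ID}/",
    headers={"Authorization": f"Token {token}"},
    json={"state": "accepted", "archived": False},
)
print(resp.status_code, resp.json())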

GET /api/patches/83715/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 83715,
    "url": "https://patches.dpdk.org/api/patches/83715/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/20201105090423.11954-3-ndabilpuram@marvell.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20201105090423.11954-3-ndabilpuram@marvell.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20201105090423.11954-3-ndabilpuram@marvell.com",
    "date": "2020-11-05T09:04:22",
    "name": "[v2,2/3] vfio: fix DMA mapping granularity for type1 iova as va",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "2cff3722e525f3d009fea30a1e1023d8c5c859e0",
    "submitter": {
        "id": 1202,
        "url": "https://patches.dpdk.org/api/people/1202/?format=api",
        "name": "Nithin Dabilpuram",
        "email": "ndabilpuram@marvell.com"
    },
    "delegate": {
        "id": 24651,
        "url": "https://patches.dpdk.org/api/users/24651/?format=api",
        "username": "dmarchand",
        "first_name": "David",
        "last_name": "Marchand",
        "email": "david.marchand@redhat.com"
    },
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/20201105090423.11954-3-ndabilpuram@marvell.com/mbox/",
    "series": [
        {
            "id": 13683,
            "url": "https://patches.dpdk.org/api/series/13683/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=13683",
            "date": "2020-11-05T09:04:20",
            "name": "fix issue with partial DMA unmap",
            "version": 2,
            "mbox": "https://patches.dpdk.org/series/13683/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/83715/comments/",
    "check": "success",
    "checks": "https://patches.dpdk.org/api/patches/83715/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 917AFA04B1;\n\tThu,  5 Nov 2020 10:05:34 +0100 (CET)",
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 2AC55BC66;\n\tThu,  5 Nov 2020 10:04:41 +0100 (CET)",
            "from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com\n [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 4F393AAB7;\n Thu,  5 Nov 2020 10:04:36 +0100 (CET)",
            "from pps.filterd (m0045849.ppops.net [127.0.0.1])\n by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id\n 0A58xxIM028587; Thu, 5 Nov 2020 01:04:34 -0800",
            "from sc-exch01.marvell.com ([199.233.58.181])\n by mx0a-0016f401.pphosted.com with ESMTP id 34mbfcrndt-1\n (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT);\n Thu, 05 Nov 2020 01:04:34 -0800",
            "from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH01.marvell.com\n (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2;\n Thu, 5 Nov 2020 01:04:33 -0800",
            "from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com\n (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2;\n Thu, 5 Nov 2020 01:04:32 -0800",
            "from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com\n (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend\n Transport; Thu, 5 Nov 2020 01:04:32 -0800",
            "from hyd1588t430.marvell.com (unknown [10.29.52.204])\n by maili.marvell.com (Postfix) with ESMTP id C484D3F7040;\n Thu,  5 Nov 2020 01:04:30 -0800 (PST)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;\n h=from : to : cc :\n subject : date : message-id : in-reply-to : references : mime-version :\n content-type; s=pfpt0220; bh=uNrmDvN5a5JoiezgNQ8iNd0RTWqGtKVdVXaANTdd9YM=;\n b=cxAbdJUblbG2Ebgr8ewQRQHfYWO6wzc11O0lPuas4/XxObqWN35RL5I62WYsyClZxBql\n qfdvI7jtX1e1QZgqGsQammIMBJrZi4WRxULroJdBCQp7cnhH3oCuQfa+JGjmoTBe2ZTV\n mRXh6F9BflkUT2T1r6SnG0aDFXJ7bLWnGCxY8kvnUIQiade3jggqBCzPXCo61JSMLhAg\n hcqUxLkLiyoUUFWGsyrbkms3fmqEk4L0FW+QDWb3pQr0UmAv/QKvV8ZMOm7zW8w51iIc\n MsfIfTtiGj1hq7XoAlrbzg8XhRD6Vn4WcJLQFX5J5TPtrW6y8IHeEuLnPnlMBTBUOdXE UA==",
        "From": "Nithin Dabilpuram <ndabilpuram@marvell.com>",
        "To": "<anatoly.burakov@intel.com>",
        "CC": "<jerinj@marvell.com>, <dev@dpdk.org>, Nithin Dabilpuram\n <ndabilpuram@marvell.com>, <stable@dpdk.org>",
        "Date": "Thu, 5 Nov 2020 14:34:22 +0530",
        "Message-ID": "<20201105090423.11954-3-ndabilpuram@marvell.com>",
        "X-Mailer": "git-send-email 2.8.4",
        "In-Reply-To": "<20201105090423.11954-1-ndabilpuram@marvell.com>",
        "References": "<20201012081106.10610-1-ndabilpuram@marvell.com>\n <20201105090423.11954-1-ndabilpuram@marvell.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-Proofpoint-Virus-Version": "vendor=fsecure engine=2.50.10434:6.0.312, 18.0.737\n definitions=2020-11-05_05:2020-11-05,\n 2020-11-05 signatures=0",
        "Subject": "[dpdk-dev] [PATCH v2 2/3] vfio: fix DMA mapping granularity for\n\ttype1 iova as va",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Partial unmapping is not supported for VFIO IOMMU type1\nby kernel. Though kernel gives return as zero, the unmapped size\nreturned will not be same as expected. So check for\nreturned unmap size and return error.\n\nFor IOVA as PA, DMA mapping is already at memseg size\ngranularity. Do the same even for IOVA as VA mode as\nDMA map/unmap triggered by heap allocations,\nmaintain granularity of memseg page size so that heap\nexpansion and contraction does not have this issue.\n\nFor user requested DMA map/unmap disallow partial unmapping\nfor VFIO type1.\n\nFixes: 73a639085938 (\"vfio: allow to map other memory regions\")\nCc: anatoly.burakov@intel.com\nCc: stable@dpdk.org\n\nSigned-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>\n---\n lib/librte_eal/linux/eal_vfio.c | 34 ++++++++++++++++++++++++++++------\n lib/librte_eal/linux/eal_vfio.h |  1 +\n 2 files changed, 29 insertions(+), 6 deletions(-)",
    "diff": "diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c\nindex dbefcba..b4f9c33 100644\n--- a/lib/librte_eal/linux/eal_vfio.c\n+++ b/lib/librte_eal/linux/eal_vfio.c\n@@ -69,6 +69,7 @@ static const struct vfio_iommu_type iommu_types[] = {\n \t{\n \t\t.type_id = RTE_VFIO_TYPE1,\n \t\t.name = \"Type 1\",\n+\t\t.partial_unmap = false,\n \t\t.dma_map_func = &vfio_type1_dma_map,\n \t\t.dma_user_map_func = &vfio_type1_dma_mem_map\n \t},\n@@ -76,6 +77,7 @@ static const struct vfio_iommu_type iommu_types[] = {\n \t{\n \t\t.type_id = RTE_VFIO_SPAPR,\n \t\t.name = \"sPAPR\",\n+\t\t.partial_unmap = true,\n \t\t.dma_map_func = &vfio_spapr_dma_map,\n \t\t.dma_user_map_func = &vfio_spapr_dma_mem_map\n \t},\n@@ -83,6 +85,7 @@ static const struct vfio_iommu_type iommu_types[] = {\n \t{\n \t\t.type_id = RTE_VFIO_NOIOMMU,\n \t\t.name = \"No-IOMMU\",\n+\t\t.partial_unmap = true,\n \t\t.dma_map_func = &vfio_noiommu_dma_map,\n \t\t.dma_user_map_func = &vfio_noiommu_dma_mem_map\n \t},\n@@ -525,12 +528,19 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,\n \t/* for IOVA as VA mode, no need to care for IOVA addresses */\n \tif (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {\n \t\tuint64_t vfio_va = (uint64_t)(uintptr_t)addr;\n-\t\tif (type == RTE_MEM_EVENT_ALLOC)\n-\t\t\tvfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,\n-\t\t\t\t\tlen, 1);\n-\t\telse\n-\t\t\tvfio_dma_mem_map(default_vfio_cfg, vfio_va, vfio_va,\n-\t\t\t\t\tlen, 0);\n+\t\tuint64_t page_sz = msl->page_sz;\n+\n+\t\t/* Maintain granularity of DMA map/unmap to memseg size */\n+\t\tfor (; cur_len < len; cur_len += page_sz) {\n+\t\t\tif (type == RTE_MEM_EVENT_ALLOC)\n+\t\t\t\tvfio_dma_mem_map(default_vfio_cfg, vfio_va,\n+\t\t\t\t\t\t vfio_va, page_sz, 1);\n+\t\t\telse\n+\t\t\t\tvfio_dma_mem_map(default_vfio_cfg, vfio_va,\n+\t\t\t\t\t\t vfio_va, page_sz, 0);\n+\t\t\tvfio_va += page_sz;\n+\t\t}\n+\n \t\treturn;\n \t}\n \n@@ -1369,6 +1379,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,\n \t\t\tRTE_LOG(ERR, EAL, \"  cannot clear DMA remapping, error %i (%s)\\n\",\n \t\t\t\t\terrno, strerror(errno));\n \t\t\treturn -1;\n+\t\t} else if (dma_unmap.size != len) {\n+\t\t\tRTE_LOG(ERR, EAL, \"  unexpected size %\"PRIu64\" of DMA \"\n+\t\t\t\t\"remapping cleared instead of %\"PRIu64\"\\n\",\n+\t\t\t\t(uint64_t)dma_unmap.size, len);\n+\t\t\trte_errno = EIO;\n+\t\t\treturn -1;\n \t\t}\n \t}\n \n@@ -1839,6 +1855,12 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,\n \t\t/* we're partially unmapping a previously mapped region, so we\n \t\t * need to split entry into two.\n \t\t */\n+\t\tif (!vfio_cfg->vfio_iommu_type->partial_unmap) {\n+\t\t\tRTE_LOG(DEBUG, EAL, \"DMA partial unmap unsupported\\n\");\n+\t\t\trte_errno = ENOTSUP;\n+\t\t\tret = -1;\n+\t\t\tgoto out;\n+\t\t}\n \t\tif (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {\n \t\t\tRTE_LOG(ERR, EAL, \"Not enough space to store partial mapping\\n\");\n \t\t\trte_errno = ENOMEM;\ndiff --git a/lib/librte_eal/linux/eal_vfio.h b/lib/librte_eal/linux/eal_vfio.h\nindex cb2d35f..6ebaca6 100644\n--- a/lib/librte_eal/linux/eal_vfio.h\n+++ b/lib/librte_eal/linux/eal_vfio.h\n@@ -113,6 +113,7 @@ typedef int (*vfio_dma_user_func_t)(int fd, uint64_t vaddr, uint64_t iova,\n struct vfio_iommu_type {\n \tint type_id;\n \tconst char *name;\n+\tbool partial_unmap;\n \tvfio_dma_user_func_t dma_user_map_func;\n \tvfio_dma_func_t dma_map_func;\n };\n",
    "prefixes": [
        "v2",
        "2/3"
    ]
}
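
The "mbox" URL in the response serves the patch as a raw mbox file, which is the most convenient form for applying it to a tree. A minimal sketch, again assuming the requests library (the output filename is arbitrary):

import requests

# "mbox" URL taken verbatim from the response above.
MBOX_URL = ("https://patches.dpdk.org/project/dpdk/patch/"
            "20201105090423.11954-3-ndabilpuram@marvell.com/mbox/")

resp = requests.get(MBOX_URL)
resp.raise_for_status()

# Save the raw mbox; it can then be applied with `git am` in a DPDK checkout.
with open("vfio-fix-dma-mapping-granularity.mbox", "wb") as f:
    f.write(resp.content)

The same pattern works for the series-level mbox URL (https://patches.dpdk.org/series/13683/mbox/), which bundles all patches of the v2 series "fix issue with partial DMA unmap".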