get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch (a full update of all writable fields).
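
The read path needs no authentication. As a minimal sketch of fetching this endpoint (assuming the third-party Python `requests` library; `format=json` selects the raw JSON rendering instead of this browsable HTML view):

import requests

# Fetch the patch detail shown below as plain JSON.
resp = requests.get("https://patches.dpdk.org/api/patches/32466/",
                    params={"format": "json"})
resp.raise_for_status()
patch = resp.json()

print(patch["name"])   # "[dpdk-dev,RFC,v2,21/23] mempool: add support ..."
print(patch["state"])  # "superseded"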

GET /api/patches/32466/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 32466,
    "url": "https://patches.dpdk.org/api/patches/32466/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/cbef4f6b4af3ea8a3b29b13c39c1c0c91c793291.1513681966.git.anatoly.burakov@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<cbef4f6b4af3ea8a3b29b13c39c1c0c91c793291.1513681966.git.anatoly.burakov@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/cbef4f6b4af3ea8a3b29b13c39c1c0c91c793291.1513681966.git.anatoly.burakov@intel.com",
    "date": "2017-12-19T11:14:48",
    "name": "[dpdk-dev,RFC,v2,21/23] mempool: add support for the new memory allocation methods",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "daeab81abdc8293e40b25a1f4b6cda8886e336ac",
    "submitter": {
        "id": 4,
        "url": "https://patches.dpdk.org/api/people/4/?format=api",
        "name": "Anatoly Burakov",
        "email": "anatoly.burakov@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/cbef4f6b4af3ea8a3b29b13c39c1c0c91c793291.1513681966.git.anatoly.burakov@intel.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/32466/comments/",
    "check": "fail",
    "checks": "https://patches.dpdk.org/api/patches/32466/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 2B1691B1B0;\n\tTue, 19 Dec 2017 12:15:21 +0100 (CET)",
            "from mga09.intel.com (mga09.intel.com [134.134.136.24])\n\tby dpdk.org (Postfix) with ESMTP id 9028A1B019\n\tfor <dev@dpdk.org>; Tue, 19 Dec 2017 12:14:56 +0100 (CET)",
            "from fmsmga002.fm.intel.com ([10.253.24.26])\n\tby orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t19 Dec 2017 03:14:55 -0800",
            "from irvmail001.ir.intel.com ([163.33.26.43])\n\tby fmsmga002.fm.intel.com with ESMTP; 19 Dec 2017 03:14:54 -0800",
            "from sivswdev01.ir.intel.com (sivswdev01.ir.intel.com\n\t[10.237.217.45])\n\tby irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id\n\tvBJBErgw003151; Tue, 19 Dec 2017 11:14:53 GMT",
            "from sivswdev01.ir.intel.com (localhost [127.0.0.1])\n\tby sivswdev01.ir.intel.com with ESMTP id vBJBErAp010334;\n\tTue, 19 Dec 2017 11:14:53 GMT",
            "(from aburakov@localhost)\n\tby sivswdev01.ir.intel.com with LOCAL id vBJBErkJ010330;\n\tTue, 19 Dec 2017 11:14:53 GMT"
        ],
        "X-Amp-Result": "SKIPPED(no attachment in message)",
        "X-Amp-File-Uploaded": "False",
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.45,426,1508828400\"; d=\"scan'208\";a=\"3023086\"",
        "From": "Anatoly Burakov <anatoly.burakov@intel.com>",
        "To": "dev@dpdk.org",
        "Cc": "andras.kovacs@ericsson.com, laszlo.vadkeri@ericsson.com,\n\tkeith.wiles@intel.com, benjamin.walker@intel.com,\n\tbruce.richardson@intel.com, thomas@monjalon.net",
        "Date": "Tue, 19 Dec 2017 11:14:48 +0000",
        "Message-Id": "<cbef4f6b4af3ea8a3b29b13c39c1c0c91c793291.1513681966.git.anatoly.burakov@intel.com>",
        "X-Mailer": "git-send-email 1.7.0.7",
        "In-Reply-To": [
            "<cover.1513681966.git.anatoly.burakov@intel.com>",
            "<cover.1513681966.git.anatoly.burakov@intel.com>"
        ],
        "References": [
            "<cover.1513681966.git.anatoly.burakov@intel.com>",
            "<cover.1513681966.git.anatoly.burakov@intel.com>"
        ],
        "Subject": "[dpdk-dev] [RFC v2 21/23] mempool: add support for the new memory\n\tallocation methods",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "If a user has specified that the zone should have contiguous memory,\nuse the new _contig allocation API's instead of normal ones.\nOtherwise, account for the fact that unless we're in IOVA_AS_VA\nmode, we cannot guarantee that the pages would be physically\ncontiguous, so we calculate the memzone size and alignments as if\nwe were getting the smallest page size available.\n\nSigned-off-by: Anatoly Burakov <anatoly.burakov@intel.com>\n---\n lib/librte_mempool/rte_mempool.c | 84 +++++++++++++++++++++++++++++++++++-----\n 1 file changed, 75 insertions(+), 9 deletions(-)",
    "diff": "diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c\nindex d50dba4..4b9ab22 100644\n--- a/lib/librte_mempool/rte_mempool.c\n+++ b/lib/librte_mempool/rte_mempool.c\n@@ -127,6 +127,26 @@ static unsigned optimize_object_size(unsigned obj_size)\n \treturn new_obj_size * RTE_MEMPOOL_ALIGN;\n }\n \n+static size_t\n+get_min_page_size(void) {\n+\tconst struct rte_mem_config *mcfg =\n+\t\t\trte_eal_get_configuration()->mem_config;\n+\tint i;\n+\tsize_t min_pagesz = SIZE_MAX;\n+\n+\tfor (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {\n+\t\tconst struct rte_memseg_list *msl = &mcfg->memsegs[i];\n+\n+\t\tif (msl->base_va == NULL)\n+\t\t\tcontinue;\n+\n+\t\tif (msl->hugepage_sz < min_pagesz)\n+\t\t\tmin_pagesz = msl->hugepage_sz;\n+\t}\n+\n+\treturn min_pagesz == SIZE_MAX ? (size_t) getpagesize() : min_pagesz;\n+}\n+\n static void\n mempool_add_elem(struct rte_mempool *mp, void *obj, rte_iova_t iova)\n {\n@@ -568,6 +588,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)\n \tunsigned mz_id, n;\n \tunsigned int mp_flags;\n \tint ret;\n+\tbool force_contig, no_contig;\n \n \t/* mempool must not be populated */\n \tif (mp->nb_mem_chunks != 0)\n@@ -582,10 +603,46 @@ rte_mempool_populate_default(struct rte_mempool *mp)\n \t/* update mempool capabilities */\n \tmp->flags |= mp_flags;\n \n-\tif (rte_eal_has_hugepages()) {\n-\t\tpg_shift = 0; /* not needed, zone is physically contiguous */\n+\tno_contig = mp->flags & MEMPOOL_F_NO_PHYS_CONTIG;\n+\tforce_contig = mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG;\n+\n+\t/*\n+\t * there are several considerations for page size and page shift here.\n+\t *\n+\t * if we don't need our mempools to have physically contiguous objects,\n+\t * then just set page shift and page size to 0, because the user has\n+\t * indicated that there's no need to care about anything.\n+\t *\n+\t * if we do need contiguous objects, there is also an option to reserve\n+\t * the entire mempool memory as one contiguous block of memory, in\n+\t * which case the page shift and alignment wouldn't matter as well.\n+\t *\n+\t * if we require contiguous objects, but not necessarily the entire\n+\t * mempool reserved space to be contiguous, then there are two options.\n+\t *\n+\t * if our IO addresses are virtual, not actual physical (IOVA as VA\n+\t * case), then no page shift needed - our memory allocation will give us\n+\t * contiguous physical memory as far as the hardware is concerned, so\n+\t * act as if we're getting contiguous memory.\n+\t *\n+\t * if our IO addresses are physical, we may get memory from bigger\n+\t * pages, or we might get memory from smaller pages, and how much of it\n+\t * we require depends on whether we want bigger or smaller pages.\n+\t * However, requesting each and every memory size is too much work, so\n+\t * what we'll do instead is walk through the page sizes available, pick\n+\t * the smallest one and set up page shift to match that one. 
We will be\n+\t * wasting some space this way, but it's much nicer than looping around\n+\t * trying to reserve each and every page size.\n+\t */\n+\n+\tif (no_contig || force_contig || rte_eal_iova_mode() == RTE_IOVA_VA) {\n \t\tpg_sz = 0;\n+\t\tpg_shift = 0;\n \t\talign = RTE_CACHE_LINE_SIZE;\n+\t} else if (rte_eal_has_hugepages()) {\n+\t\tpg_sz = get_min_page_size();\n+\t\tpg_shift = rte_bsf32(pg_sz);\n+\t\talign = pg_sz;\n \t} else {\n \t\tpg_sz = getpagesize();\n \t\tpg_shift = rte_bsf32(pg_sz);\n@@ -604,23 +661,32 @@ rte_mempool_populate_default(struct rte_mempool *mp)\n \t\t\tgoto fail;\n \t\t}\n \n-\t\tmz = rte_memzone_reserve_aligned(mz_name, size,\n-\t\t\tmp->socket_id, mz_flags, align);\n-\t\t/* not enough memory, retry with the biggest zone we have */\n-\t\tif (mz == NULL)\n-\t\t\tmz = rte_memzone_reserve_aligned(mz_name, 0,\n+\t\tif (force_contig) {\n+\t\t\t/*\n+\t\t\t * if contiguous memory for entire mempool memory was\n+\t\t\t * requested, don't try reserving again if we fail.\n+\t\t\t */\n+\t\t\tmz = rte_memzone_reserve_aligned_contig(mz_name, size,\n+\t\t\t\tmp->socket_id, mz_flags, align);\n+\t\t} else {\n+\t\t\tmz = rte_memzone_reserve_aligned(mz_name, size,\n \t\t\t\tmp->socket_id, mz_flags, align);\n+\t\t\t/* not enough memory, retry with the biggest zone we have */\n+\t\t\tif (mz == NULL)\n+\t\t\t\tmz = rte_memzone_reserve_aligned(mz_name, 0,\n+\t\t\t\t\tmp->socket_id, mz_flags, align);\n+\t\t}\n \t\tif (mz == NULL) {\n \t\t\tret = -rte_errno;\n \t\t\tgoto fail;\n \t\t}\n \n-\t\tif (mp->flags & MEMPOOL_F_NO_PHYS_CONTIG)\n+\t\tif (no_contig)\n \t\t\tiova = RTE_BAD_IOVA;\n \t\telse\n \t\t\tiova = mz->iova;\n \n-\t\tif (rte_eal_has_hugepages())\n+\t\tif (rte_eal_has_hugepages() && force_contig)\n \t\t\tret = rte_mempool_populate_iova(mp, mz->addr,\n \t\t\t\tiova, mz->len,\n \t\t\t\trte_mempool_memchunk_mz_free,\n",
    "prefixes": [
        "dpdk-dev",
        "RFC",
        "v2",
        "21/23"
    ]
}
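
Writes (the put/patch methods above) require authentication; Patchwork's REST API accepts token authentication via an Authorization header. A minimal sketch of a partial update, assuming the `requests` library, a valid API token (the value below is a placeholder, generated from your Patchwork user profile), and maintainer rights on the project:

import requests

TOKEN = "0123456789abcdef"  # placeholder; substitute a real API token

# PATCH changes only the fields included in the payload,
# e.g. moving the patch to a new state and un-archiving it.
resp = requests.patch(
    "https://patches.dpdk.org/api/patches/32466/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"state": "accepted", "archived": False},
)
resp.raise_for_status()
print(resp.json()["state"])

Most fields in the response above (headers, diff, checks) are read-only; only moderation fields such as state, archived, and delegate are writable. The mbox URL in the response can also be fetched directly and applied to a tree with git am.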