get:
Show a patch.

patch:
Partially update a patch (only the fields supplied are changed).

put:
Update a patch (a full update; all writable fields are replaced).
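The JSON returned by the `get` endpoint can be consumed with nothing but the standard library. A minimal sketch: the `patch_summary` helper is mine, not part of the API, and the stub string is a hand-trimmed copy of the response body shown below so the example runs without network access.

```python
import json

def patch_summary(patch: dict) -> str:
    """Condense the fields a reviewer usually needs into one line."""
    return "{id}: {name} [{state}]{archived}".format(
        id=patch["id"],
        name=patch["name"],
        state=patch["state"],
        archived=" (archived)" if patch["archived"] else "",
    )

# Stub of the response body below, trimmed to the fields used here.
raw = '''{"id": 5786,
          "name": "[dpdk-dev,v4,2/9] eal: memzone allocated by malloc",
          "state": "superseded",
          "archived": true}'''

print(patch_summary(json.loads(raw)))
```

In a real client the stub would be replaced by the body of a GET to `/api/patches/<id>/`; note that `put`/`patch` requests additionally require an API token with maintainer rights on the project.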

GET /api/patches/5786/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 5786,
    "url": "https://patches.dpdk.org/api/patches/5786/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1435241155-23684-3-git-send-email-sergio.gonzalez.monroy@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1435241155-23684-3-git-send-email-sergio.gonzalez.monroy@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1435241155-23684-3-git-send-email-sergio.gonzalez.monroy@intel.com",
    "date": "2015-06-25T14:05:48",
    "name": "[dpdk-dev,v4,2/9] eal: memzone allocated by malloc",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "7665bd78b4be6ff1d6d4acbef644511804c94c07",
    "submitter": {
        "id": 73,
        "url": "https://patches.dpdk.org/api/people/73/?format=api",
        "name": "Sergio Gonzalez Monroy",
        "email": "sergio.gonzalez.monroy@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1435241155-23684-3-git-send-email-sergio.gonzalez.monroy@intel.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/5786/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/5786/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 9A0C7C6A8;\n\tThu, 25 Jun 2015 16:06:17 +0200 (CEST)",
            "from mga09.intel.com (mga09.intel.com [134.134.136.24])\n\tby dpdk.org (Postfix) with ESMTP id 76092C628\n\tfor <dev@dpdk.org>; Thu, 25 Jun 2015 16:06:00 +0200 (CEST)",
            "from orsmga001.jf.intel.com ([10.7.209.18])\n\tby orsmga102.jf.intel.com with ESMTP; 25 Jun 2015 07:05:57 -0700",
            "from irvmail001.ir.intel.com ([163.33.26.43])\n\tby orsmga001.jf.intel.com with ESMTP; 25 Jun 2015 07:05:56 -0700",
            "from sivswdev02.ir.intel.com (sivswdev02.ir.intel.com\n\t[10.237.217.46])\n\tby irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id\n\tt5PE5tD2010501 for <dev@dpdk.org>; Thu, 25 Jun 2015 15:05:56 +0100",
            "from sivswdev02.ir.intel.com (localhost [127.0.0.1])\n\tby sivswdev02.ir.intel.com with ESMTP id t5PE5tBx023759\n\tfor <dev@dpdk.org>; Thu, 25 Jun 2015 15:05:55 +0100",
            "(from smonroy@localhost)\n\tby sivswdev02.ir.intel.com with  id t5PE5tL8023755\n\tfor dev@dpdk.org; Thu, 25 Jun 2015 15:05:55 +0100"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.13,677,1427785200\"; d=\"scan'208\";a=\"717529318\"",
        "From": "Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>",
        "To": "dev@dpdk.org",
        "Date": "Thu, 25 Jun 2015 15:05:48 +0100",
        "Message-Id": "<1435241155-23684-3-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "X-Mailer": "git-send-email 1.8.5.4",
        "In-Reply-To": "<1435241155-23684-1-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "References": "<1433586732-28217-1-git-send-email-sergio.gonzalez.monroy@intel.com>\n\t<1435241155-23684-1-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v4 2/9] eal: memzone allocated by malloc",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "In the current memory hierarchy, memsegs are groups of physically\ncontiguous hugepages, memzones are slices of memsegs and malloc further\nslices memzones into smaller memory chunks.\n\nThis patch modifies malloc so it partitions memsegs instead of memzones.\nThus memzones would call malloc internally for memory allocation while\nmaintaining its ABI.\n\nIt would be possible to free memzones and therefore any other structure\nbased on memzones, ie. mempools\n\nSigned-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>\n---\n lib/librte_eal/common/eal_common_memzone.c        | 274 ++++++----------------\n lib/librte_eal/common/include/rte_eal_memconfig.h |   2 +-\n lib/librte_eal/common/include/rte_malloc_heap.h   |   3 +-\n lib/librte_eal/common/malloc_elem.c               |  68 ++++--\n lib/librte_eal/common/malloc_elem.h               |  14 +-\n lib/librte_eal/common/malloc_heap.c               | 140 ++++++-----\n lib/librte_eal/common/malloc_heap.h               |   6 +-\n lib/librte_eal/common/rte_malloc.c                |   7 +-\n 8 files changed, 197 insertions(+), 317 deletions(-)",
    "diff": "diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c\nindex aee184a..943012b 100644\n--- a/lib/librte_eal/common/eal_common_memzone.c\n+++ b/lib/librte_eal/common/eal_common_memzone.c\n@@ -50,15 +50,15 @@\n #include <rte_string_fns.h>\n #include <rte_common.h>\n \n+#include \"malloc_heap.h\"\n+#include \"malloc_elem.h\"\n #include \"eal_private.h\"\n \n-/* internal copy of free memory segments */\n-static struct rte_memseg *free_memseg = NULL;\n-\n static inline const struct rte_memzone *\n memzone_lookup_thread_unsafe(const char *name)\n {\n \tconst struct rte_mem_config *mcfg;\n+\tconst struct rte_memzone *mz;\n \tunsigned i = 0;\n \n \t/* get pointer to global configuration */\n@@ -68,8 +68,9 @@ memzone_lookup_thread_unsafe(const char *name)\n \t * the algorithm is not optimal (linear), but there are few\n \t * zones and this function should be called at init only\n \t */\n-\tfor (i = 0; i < RTE_MAX_MEMZONE && mcfg->memzone[i].addr != NULL; i++) {\n-\t\tif (!strncmp(name, mcfg->memzone[i].name, RTE_MEMZONE_NAMESIZE))\n+\tfor (i = 0; i < RTE_MAX_MEMZONE; i++) {\n+\t\tmz = &mcfg->memzone[i];\n+\t\tif (mz->addr != NULL && !strncmp(name, mz->name, RTE_MEMZONE_NAMESIZE))\n \t\t\treturn &mcfg->memzone[i];\n \t}\n \n@@ -88,39 +89,45 @@ rte_memzone_reserve(const char *name, size_t len, int socket_id,\n \t\t\tlen, socket_id, flags, RTE_CACHE_LINE_SIZE);\n }\n \n-/*\n- * Helper function for memzone_reserve_aligned_thread_unsafe().\n- * Calculate address offset from the start of the segment.\n- * Align offset in that way that it satisfy istart alignmnet and\n- * buffer of the  requested length would not cross specified boundary.\n- */\n-static inline phys_addr_t\n-align_phys_boundary(const struct rte_memseg *ms, size_t len, size_t align,\n-\tsize_t bound)\n+/* Find the heap with the greatest free block size */\n+static void\n+find_heap_max_free_elem(int *s, size_t *len, unsigned align)\n {\n-\tphys_addr_t 
addr_offset, bmask, end, start;\n-\tsize_t step;\n+\tstruct rte_mem_config *mcfg;\n+\tstruct rte_malloc_socket_stats stats;\n+\tunsigned i;\n \n-\tstep = RTE_MAX(align, bound);\n-\tbmask = ~((phys_addr_t)bound - 1);\n+\t/* get pointer to global configuration */\n+\tmcfg = rte_eal_get_configuration()->mem_config;\n \n-\t/* calculate offset to closest alignment */\n-\tstart = RTE_ALIGN_CEIL(ms->phys_addr, align);\n-\taddr_offset = start - ms->phys_addr;\n+\tfor (i = 0; i < RTE_MAX_NUMA_NODES; i++) {\n+\t\tmalloc_heap_get_stats(&mcfg->malloc_heaps[i], &stats);\n+\t\tif (stats.greatest_free_size > *len) {\n+\t\t\t*len = stats.greatest_free_size;\n+\t\t\t*s = i;\n+\t\t}\n+\t}\n+\t*len -= (MALLOC_ELEM_OVERHEAD + align);\n+}\n \n-\twhile (addr_offset + len < ms->len) {\n+/* Find a heap that can allocate the requested size */\n+static void\n+find_heap_suitable(int *s, size_t len, unsigned align)\n+{\n+\tstruct rte_mem_config *mcfg;\n+\tstruct rte_malloc_socket_stats stats;\n+\tunsigned i;\n \n-\t\t/* check, do we meet boundary condition */\n-\t\tend = start + len - (len != 0);\n-\t\tif ((start & bmask) == (end & bmask))\n-\t\t\tbreak;\n+\t/* get pointer to global configuration */\n+\tmcfg = rte_eal_get_configuration()->mem_config;\n \n-\t\t/* calculate next offset */\n-\t\tstart = RTE_ALIGN_CEIL(start + 1, step);\n-\t\taddr_offset = start - ms->phys_addr;\n+\tfor (i = 0; i < RTE_MAX_NUMA_NODES; i++) {\n+\t\tmalloc_heap_get_stats(&mcfg->malloc_heaps[i], &stats);\n+\t\tif (stats.greatest_free_size >= len + MALLOC_ELEM_OVERHEAD + align) {\n+\t\t\t*s = i;\n+\t\t\tbreak;\n+\t\t}\n \t}\n-\n-\treturn addr_offset;\n }\n \n static const struct rte_memzone *\n@@ -128,13 +135,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,\n \t\tint socket_id, unsigned flags, unsigned align, unsigned bound)\n {\n \tstruct rte_mem_config *mcfg;\n-\tunsigned i = 0;\n-\tint memseg_idx = -1;\n-\tuint64_t addr_offset, seg_offset = 0;\n \tsize_t requested_len;\n-\tsize_t memseg_len 
= 0;\n-\tphys_addr_t memseg_physaddr;\n-\tvoid *memseg_addr;\n \n \t/* get pointer to global configuration */\n \tmcfg = rte_eal_get_configuration()->mem_config;\n@@ -166,7 +167,6 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,\n \tif (align < RTE_CACHE_LINE_SIZE)\n \t\talign = RTE_CACHE_LINE_SIZE;\n \n-\n \t/* align length on cache boundary. Check for overflow before doing so */\n \tif (len > SIZE_MAX - RTE_CACHE_LINE_MASK) {\n \t\trte_errno = EINVAL; /* requested size too big */\n@@ -180,129 +180,50 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,\n \trequested_len = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE,  len);\n \n \t/* check that boundary condition is valid */\n-\tif (bound != 0 &&\n-\t\t\t(requested_len > bound || !rte_is_power_of_2(bound))) {\n+\tif (bound != 0 && (requested_len > bound || !rte_is_power_of_2(bound))) {\n \t\trte_errno = EINVAL;\n \t\treturn NULL;\n \t}\n \n-\t/* find the smallest segment matching requirements */\n-\tfor (i = 0; i < RTE_MAX_MEMSEG; i++) {\n-\t\t/* last segment */\n-\t\tif (free_memseg[i].addr == NULL)\n-\t\t\tbreak;\n+\tif (len == 0) {\n+\t\tif (bound != 0)\n+\t\t\trequested_len = bound;\n+\t\telse\n+\t\t\trequested_len = 0;\n+\t}\n \n-\t\t/* empty segment, skip it */\n-\t\tif (free_memseg[i].len == 0)\n-\t\t\tcontinue;\n-\n-\t\t/* bad socket ID */\n-\t\tif (socket_id != SOCKET_ID_ANY &&\n-\t\t    free_memseg[i].socket_id != SOCKET_ID_ANY &&\n-\t\t    socket_id != free_memseg[i].socket_id)\n-\t\t\tcontinue;\n-\n-\t\t/*\n-\t\t * calculate offset to closest alignment that\n-\t\t * meets boundary conditions.\n-\t\t */\n-\t\taddr_offset = align_phys_boundary(free_memseg + i,\n-\t\t\trequested_len, align, bound);\n-\n-\t\t/* check len */\n-\t\tif ((requested_len + addr_offset) > free_memseg[i].len)\n-\t\t\tcontinue;\n-\n-\t\t/* check flags for hugepage sizes */\n-\t\tif ((flags & RTE_MEMZONE_2MB) &&\n-\t\t\t\tfree_memseg[i].hugepage_sz == RTE_PGSIZE_1G)\n-\t\t\tcontinue;\n-\t\tif ((flags 
& RTE_MEMZONE_1GB) &&\n-\t\t\t\tfree_memseg[i].hugepage_sz == RTE_PGSIZE_2M)\n-\t\t\tcontinue;\n-\t\tif ((flags & RTE_MEMZONE_16MB) &&\n-\t\t\t\tfree_memseg[i].hugepage_sz == RTE_PGSIZE_16G)\n-\t\t\tcontinue;\n-\t\tif ((flags & RTE_MEMZONE_16GB) &&\n-\t\t\t\tfree_memseg[i].hugepage_sz == RTE_PGSIZE_16M)\n-\t\t\tcontinue;\n-\n-\t\t/* this segment is the best until now */\n-\t\tif (memseg_idx == -1) {\n-\t\t\tmemseg_idx = i;\n-\t\t\tmemseg_len = free_memseg[i].len;\n-\t\t\tseg_offset = addr_offset;\n-\t\t}\n-\t\t/* find the biggest contiguous zone */\n-\t\telse if (len == 0) {\n-\t\t\tif (free_memseg[i].len > memseg_len) {\n-\t\t\t\tmemseg_idx = i;\n-\t\t\t\tmemseg_len = free_memseg[i].len;\n-\t\t\t\tseg_offset = addr_offset;\n-\t\t\t}\n-\t\t}\n-\t\t/*\n-\t\t * find the smallest (we already checked that current\n-\t\t * zone length is > len\n-\t\t */\n-\t\telse if (free_memseg[i].len + align < memseg_len ||\n-\t\t\t\t(free_memseg[i].len <= memseg_len + align &&\n-\t\t\t\taddr_offset < seg_offset)) {\n-\t\t\tmemseg_idx = i;\n-\t\t\tmemseg_len = free_memseg[i].len;\n-\t\t\tseg_offset = addr_offset;\n+\tif (socket_id == SOCKET_ID_ANY) {\n+\t\tif (requested_len == 0)\n+\t\t\tfind_heap_max_free_elem(&socket_id, &requested_len, align);\n+\t\telse\n+\t\t\tfind_heap_suitable(&socket_id, requested_len, align);\n+\n+\t\tif (socket_id == SOCKET_ID_ANY) {\n+\t\t\trte_errno = ENOMEM;\n+\t\t\treturn NULL;\n \t\t}\n \t}\n \n-\t/* no segment found */\n-\tif (memseg_idx == -1) {\n-\t\t/*\n-\t\t * If RTE_MEMZONE_SIZE_HINT_ONLY flag is specified,\n-\t\t * try allocating again without the size parameter otherwise -fail.\n-\t\t */\n-\t\tif ((flags & RTE_MEMZONE_SIZE_HINT_ONLY)  &&\n-\t\t    ((flags & RTE_MEMZONE_1GB) || (flags & RTE_MEMZONE_2MB)\n-\t\t|| (flags & RTE_MEMZONE_16MB) || (flags & RTE_MEMZONE_16GB)))\n-\t\t\treturn memzone_reserve_aligned_thread_unsafe(name,\n-\t\t\t\tlen, socket_id, 0, align, bound);\n-\n+\t/* allocate memory on heap */\n+\tvoid *mz_addr = 
malloc_heap_alloc(&mcfg->malloc_heaps[socket_id], NULL,\n+\t\t\trequested_len, flags, align, bound);\n+\tif (mz_addr == NULL) {\n \t\trte_errno = ENOMEM;\n \t\treturn NULL;\n \t}\n \n-\t/* save aligned physical and virtual addresses */\n-\tmemseg_physaddr = free_memseg[memseg_idx].phys_addr + seg_offset;\n-\tmemseg_addr = RTE_PTR_ADD(free_memseg[memseg_idx].addr,\n-\t\t\t(uintptr_t) seg_offset);\n-\n-\t/* if we are looking for a biggest memzone */\n-\tif (len == 0) {\n-\t\tif (bound == 0)\n-\t\t\trequested_len = memseg_len - seg_offset;\n-\t\telse\n-\t\t\trequested_len = RTE_ALIGN_CEIL(memseg_physaddr + 1,\n-\t\t\t\tbound) - memseg_physaddr;\n-\t}\n-\n-\t/* set length to correct value */\n-\tlen = (size_t)seg_offset + requested_len;\n-\n-\t/* update our internal state */\n-\tfree_memseg[memseg_idx].len -= len;\n-\tfree_memseg[memseg_idx].phys_addr += len;\n-\tfree_memseg[memseg_idx].addr =\n-\t\t(char *)free_memseg[memseg_idx].addr + len;\n+\tconst struct malloc_elem *elem = malloc_elem_from_data(mz_addr);\n \n \t/* fill the zone in config */\n \tstruct rte_memzone *mz = &mcfg->memzone[mcfg->memzone_idx++];\n \tsnprintf(mz->name, sizeof(mz->name), \"%s\", name);\n-\tmz->phys_addr = memseg_physaddr;\n-\tmz->addr = memseg_addr;\n-\tmz->len = requested_len;\n-\tmz->hugepage_sz = free_memseg[memseg_idx].hugepage_sz;\n-\tmz->socket_id = free_memseg[memseg_idx].socket_id;\n+\tmz->phys_addr = rte_malloc_virt2phy(mz_addr);\n+\tmz->addr = mz_addr;\n+\tmz->len = (requested_len == 0 ? 
elem->size : requested_len);\n+\tmz->hugepage_sz = elem->ms->hugepage_sz;\n+\tmz->socket_id = elem->ms->socket_id;\n \tmz->flags = 0;\n-\tmz->memseg_id = memseg_idx;\n+\tmz->memseg_id = elem->ms - rte_eal_get_configuration()->mem_config->memseg;\n \n \treturn mz;\n }\n@@ -419,45 +340,6 @@ rte_memzone_dump(FILE *f)\n }\n \n /*\n- * called by init: modify the free memseg list to have cache-aligned\n- * addresses and cache-aligned lengths\n- */\n-static int\n-memseg_sanitize(struct rte_memseg *memseg)\n-{\n-\tunsigned phys_align;\n-\tunsigned virt_align;\n-\tunsigned off;\n-\n-\tphys_align = memseg->phys_addr & RTE_CACHE_LINE_MASK;\n-\tvirt_align = (unsigned long)memseg->addr & RTE_CACHE_LINE_MASK;\n-\n-\t/*\n-\t * sanity check: phys_addr and addr must have the same\n-\t * alignment\n-\t */\n-\tif (phys_align != virt_align)\n-\t\treturn -1;\n-\n-\t/* memseg is really too small, don't bother with it */\n-\tif (memseg->len < (2 * RTE_CACHE_LINE_SIZE)) {\n-\t\tmemseg->len = 0;\n-\t\treturn 0;\n-\t}\n-\n-\t/* align start address */\n-\toff = (RTE_CACHE_LINE_SIZE - phys_align) & RTE_CACHE_LINE_MASK;\n-\tmemseg->phys_addr += off;\n-\tmemseg->addr = (char *)memseg->addr + off;\n-\tmemseg->len -= off;\n-\n-\t/* align end address */\n-\tmemseg->len &= ~((uint64_t)RTE_CACHE_LINE_MASK);\n-\n-\treturn 0;\n-}\n-\n-/*\n  * Init the memzone subsystem\n  */\n int\n@@ -465,14 +347,10 @@ rte_eal_memzone_init(void)\n {\n \tstruct rte_mem_config *mcfg;\n \tconst struct rte_memseg *memseg;\n-\tunsigned i = 0;\n \n \t/* get pointer to global configuration */\n \tmcfg = rte_eal_get_configuration()->mem_config;\n \n-\t/* mirror the runtime memsegs from config */\n-\tfree_memseg = mcfg->free_memseg;\n-\n \t/* secondary processes don't need to initialise anything */\n \tif (rte_eal_process_type() == RTE_PROC_SECONDARY)\n \t\treturn 0;\n@@ -485,33 +363,13 @@ rte_eal_memzone_init(void)\n \n \trte_rwlock_write_lock(&mcfg->mlock);\n \n-\t/* fill in uninitialized free_memsegs */\n-\tfor (i = 0; i < 
RTE_MAX_MEMSEG; i++) {\n-\t\tif (memseg[i].addr == NULL)\n-\t\t\tbreak;\n-\t\tif (free_memseg[i].addr != NULL)\n-\t\t\tcontinue;\n-\t\tmemcpy(&free_memseg[i], &memseg[i], sizeof(struct rte_memseg));\n-\t}\n-\n-\t/* make all zones cache-aligned */\n-\tfor (i = 0; i < RTE_MAX_MEMSEG; i++) {\n-\t\tif (free_memseg[i].addr == NULL)\n-\t\t\tbreak;\n-\t\tif (memseg_sanitize(&free_memseg[i]) < 0) {\n-\t\t\tRTE_LOG(ERR, EAL, \"%s(): Sanity check failed\\n\", __func__);\n-\t\t\trte_rwlock_write_unlock(&mcfg->mlock);\n-\t\t\treturn -1;\n-\t\t}\n-\t}\n-\n \t/* delete all zones */\n \tmcfg->memzone_idx = 0;\n \tmemset(mcfg->memzone, 0, sizeof(mcfg->memzone));\n \n \trte_rwlock_write_unlock(&mcfg->mlock);\n \n-\treturn 0;\n+\treturn rte_eal_malloc_heap_init();\n }\n \n /* Walk all reserved memory zones */\ndiff --git a/lib/librte_eal/common/include/rte_eal_memconfig.h b/lib/librte_eal/common/include/rte_eal_memconfig.h\nindex 34f5abc..055212a 100644\n--- a/lib/librte_eal/common/include/rte_eal_memconfig.h\n+++ b/lib/librte_eal/common/include/rte_eal_memconfig.h\n@@ -73,7 +73,7 @@ struct rte_mem_config {\n \tstruct rte_memseg memseg[RTE_MAX_MEMSEG];    /**< Physmem descriptors. */\n \tstruct rte_memzone memzone[RTE_MAX_MEMZONE]; /**< Memzone descriptors. */\n \n-\t/* Runtime Physmem descriptors. */\n+\t/* Runtime Physmem descriptors - NOT USED */\n \tstruct rte_memseg free_memseg[RTE_MAX_MEMSEG];\n \n \tstruct rte_tailq_head tailq_head[RTE_MAX_TAILQ]; /**< Tailqs for objects */\ndiff --git a/lib/librte_eal/common/include/rte_malloc_heap.h b/lib/librte_eal/common/include/rte_malloc_heap.h\nindex 716216f..b270356 100644\n--- a/lib/librte_eal/common/include/rte_malloc_heap.h\n+++ b/lib/librte_eal/common/include/rte_malloc_heap.h\n@@ -40,7 +40,7 @@\n #include <rte_memory.h>\n \n /* Number of free lists per heap, grouped by size. 
*/\n-#define RTE_HEAP_NUM_FREELISTS  5\n+#define RTE_HEAP_NUM_FREELISTS  13\n \n /**\n  * Structure to hold malloc heap\n@@ -48,7 +48,6 @@\n struct malloc_heap {\n \trte_spinlock_t lock;\n \tLIST_HEAD(, malloc_elem) free_head[RTE_HEAP_NUM_FREELISTS];\n-\tunsigned mz_count;\n \tunsigned alloc_count;\n \tsize_t total_size;\n } __rte_cache_aligned;\ndiff --git a/lib/librte_eal/common/malloc_elem.c b/lib/librte_eal/common/malloc_elem.c\nindex a5e1248..b54ee33 100644\n--- a/lib/librte_eal/common/malloc_elem.c\n+++ b/lib/librte_eal/common/malloc_elem.c\n@@ -37,7 +37,6 @@\n #include <sys/queue.h>\n \n #include <rte_memory.h>\n-#include <rte_memzone.h>\n #include <rte_eal.h>\n #include <rte_launch.h>\n #include <rte_per_lcore.h>\n@@ -56,10 +55,10 @@\n  */\n void\n malloc_elem_init(struct malloc_elem *elem,\n-\t\tstruct malloc_heap *heap, const struct rte_memzone *mz, size_t size)\n+\t\tstruct malloc_heap *heap, const struct rte_memseg *ms, size_t size)\n {\n \telem->heap = heap;\n-\telem->mz = mz;\n+\telem->ms = ms;\n \telem->prev = NULL;\n \tmemset(&elem->free_list, 0, sizeof(elem->free_list));\n \telem->state = ELEM_FREE;\n@@ -70,12 +69,12 @@ malloc_elem_init(struct malloc_elem *elem,\n }\n \n /*\n- * initialise a dummy malloc_elem header for the end-of-memzone marker\n+ * initialise a dummy malloc_elem header for the end-of-memseg marker\n  */\n void\n malloc_elem_mkend(struct malloc_elem *elem, struct malloc_elem *prev)\n {\n-\tmalloc_elem_init(elem, prev->heap, prev->mz, 0);\n+\tmalloc_elem_init(elem, prev->heap, prev->ms, 0);\n \telem->prev = prev;\n \telem->state = ELEM_BUSY; /* mark busy so its never merged */\n }\n@@ -86,12 +85,24 @@ malloc_elem_mkend(struct malloc_elem *elem, struct malloc_elem *prev)\n  * fit, return NULL.\n  */\n static void *\n-elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align)\n+elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align,\n+\t\tsize_t bound)\n {\n-\tconst uintptr_t end_pt = (uintptr_t)elem 
+\n+\tconst size_t bmask = ~(bound - 1);\n+\tuintptr_t end_pt = (uintptr_t)elem +\n \t\t\telem->size - MALLOC_ELEM_TRAILER_LEN;\n-\tconst uintptr_t new_data_start = RTE_ALIGN_FLOOR((end_pt - size), align);\n-\tconst uintptr_t new_elem_start = new_data_start - MALLOC_ELEM_HEADER_LEN;\n+\tuintptr_t new_data_start = RTE_ALIGN_FLOOR((end_pt - size), align);\n+\tuintptr_t new_elem_start;\n+\n+\t/* check boundary */\n+\tif ((new_data_start & bmask) != ((end_pt - 1) & bmask)) {\n+\t\tend_pt = RTE_ALIGN_FLOOR(end_pt, bound);\n+\t\tnew_data_start = RTE_ALIGN_FLOOR((end_pt - size), align);\n+\t\tif (((end_pt - 1) & bmask) != (new_data_start & bmask))\n+\t\t\treturn NULL;\n+\t}\n+\n+\tnew_elem_start = new_data_start - MALLOC_ELEM_HEADER_LEN;\n \n \t/* if the new start point is before the exist start, it won't fit */\n \treturn (new_elem_start < (uintptr_t)elem) ? NULL : (void *)new_elem_start;\n@@ -102,9 +113,10 @@ elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align)\n  * alignment request from the current element\n  */\n int\n-malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align)\n+malloc_elem_can_hold(struct malloc_elem *elem, size_t size,\tunsigned align,\n+\t\tsize_t bound)\n {\n-\treturn elem_start_pt(elem, size, align) != NULL;\n+\treturn elem_start_pt(elem, size, align, bound) != NULL;\n }\n \n /*\n@@ -115,10 +127,10 @@ static void\n split_elem(struct malloc_elem *elem, struct malloc_elem *split_pt)\n {\n \tstruct malloc_elem *next_elem = RTE_PTR_ADD(elem, elem->size);\n-\tconst unsigned old_elem_size = (uintptr_t)split_pt - (uintptr_t)elem;\n-\tconst unsigned new_elem_size = elem->size - old_elem_size;\n+\tconst size_t old_elem_size = (uintptr_t)split_pt - (uintptr_t)elem;\n+\tconst size_t new_elem_size = elem->size - old_elem_size;\n \n-\tmalloc_elem_init(split_pt, elem->heap, elem->mz, new_elem_size);\n+\tmalloc_elem_init(split_pt, elem->heap, elem->ms, new_elem_size);\n \tsplit_pt->prev = elem;\n \tnext_elem->prev = split_pt;\n 
\telem->size = old_elem_size;\n@@ -168,8 +180,9 @@ malloc_elem_free_list_index(size_t size)\n void\n malloc_elem_free_list_insert(struct malloc_elem *elem)\n {\n-\tsize_t idx = malloc_elem_free_list_index(elem->size - MALLOC_ELEM_HEADER_LEN);\n+\tsize_t idx;\n \n+\tidx = malloc_elem_free_list_index(elem->size - MALLOC_ELEM_HEADER_LEN);\n \telem->state = ELEM_FREE;\n \tLIST_INSERT_HEAD(&elem->heap->free_head[idx], elem, free_list);\n }\n@@ -190,12 +203,26 @@ elem_free_list_remove(struct malloc_elem *elem)\n  * is not done here, as it's done there previously.\n  */\n struct malloc_elem *\n-malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align)\n+malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align,\n+\t\tsize_t bound)\n {\n-\tstruct malloc_elem *new_elem = elem_start_pt(elem, size, align);\n-\tconst unsigned old_elem_size = (uintptr_t)new_elem - (uintptr_t)elem;\n+\tstruct malloc_elem *new_elem = elem_start_pt(elem, size, align, bound);\n+\tconst size_t old_elem_size = (uintptr_t)new_elem - (uintptr_t)elem;\n+\tconst size_t trailer_size = elem->size - old_elem_size - size -\n+\t\tMALLOC_ELEM_OVERHEAD;\n+\n+\telem_free_list_remove(elem);\n \n-\tif (old_elem_size < MALLOC_ELEM_OVERHEAD + MIN_DATA_SIZE){\n+\tif (trailer_size > MALLOC_ELEM_OVERHEAD + MIN_DATA_SIZE) {\n+\t\t/* split it, too much free space after elem */\n+\t\tstruct malloc_elem *new_free_elem =\n+\t\t\t\tRTE_PTR_ADD(new_elem, size + MALLOC_ELEM_OVERHEAD);\n+\n+\t\tsplit_elem(elem, new_free_elem);\n+\t\tmalloc_elem_free_list_insert(new_free_elem);\n+\t}\n+\n+\tif (old_elem_size < MALLOC_ELEM_OVERHEAD + MIN_DATA_SIZE) {\n \t\t/* don't split it, pad the element instead */\n \t\telem->state = ELEM_BUSY;\n \t\telem->pad = old_elem_size;\n@@ -208,8 +235,6 @@ malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align)\n \t\t\tnew_elem->size = elem->size - elem->pad;\n \t\t\tset_header(new_elem);\n \t\t}\n-\t\t/* remove element from free list 
*/\n-\t\telem_free_list_remove(elem);\n \n \t\treturn new_elem;\n \t}\n@@ -219,7 +244,6 @@ malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align)\n \t * Re-insert original element, in case its new size makes it\n \t * belong on a different list.\n \t */\n-\telem_free_list_remove(elem);\n \tsplit_elem(elem, new_elem);\n \tnew_elem->state = ELEM_BUSY;\n \tmalloc_elem_free_list_insert(elem);\ndiff --git a/lib/librte_eal/common/malloc_elem.h b/lib/librte_eal/common/malloc_elem.h\nindex 9790b1a..e05d2ea 100644\n--- a/lib/librte_eal/common/malloc_elem.h\n+++ b/lib/librte_eal/common/malloc_elem.h\n@@ -47,9 +47,9 @@ enum elem_state {\n \n struct malloc_elem {\n \tstruct malloc_heap *heap;\n-\tstruct malloc_elem *volatile prev;      /* points to prev elem in memzone */\n+\tstruct malloc_elem *volatile prev;      /* points to prev elem in memseg */\n \tLIST_ENTRY(malloc_elem) free_list;      /* list of free elements in heap */\n-\tconst struct rte_memzone *mz;\n+\tconst struct rte_memseg *ms;\n \tvolatile enum elem_state state;\n \tuint32_t pad;\n \tsize_t size;\n@@ -136,11 +136,11 @@ malloc_elem_from_data(const void *data)\n void\n malloc_elem_init(struct malloc_elem *elem,\n \t\tstruct malloc_heap *heap,\n-\t\tconst struct rte_memzone *mz,\n+\t\tconst struct rte_memseg *ms,\n \t\tsize_t size);\n \n /*\n- * initialise a dummy malloc_elem header for the end-of-memzone marker\n+ * initialise a dummy malloc_elem header for the end-of-memseg marker\n  */\n void\n malloc_elem_mkend(struct malloc_elem *elem,\n@@ -151,14 +151,16 @@ malloc_elem_mkend(struct malloc_elem *elem,\n  * of the requested size and with the requested alignment\n  */\n int\n-malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align);\n+malloc_elem_can_hold(struct malloc_elem *elem, size_t size,\n+\t\tunsigned align, size_t bound);\n \n /*\n  * reserve a block of data in an existing malloc_elem. 
If the malloc_elem\n  * is much larger than the data block requested, we split the element in two.\n  */\n struct malloc_elem *\n-malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align);\n+malloc_elem_alloc(struct malloc_elem *elem, size_t size,\n+\t\tunsigned align, size_t bound);\n \n /*\n  * free a malloc_elem block by adding it to the free list. If the\ndiff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c\nindex 8861d27..f5fff96 100644\n--- a/lib/librte_eal/common/malloc_heap.c\n+++ b/lib/librte_eal/common/malloc_heap.c\n@@ -39,7 +39,6 @@\n #include <sys/queue.h>\n \n #include <rte_memory.h>\n-#include <rte_memzone.h>\n #include <rte_eal.h>\n #include <rte_eal_memconfig.h>\n #include <rte_launch.h>\n@@ -54,123 +53,104 @@\n #include \"malloc_elem.h\"\n #include \"malloc_heap.h\"\n \n-/* since the memzone size starts with a digit, it will appear unquoted in\n- * rte_config.h, so quote it so it can be passed to rte_str_to_size */\n-#define MALLOC_MEMZONE_SIZE RTE_STR(RTE_MALLOC_MEMZONE_SIZE)\n-\n-/*\n- * returns the configuration setting for the memzone size as a size_t value\n- */\n-static inline size_t\n-get_malloc_memzone_size(void)\n+static unsigned\n+check_hugepage_sz(unsigned flags, size_t hugepage_sz)\n {\n-\treturn rte_str_to_size(MALLOC_MEMZONE_SIZE);\n+\tunsigned ret = 1;\n+\n+\tif ((flags & RTE_MEMZONE_2MB) && hugepage_sz == RTE_PGSIZE_1G)\n+\t\tret = 0;\n+\tif ((flags & RTE_MEMZONE_1GB) && hugepage_sz == RTE_PGSIZE_2M)\n+\t\tret = 0;\n+\tif ((flags & RTE_MEMZONE_16MB) && hugepage_sz == RTE_PGSIZE_16G)\n+\t\tret = 0;\n+\tif ((flags & RTE_MEMZONE_16GB) && hugepage_sz == RTE_PGSIZE_16M)\n+\t\tret = 0;\n+\n+\treturn ret;\n }\n \n /*\n- * reserve an extra memory zone and make it available for use by a particular\n- * heap. 
This reserves the zone and sets a dummy malloc_elem header at the end\n+ * Expand the heap with a memseg.\n+ * This reserves the zone and sets a dummy malloc_elem header at the end\n  * to prevent overflow. The rest of the zone is added to free list as a single\n  * large free block\n  */\n-static int\n-malloc_heap_add_memzone(struct malloc_heap *heap, size_t size, unsigned align)\n+static void\n+malloc_heap_add_memseg(struct malloc_heap *heap, struct rte_memseg *ms)\n {\n-\tconst unsigned mz_flags = 0;\n-\tconst size_t block_size = get_malloc_memzone_size();\n-\t/* ensure the data we want to allocate will fit in the memzone */\n-\tconst size_t min_size = size + align + MALLOC_ELEM_OVERHEAD * 2;\n-\tconst struct rte_memzone *mz = NULL;\n-\tstruct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;\n-\tunsigned numa_socket = heap - mcfg->malloc_heaps;\n-\n-\tsize_t mz_size = min_size;\n-\tif (mz_size < block_size)\n-\t\tmz_size = block_size;\n-\n-\tchar mz_name[RTE_MEMZONE_NAMESIZE];\n-\tsnprintf(mz_name, sizeof(mz_name), \"MALLOC_S%u_HEAP_%u\",\n-\t\t     numa_socket, heap->mz_count++);\n-\n-\t/* try getting a block. 
if we fail and we don't need as big a block\n-\t * as given in the config, we can shrink our request and try again\n-\t */\n-\tdo {\n-\t\tmz = rte_memzone_reserve(mz_name, mz_size, numa_socket,\n-\t\t\t\t\t mz_flags);\n-\t\tif (mz == NULL)\n-\t\t\tmz_size /= 2;\n-\t} while (mz == NULL && mz_size > min_size);\n-\tif (mz == NULL)\n-\t\treturn -1;\n-\n \t/* allocate the memory block headers, one at end, one at start */\n-\tstruct malloc_elem *start_elem = (struct malloc_elem *)mz->addr;\n-\tstruct malloc_elem *end_elem = RTE_PTR_ADD(mz->addr,\n-\t\t\tmz_size - MALLOC_ELEM_OVERHEAD);\n+\tstruct malloc_elem *start_elem = (struct malloc_elem *)ms->addr;\n+\tstruct malloc_elem *end_elem = RTE_PTR_ADD(ms->addr,\n+\t\t\tms->len - MALLOC_ELEM_OVERHEAD);\n \tend_elem = RTE_PTR_ALIGN_FLOOR(end_elem, RTE_CACHE_LINE_SIZE);\n+\tconst size_t elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem;\n \n-\tconst unsigned elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem;\n-\tmalloc_elem_init(start_elem, heap, mz, elem_size);\n+\tmalloc_elem_init(start_elem, heap, ms, elem_size);\n \tmalloc_elem_mkend(end_elem, start_elem);\n \tmalloc_elem_free_list_insert(start_elem);\n \n-\t/* increase heap total size by size of new memzone */\n-\theap->total_size+=mz_size - MALLOC_ELEM_OVERHEAD;\n-\treturn 0;\n+\theap->total_size += elem_size;\n }\n \n /*\n  * Iterates through the freelist for a heap to find a free element\n  * which can store data of the required size and with the requested alignment.\n+ * If size is 0, find the biggest available elem.\n  * Returns null on failure, or pointer to element on success.\n  */\n static struct malloc_elem *\n-find_suitable_element(struct malloc_heap *heap, size_t size, unsigned align)\n+find_suitable_element(struct malloc_heap *heap, size_t size,\n+\t\tunsigned flags, size_t align, size_t bound)\n {\n \tsize_t idx;\n-\tstruct malloc_elem *elem;\n+\tstruct malloc_elem *elem, *alt_elem = NULL;\n \n \tfor (idx = 
malloc_elem_free_list_index(size);\n-\t\tidx < RTE_HEAP_NUM_FREELISTS; idx++)\n-\t{\n+\t\t\tidx < RTE_HEAP_NUM_FREELISTS; idx++) {\n \t\tfor (elem = LIST_FIRST(&heap->free_head[idx]);\n-\t\t\t!!elem; elem = LIST_NEXT(elem, free_list))\n-\t\t{\n-\t\t\tif (malloc_elem_can_hold(elem, size, align))\n-\t\t\t\treturn elem;\n+\t\t\t\t!!elem; elem = LIST_NEXT(elem, free_list)) {\n+\t\t\tif (malloc_elem_can_hold(elem, size, align, bound)) {\n+\t\t\t\tif (check_hugepage_sz(flags, elem->ms->hugepage_sz))\n+\t\t\t\t\treturn elem;\n+\t\t\t\talt_elem = elem;\n+\t\t\t}\n \t\t}\n \t}\n+\n+\tif ((alt_elem != NULL) && (flags & RTE_MEMZONE_SIZE_HINT_ONLY))\n+\t\treturn alt_elem;\n+\n \treturn NULL;\n }\n \n /*\n- * Main function called by malloc to allocate a block of memory from the\n- * heap. It locks the free list, scans it, and adds a new memzone if the\n- * scan fails. Once the new memzone is added, it re-scans and should return\n+ * Main function to allocate a block of memory from the heap.\n+ * It locks the free list, scans it, and adds a new memseg if the\n+ * scan fails. 
Once the new memseg is added, it re-scans and should return\n  * the new element after releasing the lock.\n  */\n void *\n malloc_heap_alloc(struct malloc_heap *heap,\n-\t\tconst char *type __attribute__((unused)), size_t size, unsigned align)\n+\t\tconst char *type __attribute__((unused)), size_t size, unsigned flags,\n+\t\tsize_t align, size_t bound)\n {\n+\tstruct malloc_elem *elem;\n+\n \tsize = RTE_CACHE_LINE_ROUNDUP(size);\n \talign = RTE_CACHE_LINE_ROUNDUP(align);\n+\n \trte_spinlock_lock(&heap->lock);\n-\tstruct malloc_elem *elem = find_suitable_element(heap, size, align);\n-\tif (elem == NULL){\n-\t\tif ((malloc_heap_add_memzone(heap, size, align)) == 0)\n-\t\t\telem = find_suitable_element(heap, size, align);\n-\t}\n \n-\tif (elem != NULL){\n-\t\telem = malloc_elem_alloc(elem, size, align);\n+\telem = find_suitable_element(heap, size, flags, align, bound);\n+\tif (elem != NULL) {\n+\t\telem = malloc_elem_alloc(elem, size, align, bound);\n \t\t/* increase heap's count of allocated elements */\n \t\theap->alloc_count++;\n \t}\n \trte_spinlock_unlock(&heap->lock);\n-\treturn elem == NULL ? NULL : (void *)(&elem[1]);\n \n+\treturn elem == NULL ? 
NULL : (void *)(&elem[1]);\n }\n \n /*\n@@ -206,3 +186,21 @@ malloc_heap_get_stats(const struct malloc_heap *heap,\n \tsocket_stats->alloc_count = heap->alloc_count;\n \treturn 0;\n }\n+\n+int\n+rte_eal_malloc_heap_init(void)\n+{\n+\tstruct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;\n+\tunsigned ms_cnt;\n+\tstruct rte_memseg *ms;\n+\n+\tif (mcfg == NULL)\n+\t\treturn -1;\n+\n+\tfor (ms = &mcfg->memseg[0], ms_cnt = 0;\n+\t\t\t(ms_cnt < RTE_MAX_MEMSEG) && (ms->len > 0);\n+\t\t\tms_cnt++, ms++)\n+\t\tmalloc_heap_add_memseg(&mcfg->malloc_heaps[ms->socket_id], ms);\n+\n+\treturn 0;\n+}\ndiff --git a/lib/librte_eal/common/malloc_heap.h b/lib/librte_eal/common/malloc_heap.h\nindex a47136d..3ccbef0 100644\n--- a/lib/librte_eal/common/malloc_heap.h\n+++ b/lib/librte_eal/common/malloc_heap.h\n@@ -53,15 +53,15 @@ malloc_get_numa_socket(void)\n }\n \n void *\n-malloc_heap_alloc(struct malloc_heap *heap, const char *type,\n-\t\tsize_t size, unsigned align);\n+malloc_heap_alloc(struct malloc_heap *heap,\tconst char *type, size_t size,\n+\t\tunsigned flags, size_t align, size_t bound);\n \n int\n malloc_heap_get_stats(const struct malloc_heap *heap,\n \t\tstruct rte_malloc_socket_stats *socket_stats);\n \n int\n-rte_eal_heap_memzone_init(void);\n+rte_eal_malloc_heap_init(void);\n \n #ifdef __cplusplus\n }\ndiff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c\nindex c313a57..54c2bd8 100644\n--- a/lib/librte_eal/common/rte_malloc.c\n+++ b/lib/librte_eal/common/rte_malloc.c\n@@ -39,7 +39,6 @@\n \n #include <rte_memcpy.h>\n #include <rte_memory.h>\n-#include <rte_memzone.h>\n #include <rte_eal.h>\n #include <rte_eal_memconfig.h>\n #include <rte_branch_prediction.h>\n@@ -87,7 +86,7 @@ rte_malloc_socket(const char *type, size_t size, unsigned align, int socket_arg)\n \t\treturn NULL;\n \n \tret = malloc_heap_alloc(&mcfg->malloc_heaps[socket], type,\n-\t\t\t\tsize, align == 0 ? 1 : align);\n+\t\t\t\tsize, 0, align == 0 ? 
1 : align, 0);\n \tif (ret != NULL || socket_arg != SOCKET_ID_ANY)\n \t\treturn ret;\n \n@@ -98,7 +97,7 @@ rte_malloc_socket(const char *type, size_t size, unsigned align, int socket_arg)\n \t\t\tcontinue;\n \n \t\tret = malloc_heap_alloc(&mcfg->malloc_heaps[i], type,\n-\t\t\t\t\tsize, align == 0 ? 1 : align);\n+\t\t\t\t\tsize, 0, align == 0 ? 1 : align, 0);\n \t\tif (ret != NULL)\n \t\t\treturn ret;\n \t}\n@@ -256,5 +255,5 @@ rte_malloc_virt2phy(const void *addr)\n \tconst struct malloc_elem *elem = malloc_elem_from_data(addr);\n \tif (elem == NULL)\n \t\treturn 0;\n-\treturn elem->mz->phys_addr + ((uintptr_t)addr - (uintptr_t)elem->mz->addr);\n+\treturn elem->ms->phys_addr + ((uintptr_t)addr - (uintptr_t)elem->ms->addr);\n }\n",
    "prefixes": [
        "dpdk-dev",
        "v4",
        "2/9"
    ]
}