get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
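A patch resource can also be fetched programmatically. The sketch below is a minimal example using Python's requests library against the public patches.dpdk.org instance; the script and the output filename are illustrative, not part of the API.

import requests  # third-party HTTP client

# Request the JSON renderer explicitly; format=api returns the browsable view shown below.
resp = requests.get("https://patches.dpdk.org/api/patches/2245/",
                    params={"format": "json"})
resp.raise_for_status()
patch = resp.json()

# A few fields from the JSON document below.
print(patch["name"])                 # patch subject
print(patch["state"])                # e.g. "rfc"
print(patch["submitter"]["email"])

# The "mbox" field points at a raw mbox suitable for git-am.
mbox = requests.get(patch["mbox"])
mbox.raise_for_status()
with open("patch-2245.mbox", "wb") as f:   # illustrative filename
    f.write(mbox.content)

The raw exchange below shows the equivalent request against the browsable (format=api) renderer.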

GET /api/patches/2245/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2245,
    "url": "https://patches.dpdk.org/api/patches/2245/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1421080446-19249-6-git-send-email-sergio.gonzalez.monroy@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1421080446-19249-6-git-send-email-sergio.gonzalez.monroy@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1421080446-19249-6-git-send-email-sergio.gonzalez.monroy@intel.com",
    "date": "2015-01-12T16:33:58",
    "name": "[dpdk-dev,RFC,05/13] core: move librte_mempool to core subdir",
    "commit_ref": null,
    "pull_url": null,
    "state": "rfc",
    "archived": true,
    "hash": "aa0935dd3553694845188f98274986dd1d0fbc75",
    "submitter": {
        "id": 73,
        "url": "https://patches.dpdk.org/api/people/73/?format=api",
        "name": "Sergio Gonzalez Monroy",
        "email": "sergio.gonzalez.monroy@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1421080446-19249-6-git-send-email-sergio.gonzalez.monroy@intel.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/2245/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/2245/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 1FC535AF5;\n\tMon, 12 Jan 2015 17:35:09 +0100 (CET)",
            "from mga03.intel.com (mga03.intel.com [134.134.136.65])\n\tby dpdk.org (Postfix) with ESMTP id 0041F5AC1\n\tfor <dev@dpdk.org>; Mon, 12 Jan 2015 17:34:13 +0100 (CET)",
            "from orsmga002.jf.intel.com ([10.7.209.21])\n\tby orsmga103.jf.intel.com with ESMTP; 12 Jan 2015 08:30:38 -0800",
            "from irvmail001.ir.intel.com ([163.33.26.43])\n\tby orsmga002.jf.intel.com with ESMTP; 12 Jan 2015 08:34:09 -0800",
            "from sivswdev02.ir.intel.com (sivswdev02.ir.intel.com\n\t[10.237.217.46])\n\tby irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id\n\tt0CGY7Qg022021 for <dev@dpdk.org>; Mon, 12 Jan 2015 16:34:07 GMT",
            "from sivswdev02.ir.intel.com (localhost [127.0.0.1])\n\tby sivswdev02.ir.intel.com with ESMTP id t0CGY7TZ019323\n\tfor <dev@dpdk.org>; Mon, 12 Jan 2015 16:34:07 GMT",
            "(from smonroy@localhost)\n\tby sivswdev02.ir.intel.com with  id t0CGY7aw019319\n\tfor dev@dpdk.org; Mon, 12 Jan 2015 16:34:07 GMT"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.07,744,1413270000\"; d=\"scan'208\";a=\"668520232\"",
        "From": "Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>",
        "To": "dev@dpdk.org",
        "Date": "Mon, 12 Jan 2015 16:33:58 +0000",
        "Message-Id": "<1421080446-19249-6-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "X-Mailer": "git-send-email 1.8.5.4",
        "In-Reply-To": "<1421080446-19249-1-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "References": "<1421080446-19249-1-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "Subject": "[dpdk-dev] [PATCH RFC 05/13] core: move librte_mempool to core\n\tsubdir",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "This is equivalent to:\n\ngit mv lib/librte_mempool lib/core\n\nSigned-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>\n---\n lib/core/librte_mempool/Makefile           |   51 +\n lib/core/librte_mempool/rte_dom0_mempool.c |  134 +++\n lib/core/librte_mempool/rte_mempool.c      |  901 ++++++++++++++++++\n lib/core/librte_mempool/rte_mempool.h      | 1392 ++++++++++++++++++++++++++++\n lib/librte_mempool/Makefile                |   51 -\n lib/librte_mempool/rte_dom0_mempool.c      |  134 ---\n lib/librte_mempool/rte_mempool.c           |  901 ------------------\n lib/librte_mempool/rte_mempool.h           | 1392 ----------------------------\n 8 files changed, 2478 insertions(+), 2478 deletions(-)\n create mode 100644 lib/core/librte_mempool/Makefile\n create mode 100644 lib/core/librte_mempool/rte_dom0_mempool.c\n create mode 100644 lib/core/librte_mempool/rte_mempool.c\n create mode 100644 lib/core/librte_mempool/rte_mempool.h\n delete mode 100644 lib/librte_mempool/Makefile\n delete mode 100644 lib/librte_mempool/rte_dom0_mempool.c\n delete mode 100644 lib/librte_mempool/rte_mempool.c\n delete mode 100644 lib/librte_mempool/rte_mempool.h",
    "diff": "diff --git a/lib/core/librte_mempool/Makefile b/lib/core/librte_mempool/Makefile\nnew file mode 100644\nindex 0000000..9939e10\n--- /dev/null\n+++ b/lib/core/librte_mempool/Makefile\n@@ -0,0 +1,51 @@\n+#   BSD LICENSE\n+#\n+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n+#   All rights reserved.\n+#\n+#   Redistribution and use in source and binary forms, with or without\n+#   modification, are permitted provided that the following conditions\n+#   are met:\n+#\n+#     * Redistributions of source code must retain the above copyright\n+#       notice, this list of conditions and the following disclaimer.\n+#     * Redistributions in binary form must reproduce the above copyright\n+#       notice, this list of conditions and the following disclaimer in\n+#       the documentation and/or other materials provided with the\n+#       distribution.\n+#     * Neither the name of Intel Corporation nor the names of its\n+#       contributors may be used to endorse or promote products derived\n+#       from this software without specific prior written permission.\n+#\n+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+#   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+\n+include $(RTE_SDK)/mk/rte.vars.mk\n+\n+# library name\n+LIB = librte_mempool.a\n+\n+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3\n+\n+# all source are stored in SRCS-y\n+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c\n+ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)\n+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c\n+endif\n+# install includes\n+SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h\n+\n+# this lib needs eal, rte_ring and rte_malloc\n+DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_eal lib/librte_ring\n+DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_malloc\n+\n+include $(RTE_SDK)/mk/rte.lib.mk\ndiff --git a/lib/core/librte_mempool/rte_dom0_mempool.c b/lib/core/librte_mempool/rte_dom0_mempool.c\nnew file mode 100644\nindex 0000000..9ec68fb\n--- /dev/null\n+++ b/lib/core/librte_mempool/rte_dom0_mempool.c\n@@ -0,0 +1,134 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2010-2014 Intel Corporation. 
All rights reserved.\n+ *   All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <stdio.h>\n+#include <string.h>\n+#include <stdint.h>\n+#include <unistd.h>\n+#include <stdarg.h>\n+#include <inttypes.h>\n+#include <errno.h>\n+#include <sys/queue.h>\n+\n+#include <rte_common.h>\n+#include <rte_log.h>\n+#include <rte_debug.h>\n+#include <rte_memory.h>\n+#include <rte_memzone.h>\n+#include <rte_atomic.h>\n+#include <rte_launch.h>\n+#include <rte_tailq.h>\n+#include <rte_eal.h>\n+#include <rte_eal_memconfig.h>\n+#include <rte_per_lcore.h>\n+#include <rte_lcore.h>\n+#include <rte_branch_prediction.h>\n+#include <rte_ring.h>\n+#include <rte_errno.h>\n+#include <rte_string_fns.h>\n+#include <rte_spinlock.h>\n+\n+#include \"rte_mempool.h\"\n+\n+static void\n+get_phys_map(void *va, phys_addr_t pa[], uint32_t pg_num,\n+            uint32_t pg_sz, uint32_t memseg_id)\n+{\n+    uint32_t i;\n+    uint64_t virt_addr, mfn_id;\n+    struct rte_mem_config *mcfg;\n+    uint32_t page_size = getpagesize();\n+\n+    /* get pointer to global configuration */\n+    mcfg = rte_eal_get_configuration()->mem_config;\n+    virt_addr =(uintptr_t) mcfg->memseg[memseg_id].addr;\n+\n+    for (i = 0; i != pg_num; i++) {\n+        mfn_id = ((uintptr_t)va + i * pg_sz - virt_addr) / RTE_PGSIZE_2M;\n+        pa[i] = mcfg->memseg[memseg_id].mfn[mfn_id] * page_size;\n+    }\n+}\n+\n+/* create the mempool for supporting Dom0 */\n+struct rte_mempool *\n+rte_dom0_mempool_create(const char *name, unsigned elt_num, unsigned elt_size,\n+           unsigned cache_size, unsigned private_data_size,\n+           rte_mempool_ctor_t *mp_init, void *mp_init_arg,\n+           rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n+           int socket_id, unsigned flags)\n+{\n+\tstruct rte_mempool *mp = NULL;\n+\tphys_addr_t *pa;\n+\tchar *va;\n+\tsize_t sz;\n+\tuint32_t pg_num, pg_shift, pg_sz, total_size;\n+\tconst struct rte_memzone *mz;\n+\tchar 
mz_name[RTE_MEMZONE_NAMESIZE];\n+\tint mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;\n+\n+\tpg_sz = RTE_PGSIZE_2M;\n+\n+\tpg_shift = rte_bsf32(pg_sz);\n+\ttotal_size = rte_mempool_calc_obj_size(elt_size, flags, NULL);\n+\n+\t/* calc max memory size and max number of pages needed. */\n+\tsz = rte_mempool_xmem_size(elt_num, total_size, pg_shift) +\n+\t\tRTE_PGSIZE_2M;\n+\tpg_num = sz >> pg_shift;\n+\n+\t/* extract physical mappings of the allocated memory. */\n+\tpa = calloc(pg_num, sizeof (*pa));\n+\tif (pa == NULL)\n+\t\treturn mp;\n+\n+\tsnprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_OBJ_NAME, name);\n+\tmz = rte_memzone_reserve(mz_name, sz, socket_id, mz_flags);\n+\tif (mz == NULL) {\n+\t\tfree(pa);\n+\t\treturn mp;\n+\t}\n+\n+\tva = (char *)RTE_ALIGN_CEIL((uintptr_t)mz->addr, RTE_PGSIZE_2M);\n+\t/* extract physical mappings of the allocated memory. */\n+\tget_phys_map(va, pa, pg_num, pg_sz, mz->memseg_id);\n+\n+\tmp = rte_mempool_xmem_create(name, elt_num, elt_size,\n+\t\tcache_size, private_data_size,\n+\t\tmp_init, mp_init_arg,\n+\t\tobj_init, obj_init_arg,\n+\t\tsocket_id, flags, va, pa, pg_num, pg_shift);\n+\n+\tfree(pa);\n+\n+\treturn (mp);\n+}\ndiff --git a/lib/core/librte_mempool/rte_mempool.c b/lib/core/librte_mempool/rte_mempool.c\nnew file mode 100644\nindex 0000000..4cf6c25\n--- /dev/null\n+++ b/lib/core/librte_mempool/rte_mempool.c\n@@ -0,0 +1,901 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n+ *   All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <stdio.h>\n+#include <string.h>\n+#include <stdint.h>\n+#include <stdarg.h>\n+#include <unistd.h>\n+#include <inttypes.h>\n+#include <errno.h>\n+#include <sys/queue.h>\n+\n+#include <rte_common.h>\n+#include <rte_log.h>\n+#include <rte_debug.h>\n+#include <rte_memory.h>\n+#include <rte_memzone.h>\n+#include <rte_malloc.h>\n+#include <rte_atomic.h>\n+#include <rte_launch.h>\n+#include <rte_tailq.h>\n+#include <rte_eal.h>\n+#include <rte_eal_memconfig.h>\n+#include <rte_per_lcore.h>\n+#include <rte_lcore.h>\n+#include <rte_branch_prediction.h>\n+#include <rte_ring.h>\n+#include <rte_errno.h>\n+#include <rte_string_fns.h>\n+#include <rte_spinlock.h>\n+\n+#include \"rte_mempool.h\"\n+\n+TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);\n+\n+#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5\n+\n+/*\n+ * return the greatest common divisor between a and b (fast algorithm)\n+ *\n+ */\n+static unsigned get_gcd(unsigned a, unsigned b)\n+{\n+\tunsigned c;\n+\n+\tif (0 == a)\n+\t\treturn b;\n+\tif (0 == b)\n+\t\treturn a;\n+\n+\tif (a < b) {\n+\t\tc = a;\n+\t\ta = b;\n+\t\tb = c;\n+\t}\n+\n+\twhile (b != 0) {\n+\t\tc = a % b;\n+\t\ta = b;\n+\t\tb = c;\n+\t}\n+\n+\treturn a;\n+}\n+\n+/*\n+ * Depending on memory configuration, objects addresses are spread\n+ * between channels and ranks in RAM: the pool allocator will add\n+ * padding between objects. 
This function return the new size of the\n+ * object.\n+ */\n+static unsigned optimize_object_size(unsigned obj_size)\n+{\n+\tunsigned nrank, nchan;\n+\tunsigned new_obj_size;\n+\n+\t/* get number of channels */\n+\tnchan = rte_memory_get_nchannel();\n+\tif (nchan == 0)\n+\t\tnchan = 1;\n+\n+\tnrank = rte_memory_get_nrank();\n+\tif (nrank == 0)\n+\t\tnrank = 1;\n+\n+\t/* process new object size */\n+\tnew_obj_size = (obj_size + RTE_CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;\n+\twhile (get_gcd(new_obj_size, nrank * nchan) != 1)\n+\t\tnew_obj_size++;\n+\treturn new_obj_size * RTE_CACHE_LINE_SIZE;\n+}\n+\n+static void\n+mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,\n+\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)\n+{\n+\tstruct rte_mempool **mpp;\n+\n+\tobj = (char *)obj + mp->header_size;\n+\n+\t/* set mempool ptr in header */\n+\tmpp = __mempool_from_obj(obj);\n+\t*mpp = mp;\n+\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\t__mempool_write_header_cookie(obj, 1);\n+\t__mempool_write_trailer_cookie(obj);\n+#endif\n+\t/* call the initializer */\n+\tif (obj_init)\n+\t\tobj_init(mp, obj_init_arg, obj, obj_idx);\n+\n+\t/* enqueue in ring */\n+\trte_ring_sp_enqueue(mp->ring, obj);\n+}\n+\n+uint32_t\n+rte_mempool_obj_iter(void *vaddr, uint32_t elt_num, size_t elt_sz, size_t align,\n+\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,\n+\trte_mempool_obj_iter_t obj_iter, void *obj_iter_arg)\n+{\n+\tuint32_t i, j, k;\n+\tuint32_t pgn;\n+\tuintptr_t end, start, va;\n+\tuintptr_t pg_sz;\n+\n+\tpg_sz = (uintptr_t)1 << pg_shift;\n+\tva = (uintptr_t)vaddr;\n+\n+\ti = 0;\n+\tj = 0;\n+\n+\twhile (i != elt_num && j != pg_num) {\n+\n+\t\tstart = RTE_ALIGN_CEIL(va, align);\n+\t\tend = start + elt_sz;\n+\n+\t\tpgn = (end >> pg_shift) - (start >> pg_shift);\n+\t\tpgn += j;\n+\n+\t\t/* do we have enough space left for the next element. 
*/\n+\t\tif (pgn >= pg_num)\n+\t\t\tbreak;\n+\n+\t\tfor (k = j;\n+\t\t\t\tk != pgn &&\n+\t\t\t\tpaddr[k] + pg_sz == paddr[k + 1];\n+\t\t\t\tk++)\n+\t\t\t;\n+\n+\t\t/*\n+\t\t * if next pgn chunks of memory physically continuous,\n+\t\t * use it to create next element.\n+\t\t * otherwise, just skip that chunk unused.\n+\t\t */\n+\t\tif (k == pgn) {\n+\t\t\tif (obj_iter != NULL)\n+\t\t\t\tobj_iter(obj_iter_arg, (void *)start,\n+\t\t\t\t\t(void *)end, i);\n+\t\t\tva = end;\n+\t\t\tj = pgn;\n+\t\t\ti++;\n+\t\t} else {\n+\t\t\tva = RTE_ALIGN_CEIL((va + 1), pg_sz);\n+\t\t\tj++;\n+\t\t}\n+\t}\n+\n+\treturn (i);\n+}\n+\n+/*\n+ * Populate  mempool with the objects.\n+ */\n+\n+struct mempool_populate_arg {\n+\tstruct rte_mempool     *mp;\n+\trte_mempool_obj_ctor_t *obj_init;\n+\tvoid                   *obj_init_arg;\n+};\n+\n+static void\n+mempool_obj_populate(void *arg, void *start, void *end, uint32_t idx)\n+{\n+\tstruct mempool_populate_arg *pa = arg;\n+\n+\tmempool_add_elem(pa->mp, start, idx, pa->obj_init, pa->obj_init_arg);\n+\tpa->mp->elt_va_end = (uintptr_t)end;\n+}\n+\n+static void\n+mempool_populate(struct rte_mempool *mp, size_t num, size_t align,\n+\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)\n+{\n+\tuint32_t elt_sz;\n+\tstruct mempool_populate_arg arg;\n+\n+\telt_sz = mp->elt_size + mp->header_size + mp->trailer_size;\n+\targ.mp = mp;\n+\targ.obj_init = obj_init;\n+\targ.obj_init_arg = obj_init_arg;\n+\n+\tmp->size = rte_mempool_obj_iter((void *)mp->elt_va_start,\n+\t\tnum, elt_sz, align,\n+\t\tmp->elt_pa, mp->pg_num, mp->pg_shift,\n+\t\tmempool_obj_populate, &arg);\n+}\n+\n+uint32_t\n+rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,\n+\tstruct rte_mempool_objsz *sz)\n+{\n+\tstruct rte_mempool_objsz lsz;\n+\n+\tsz = (sz != NULL) ? sz : &lsz;\n+\n+\t/*\n+\t * In header, we have at least the pointer to the pool, and\n+\t * optionaly a 64 bits cookie.\n+\t */\n+\tsz->header_size = 0;\n+\tsz->header_size += sizeof(struct rte_mempool *); /* ptr to pool */\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tsz->header_size += sizeof(uint64_t); /* cookie */\n+#endif\n+\tif ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)\n+\t\tsz->header_size = RTE_ALIGN_CEIL(sz->header_size,\n+\t\t\tRTE_CACHE_LINE_SIZE);\n+\n+\t/* trailer contains the cookie in debug mode */\n+\tsz->trailer_size = 0;\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tsz->trailer_size += sizeof(uint64_t); /* cookie */\n+#endif\n+\t/* element size is 8 bytes-aligned at least */\n+\tsz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));\n+\n+\t/* expand trailer to next cache line */\n+\tif ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {\n+\t\tsz->total_size = sz->header_size + sz->elt_size +\n+\t\t\tsz->trailer_size;\n+\t\tsz->trailer_size += ((RTE_CACHE_LINE_SIZE -\n+\t\t\t\t  (sz->total_size & RTE_CACHE_LINE_MASK)) &\n+\t\t\t\t RTE_CACHE_LINE_MASK);\n+\t}\n+\n+\t/*\n+\t * increase trailer to add padding between objects in order to\n+\t * spread them across memory channels/ranks\n+\t */\n+\tif ((flags & MEMPOOL_F_NO_SPREAD) == 0) {\n+\t\tunsigned new_size;\n+\t\tnew_size = optimize_object_size(sz->header_size + sz->elt_size +\n+\t\t\tsz->trailer_size);\n+\t\tsz->trailer_size = new_size - sz->header_size - sz->elt_size;\n+\t}\n+\n+\tif (! 
rte_eal_has_hugepages()) {\n+\t\t/*\n+\t\t * compute trailer size so that pool elements fit exactly in\n+\t\t * a standard page\n+\t\t */\n+\t\tint page_size = getpagesize();\n+\t\tint new_size = page_size - sz->header_size - sz->elt_size;\n+\t\tif (new_size < 0 || (unsigned int)new_size < sz->trailer_size) {\n+\t\t\tprintf(\"When hugepages are disabled, pool objects \"\n+\t\t\t       \"can't exceed PAGE_SIZE: %d + %d + %d > %d\\n\",\n+\t\t\t       sz->header_size, sz->elt_size, sz->trailer_size,\n+\t\t\t       page_size);\n+\t\t\treturn 0;\n+\t\t}\n+\t\tsz->trailer_size = new_size;\n+\t}\n+\n+\t/* this is the size of an object, including header and trailer */\n+\tsz->total_size = sz->header_size + sz->elt_size + sz->trailer_size;\n+\n+\treturn (sz->total_size);\n+}\n+\n+\n+/*\n+ * Calculate maximum amount of memory required to store given number of objects.\n+ */\n+size_t\n+rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz, uint32_t pg_shift)\n+{\n+\tsize_t n, pg_num, pg_sz, sz;\n+\n+\tpg_sz = (size_t)1 << pg_shift;\n+\n+\tif ((n = pg_sz / elt_sz) > 0) {\n+\t\tpg_num = (elt_num + n - 1) / n;\n+\t\tsz = pg_num << pg_shift;\n+\t} else {\n+\t\tsz = RTE_ALIGN_CEIL(elt_sz, pg_sz) * elt_num;\n+\t}\n+\n+\treturn (sz);\n+}\n+\n+/*\n+ * Calculate how much memory would be actually required with the\n+ * given memory footprint to store required number of elements.\n+ */\n+static void\n+mempool_lelem_iter(void *arg, __rte_unused void *start, void *end,\n+        __rte_unused uint32_t idx)\n+{\n+        *(uintptr_t *)arg = (uintptr_t)end;\n+}\n+\n+ssize_t\n+rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,\n+\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)\n+{\n+\tuint32_t n;\n+\tuintptr_t va, uv;\n+\tsize_t pg_sz, usz;\n+\n+\tpg_sz = (size_t)1 << pg_shift;\n+\tva = (uintptr_t)vaddr;\n+\tuv = va;\n+\n+\tif ((n = rte_mempool_obj_iter(vaddr, elt_num, elt_sz, 1,\n+\t\t\tpaddr, pg_num, pg_shift, mempool_lelem_iter,\n+\t\t\t&uv)) != elt_num) {\n+\t\treturn (-n);\n+\t}\n+\n+\tuv = RTE_ALIGN_CEIL(uv, pg_sz);\n+\tusz = uv - va;\n+\treturn (usz);\n+}\n+\n+/* create the mempool */\n+struct rte_mempool *\n+rte_mempool_create(const char *name, unsigned n, unsigned elt_size,\n+\t\t   unsigned cache_size, unsigned private_data_size,\n+\t\t   rte_mempool_ctor_t *mp_init, void *mp_init_arg,\n+\t\t   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n+\t\t   int socket_id, unsigned flags)\n+{\n+#ifdef RTE_LIBRTE_XEN_DOM0\n+\treturn (rte_dom0_mempool_create(name, n, elt_size,\n+\t\tcache_size, private_data_size,\n+\t\tmp_init, mp_init_arg,\n+\t\tobj_init, obj_init_arg,\n+\t\tsocket_id, flags));\n+#else\n+\treturn (rte_mempool_xmem_create(name, n, elt_size,\n+\t\tcache_size, private_data_size,\n+\t\tmp_init, mp_init_arg,\n+\t\tobj_init, obj_init_arg,\n+\t\tsocket_id, flags,\n+\t\tNULL, NULL, MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX));\n+#endif\n+}\n+\n+/*\n+ * Create the mempool over already allocated chunk of memory.\n+ * That external memory buffer can consists of physically disjoint pages.\n+ * Setting vaddr to NULL, makes mempool to fallback to original behaviour\n+ * and allocate space for mempool and it's elements as one big chunk of\n+ * physically continuos memory.\n+ * */\n+struct rte_mempool *\n+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,\n+\t\tunsigned cache_size, unsigned private_data_size,\n+\t\trte_mempool_ctor_t *mp_init, void *mp_init_arg,\n+\t\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n+\t\tint socket_id, unsigned 
flags, void *vaddr,\n+\t\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)\n+{\n+\tchar mz_name[RTE_MEMZONE_NAMESIZE];\n+\tchar rg_name[RTE_RING_NAMESIZE];\n+\tstruct rte_mempool *mp = NULL;\n+\tstruct rte_tailq_entry *te;\n+\tstruct rte_ring *r;\n+\tconst struct rte_memzone *mz;\n+\tsize_t mempool_size;\n+\tint mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;\n+\tint rg_flags = 0;\n+\tvoid *obj;\n+\tstruct rte_mempool_objsz objsz;\n+\tvoid *startaddr;\n+\tint page_size = getpagesize();\n+\n+\t/* compilation-time checks */\n+\tRTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+\tRTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+\tRTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+#endif\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tRTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+\tRTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+#endif\n+\n+\t/* check that we have an initialised tail queue */\n+\tif (RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,\n+\t\t\trte_mempool_list) == NULL) {\n+\t\trte_errno = E_RTE_NO_TAILQ;\n+\t\treturn NULL;\n+\t}\n+\n+\t/* asked cache too big */\n+\tif (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {\n+\t\trte_errno = EINVAL;\n+\t\treturn NULL;\n+\t}\n+\n+\t/* check that we have both VA and PA */\n+\tif (vaddr != NULL && paddr == NULL) {\n+\t\trte_errno = EINVAL;\n+\t\treturn NULL;\n+\t}\n+\n+\t/* Check that pg_num and pg_shift parameters are valid. */\n+\tif (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {\n+\t\trte_errno = EINVAL;\n+\t\treturn NULL;\n+\t}\n+\n+\t/* \"no cache align\" imply \"no spread\" */\n+\tif (flags & MEMPOOL_F_NO_CACHE_ALIGN)\n+\t\tflags |= MEMPOOL_F_NO_SPREAD;\n+\n+\t/* ring flags */\n+\tif (flags & MEMPOOL_F_SP_PUT)\n+\t\trg_flags |= RING_F_SP_ENQ;\n+\tif (flags & MEMPOOL_F_SC_GET)\n+\t\trg_flags |= RING_F_SC_DEQ;\n+\n+\t/* calculate mempool object sizes. */\n+\tif (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {\n+\t\trte_errno = EINVAL;\n+\t\treturn NULL;\n+\t}\n+\n+\trte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);\n+\n+\t/* allocate the ring that will be used to store objects */\n+\t/* Ring functions will return appropriate errors if we are\n+\t * running as a secondary process etc., so no checks made\n+\t * in this function for that condition */\n+\tsnprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);\n+\tr = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);\n+\tif (r == NULL)\n+\t\tgoto exit;\n+\n+\t/*\n+\t * reserve a memory zone for this mempool: private data is\n+\t * cache-aligned\n+\t */\n+\tprivate_data_size = (private_data_size +\n+\t\t\t     RTE_CACHE_LINE_MASK) & (~RTE_CACHE_LINE_MASK);\n+\n+\tif (! 
rte_eal_has_hugepages()) {\n+\t\t/*\n+\t\t * expand private data size to a whole page, so that the\n+\t\t * first pool element will start on a new standard page\n+\t\t */\n+\t\tint head = sizeof(struct rte_mempool);\n+\t\tint new_size = (private_data_size + head) % page_size;\n+\t\tif (new_size) {\n+\t\t\tprivate_data_size += page_size - new_size;\n+\t\t}\n+\t}\n+\n+\t/* try to allocate tailq entry */\n+\tte = rte_zmalloc(\"MEMPOOL_TAILQ_ENTRY\", sizeof(*te), 0);\n+\tif (te == NULL) {\n+\t\tRTE_LOG(ERR, MEMPOOL, \"Cannot allocate tailq entry!\\n\");\n+\t\tgoto exit;\n+\t}\n+\n+\t/*\n+\t * If user provided an external memory buffer, then use it to\n+\t * store mempool objects. Otherwise reserve memzone big enough to\n+\t * hold mempool header and metadata plus mempool objects.\n+\t */\n+\tmempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;\n+\tif (vaddr == NULL)\n+\t\tmempool_size += (size_t)objsz.total_size * n;\n+\n+\tif (! rte_eal_has_hugepages()) {\n+\t\t/*\n+\t\t * we want the memory pool to start on a page boundary,\n+\t\t * because pool elements crossing page boundaries would\n+\t\t * result in discontiguous physical addresses\n+\t\t */\n+\t\tmempool_size += page_size;\n+\t}\n+\n+\tsnprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);\n+\n+\tmz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);\n+\n+\t/*\n+\t * no more memory: in this case we loose previously reserved\n+\t * space for the as we cannot free it\n+\t */\n+\tif (mz == NULL) {\n+\t\trte_free(te);\n+\t\tgoto exit;\n+\t}\n+\n+\tif (rte_eal_has_hugepages()) {\n+\t\tstartaddr = (void*)mz->addr;\n+\t} else {\n+\t\t/* align memory pool start address on a page boundary */\n+\t\tunsigned long addr = (unsigned long)mz->addr;\n+\t\tif (addr & (page_size - 1)) {\n+\t\t\taddr += page_size;\n+\t\t\taddr &= ~(page_size - 1);\n+\t\t}\n+\t\tstartaddr = (void*)addr;\n+\t}\n+\n+\t/* init the mempool structure */\n+\tmp = startaddr;\n+\tmemset(mp, 0, sizeof(*mp));\n+\tsnprintf(mp->name, sizeof(mp->name), \"%s\", name);\n+\tmp->phys_addr = mz->phys_addr;\n+\tmp->ring = r;\n+\tmp->size = n;\n+\tmp->flags = flags;\n+\tmp->elt_size = objsz.elt_size;\n+\tmp->header_size = objsz.header_size;\n+\tmp->trailer_size = objsz.trailer_size;\n+\tmp->cache_size = cache_size;\n+\tmp->cache_flushthresh = (uint32_t)\n+\t\t(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);\n+\tmp->private_data_size = private_data_size;\n+\n+\t/* calculate address of the first element for continuous mempool. */\n+\tobj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +\n+\t\tprivate_data_size;\n+\n+\t/* populate address translation fields. */\n+\tmp->pg_num = pg_num;\n+\tmp->pg_shift = pg_shift;\n+\tmp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));\n+\n+\t/* mempool elements allocated together with mempool */\n+\tif (vaddr == NULL) {\n+\t\tmp->elt_va_start = (uintptr_t)obj;\n+\t\tmp->elt_pa[0] = mp->phys_addr +\n+\t\t\t(mp->elt_va_start - (uintptr_t)mp);\n+\n+\t/* mempool elements in a separate chunk of memory. 
*/\n+\t} else {\n+\t\tmp->elt_va_start = (uintptr_t)vaddr;\n+\t\tmemcpy(mp->elt_pa, paddr, sizeof (mp->elt_pa[0]) * pg_num);\n+\t}\n+\n+\tmp->elt_va_end = mp->elt_va_start;\n+\n+\t/* call the initializer */\n+\tif (mp_init)\n+\t\tmp_init(mp, mp_init_arg);\n+\n+\tmempool_populate(mp, n, 1, obj_init, obj_init_arg);\n+\n+\tte->data = (void *) mp;\n+\n+\tRTE_EAL_TAILQ_INSERT_TAIL(RTE_TAILQ_MEMPOOL, rte_mempool_list, te);\n+\n+exit:\n+\trte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n+\n+\treturn mp;\n+}\n+\n+/* Return the number of entries in the mempool */\n+unsigned\n+rte_mempool_count(const struct rte_mempool *mp)\n+{\n+\tunsigned count;\n+\n+\tcount = rte_ring_count(mp->ring);\n+\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+\t{\n+\t\tunsigned lcore_id;\n+\t\tif (mp->cache_size == 0)\n+\t\t\treturn count;\n+\n+\t\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)\n+\t\t\tcount += mp->local_cache[lcore_id].len;\n+\t}\n+#endif\n+\n+\t/*\n+\t * due to race condition (access to len is not locked), the\n+\t * total can be greater than size... so fix the result\n+\t */\n+\tif (count > mp->size)\n+\t\treturn mp->size;\n+\treturn count;\n+}\n+\n+/* dump the cache status */\n+static unsigned\n+rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)\n+{\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+\tunsigned lcore_id;\n+\tunsigned count = 0;\n+\tunsigned cache_count;\n+\n+\tfprintf(f, \"  cache infos:\\n\");\n+\tfprintf(f, \"    cache_size=%\"PRIu32\"\\n\", mp->cache_size);\n+\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n+\t\tcache_count = mp->local_cache[lcore_id].len;\n+\t\tfprintf(f, \"    cache_count[%u]=%u\\n\", lcore_id, cache_count);\n+\t\tcount += cache_count;\n+\t}\n+\tfprintf(f, \"    total_cache_count=%u\\n\", count);\n+\treturn count;\n+#else\n+\tRTE_SET_USED(mp);\n+\tfprintf(f, \"  cache disabled\\n\");\n+\treturn 0;\n+#endif\n+}\n+\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+/* check cookies before and after objects */\n+#ifndef __INTEL_COMPILER\n+#pragma GCC diagnostic ignored \"-Wcast-qual\"\n+#endif\n+\n+struct mempool_audit_arg {\n+\tconst struct rte_mempool *mp;\n+\tuintptr_t obj_end;\n+\tuint32_t obj_num;\n+};\n+\n+static void\n+mempool_obj_audit(void *arg, void *start, void *end, uint32_t idx)\n+{\n+\tstruct mempool_audit_arg *pa = arg;\n+\tvoid *obj;\n+\n+\tobj = (char *)start + pa->mp->header_size;\n+\tpa->obj_end = (uintptr_t)end;\n+\tpa->obj_num = idx + 1;\n+\t__mempool_check_cookies(pa->mp, &obj, 1, 2);\n+}\n+\n+static void\n+mempool_audit_cookies(const struct rte_mempool *mp)\n+{\n+\tuint32_t elt_sz, num;\n+\tstruct mempool_audit_arg arg;\n+\n+\telt_sz = mp->elt_size + mp->header_size + mp->trailer_size;\n+\n+\targ.mp = mp;\n+\targ.obj_end = mp->elt_va_start;\n+\targ.obj_num = 0;\n+\n+\tnum = rte_mempool_obj_iter((void *)mp->elt_va_start,\n+\t\tmp->size, elt_sz, 1,\n+\t\tmp->elt_pa, mp->pg_num, mp->pg_shift,\n+\t\tmempool_obj_audit, &arg);\n+\n+\tif (num != mp->size) {\n+\t\t\trte_panic(\"rte_mempool_obj_iter(mempool=%p, size=%u) \"\n+\t\t\t\"iterated only over %u elements\\n\",\n+\t\t\tmp, mp->size, num);\n+\t} else if (arg.obj_end != mp->elt_va_end || arg.obj_num != mp->size) {\n+\t\t\trte_panic(\"rte_mempool_obj_iter(mempool=%p, size=%u) \"\n+\t\t\t\"last callback va_end: %#tx (%#tx expeceted), \"\n+\t\t\t\"num of objects: %u (%u expected)\\n\",\n+\t\t\tmp, mp->size,\n+\t\t\targ.obj_end, mp->elt_va_end,\n+\t\t\targ.obj_num, mp->size);\n+\t}\n+}\n+\n+#ifndef __INTEL_COMPILER\n+#pragma GCC diagnostic error \"-Wcast-qual\"\n+#endif\n+#else\n+#define 
mempool_audit_cookies(mp) do {} while(0)\n+#endif\n+\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+/* check cookies before and after objects */\n+static void\n+mempool_audit_cache(const struct rte_mempool *mp)\n+{\n+\t/* check cache size consistency */\n+\tunsigned lcore_id;\n+\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n+\t\tif (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {\n+\t\t\tRTE_LOG(CRIT, MEMPOOL, \"badness on cache[%u]\\n\",\n+\t\t\t\tlcore_id);\n+\t\t\trte_panic(\"MEMPOOL: invalid cache len\\n\");\n+\t\t}\n+\t}\n+}\n+#else\n+#define mempool_audit_cache(mp) do {} while(0)\n+#endif\n+\n+\n+/* check the consistency of mempool (size, cookies, ...) */\n+void\n+rte_mempool_audit(const struct rte_mempool *mp)\n+{\n+\tmempool_audit_cache(mp);\n+\tmempool_audit_cookies(mp);\n+\n+\t/* For case where mempool DEBUG is not set, and cache size is 0 */\n+\tRTE_SET_USED(mp);\n+}\n+\n+/* dump the status of the mempool on the console */\n+void\n+rte_mempool_dump(FILE *f, const struct rte_mempool *mp)\n+{\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tstruct rte_mempool_debug_stats sum;\n+\tunsigned lcore_id;\n+#endif\n+\tunsigned common_count;\n+\tunsigned cache_count;\n+\n+\tRTE_VERIFY(f != NULL);\n+\tRTE_VERIFY(mp != NULL);\n+\n+\tfprintf(f, \"mempool <%s>@%p\\n\", mp->name, mp);\n+\tfprintf(f, \"  flags=%x\\n\", mp->flags);\n+\tfprintf(f, \"  ring=<%s>@%p\\n\", mp->ring->name, mp->ring);\n+\tfprintf(f, \"  phys_addr=0x%\" PRIx64 \"\\n\", mp->phys_addr);\n+\tfprintf(f, \"  size=%\"PRIu32\"\\n\", mp->size);\n+\tfprintf(f, \"  header_size=%\"PRIu32\"\\n\", mp->header_size);\n+\tfprintf(f, \"  elt_size=%\"PRIu32\"\\n\", mp->elt_size);\n+\tfprintf(f, \"  trailer_size=%\"PRIu32\"\\n\", mp->trailer_size);\n+\tfprintf(f, \"  total_obj_size=%\"PRIu32\"\\n\",\n+\t       mp->header_size + mp->elt_size + mp->trailer_size);\n+\n+\tfprintf(f, \"  private_data_size=%\"PRIu32\"\\n\", mp->private_data_size);\n+\tfprintf(f, \"  pg_num=%\"PRIu32\"\\n\", mp->pg_num);\n+\tfprintf(f, \"  pg_shift=%\"PRIu32\"\\n\", mp->pg_shift);\n+\tfprintf(f, \"  pg_mask=%#tx\\n\", mp->pg_mask);\n+\tfprintf(f, \"  elt_va_start=%#tx\\n\", mp->elt_va_start);\n+\tfprintf(f, \"  elt_va_end=%#tx\\n\", mp->elt_va_end);\n+\tfprintf(f, \"  elt_pa[0]=0x%\" PRIx64 \"\\n\", mp->elt_pa[0]);\n+\n+\tif (mp->size != 0)\n+\t\tfprintf(f, \"  avg bytes/object=%#Lf\\n\",\n+\t\t\t(long double)(mp->elt_va_end - mp->elt_va_start) /\n+\t\t\tmp->size);\n+\n+\tcache_count = rte_mempool_dump_cache(f, mp);\n+\tcommon_count = rte_ring_count(mp->ring);\n+\tif ((cache_count + common_count) > mp->size)\n+\t\tcommon_count = mp->size - cache_count;\n+\tfprintf(f, \"  common_pool_count=%u\\n\", common_count);\n+\n+\t/* sum and dump statistics */\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tmemset(&sum, 0, sizeof(sum));\n+\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n+\t\tsum.put_bulk += mp->stats[lcore_id].put_bulk;\n+\t\tsum.put_objs += mp->stats[lcore_id].put_objs;\n+\t\tsum.get_success_bulk += mp->stats[lcore_id].get_success_bulk;\n+\t\tsum.get_success_objs += mp->stats[lcore_id].get_success_objs;\n+\t\tsum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;\n+\t\tsum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;\n+\t}\n+\tfprintf(f, \"  stats:\\n\");\n+\tfprintf(f, \"    put_bulk=%\"PRIu64\"\\n\", sum.put_bulk);\n+\tfprintf(f, \"    put_objs=%\"PRIu64\"\\n\", sum.put_objs);\n+\tfprintf(f, \"    get_success_bulk=%\"PRIu64\"\\n\", sum.get_success_bulk);\n+\tfprintf(f, \"    get_success_objs=%\"PRIu64\"\\n\", 
sum.get_success_objs);\n+\tfprintf(f, \"    get_fail_bulk=%\"PRIu64\"\\n\", sum.get_fail_bulk);\n+\tfprintf(f, \"    get_fail_objs=%\"PRIu64\"\\n\", sum.get_fail_objs);\n+#else\n+\tfprintf(f, \"  no statistics available\\n\");\n+#endif\n+\n+\trte_mempool_audit(mp);\n+}\n+\n+/* dump the status of all mempools on the console */\n+void\n+rte_mempool_list_dump(FILE *f)\n+{\n+\tconst struct rte_mempool *mp = NULL;\n+\tstruct rte_tailq_entry *te;\n+\tstruct rte_mempool_list *mempool_list;\n+\n+\tif ((mempool_list =\n+\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {\n+\t\trte_errno = E_RTE_NO_TAILQ;\n+\t\treturn;\n+\t}\n+\n+\trte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);\n+\n+\tTAILQ_FOREACH(te, mempool_list, next) {\n+\t\tmp = (struct rte_mempool *) te->data;\n+\t\trte_mempool_dump(f, mp);\n+\t}\n+\n+\trte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n+}\n+\n+/* search a mempool from its name */\n+struct rte_mempool *\n+rte_mempool_lookup(const char *name)\n+{\n+\tstruct rte_mempool *mp = NULL;\n+\tstruct rte_tailq_entry *te;\n+\tstruct rte_mempool_list *mempool_list;\n+\n+\tif ((mempool_list =\n+\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {\n+\t\trte_errno = E_RTE_NO_TAILQ;\n+\t\treturn NULL;\n+\t}\n+\n+\trte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);\n+\n+\tTAILQ_FOREACH(te, mempool_list, next) {\n+\t\tmp = (struct rte_mempool *) te->data;\n+\t\tif (strncmp(name, mp->name, RTE_MEMPOOL_NAMESIZE) == 0)\n+\t\t\tbreak;\n+\t}\n+\n+\trte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n+\n+\tif (te == NULL) {\n+\t\trte_errno = ENOENT;\n+\t\treturn NULL;\n+\t}\n+\n+\treturn mp;\n+}\n+\n+void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),\n+\t\t      void *arg)\n+{\n+\tstruct rte_tailq_entry *te = NULL;\n+\tstruct rte_mempool_list *mempool_list;\n+\n+\tif ((mempool_list =\n+\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {\n+\t\trte_errno = E_RTE_NO_TAILQ;\n+\t\treturn;\n+\t}\n+\n+\trte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);\n+\n+\tTAILQ_FOREACH(te, mempool_list, next) {\n+\t\t(*func)((struct rte_mempool *) te->data, arg);\n+\t}\n+\n+\trte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n+}\ndiff --git a/lib/core/librte_mempool/rte_mempool.h b/lib/core/librte_mempool/rte_mempool.h\nnew file mode 100644\nindex 0000000..3314651\n--- /dev/null\n+++ b/lib/core/librte_mempool/rte_mempool.h\n@@ -0,0 +1,1392 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2010-2014 Intel Corporation. 
All rights reserved.\n+ *   All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#ifndef _RTE_MEMPOOL_H_\n+#define _RTE_MEMPOOL_H_\n+\n+/**\n+ * @file\n+ * RTE Mempool.\n+ *\n+ * A memory pool is an allocator of fixed-size object. It is\n+ * identified by its name, and uses a ring to store free objects. It\n+ * provides some other optional services, like a per-core object\n+ * cache, and an alignment helper to ensure that objects are padded\n+ * to spread them equally on all RAM channels, ranks, and so on.\n+ *\n+ * Objects owned by a mempool should never be added in another\n+ * mempool. When an object is freed using rte_mempool_put() or\n+ * equivalent, the object data is not modified; the user can save some\n+ * meta-data in the object data and retrieve them when allocating a\n+ * new object.\n+ *\n+ * Note: the mempool implementation is not preemptable. A lcore must\n+ * not be interrupted by another task that uses the same mempool\n+ * (because it uses a ring which is not preemptable). Also, mempool\n+ * functions must not be used outside the DPDK environment: for\n+ * example, in linuxapp environment, a thread that is not created by\n+ * the EAL must not use mempools. This is due to the per-lcore cache\n+ * that won't work as rte_lcore_id() will not return a correct value.\n+ */\n+\n+#include <stdio.h>\n+#include <stdlib.h>\n+#include <stdint.h>\n+#include <errno.h>\n+#include <inttypes.h>\n+#include <sys/queue.h>\n+\n+#include <rte_log.h>\n+#include <rte_debug.h>\n+#include <rte_lcore.h>\n+#include <rte_memory.h>\n+#include <rte_branch_prediction.h>\n+#include <rte_ring.h>\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#define RTE_MEMPOOL_HEADER_COOKIE1  0xbadbadbadadd2e55ULL /**< Header cookie. */\n+#define RTE_MEMPOOL_HEADER_COOKIE2  0xf2eef2eedadd2e55ULL /**< Header cookie. 
*/\n+#define RTE_MEMPOOL_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/\n+\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+/**\n+ * A structure that stores the mempool statistics (per-lcore).\n+ */\n+struct rte_mempool_debug_stats {\n+\tuint64_t put_bulk;         /**< Number of puts. */\n+\tuint64_t put_objs;         /**< Number of objects successfully put. */\n+\tuint64_t get_success_bulk; /**< Successful allocation number. */\n+\tuint64_t get_success_objs; /**< Objects successfully allocated. */\n+\tuint64_t get_fail_bulk;    /**< Failed allocation number. */\n+\tuint64_t get_fail_objs;    /**< Objects that failed to be allocated. */\n+} __rte_cache_aligned;\n+#endif\n+\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+/**\n+ * A structure that stores a per-core object cache.\n+ */\n+struct rte_mempool_cache {\n+\tunsigned len; /**< Cache len */\n+\t/*\n+\t * Cache is allocated to this size to allow it to overflow in certain\n+\t * cases to avoid needless emptying of cache.\n+\t */\n+\tvoid *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */\n+} __rte_cache_aligned;\n+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n+\n+struct rte_mempool_objsz {\n+\tuint32_t elt_size;     /**< Size of an element. */\n+\tuint32_t header_size;  /**< Size of header (before elt). */\n+\tuint32_t trailer_size; /**< Size of trailer (after elt). */\n+\tuint32_t total_size;\n+\t/**< Total size of an object (header + elt + trailer). */\n+};\n+\n+#define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool. */\n+#define RTE_MEMPOOL_MZ_PREFIX \"MP_\"\n+\n+/* \"MP_<name>\" */\n+#define\tRTE_MEMPOOL_MZ_FORMAT\tRTE_MEMPOOL_MZ_PREFIX \"%s\"\n+\n+#ifdef RTE_LIBRTE_XEN_DOM0\n+\n+/* \"<name>_MP_elt\" */\n+#define\tRTE_MEMPOOL_OBJ_NAME\t\"%s_\" RTE_MEMPOOL_MZ_PREFIX \"elt\"\n+\n+#else\n+\n+#define\tRTE_MEMPOOL_OBJ_NAME\tRTE_MEMPOOL_MZ_FORMAT\n+\n+#endif /* RTE_LIBRTE_XEN_DOM0 */\n+\n+#define\tMEMPOOL_PG_SHIFT_MAX\t(sizeof(uintptr_t) * CHAR_BIT - 1)\n+\n+/** Mempool over one chunk of physically continuous memory */\n+#define\tMEMPOOL_PG_NUM_DEFAULT\t1\n+\n+/**\n+ * The RTE mempool structure.\n+ */\n+struct rte_mempool {\n+\tchar name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */\n+\tstruct rte_ring *ring;           /**< Ring to store objects. */\n+\tphys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */\n+\tint flags;                       /**< Flags of the mempool. */\n+\tuint32_t size;                   /**< Size of the mempool. */\n+\tuint32_t cache_size;             /**< Size of per-lcore local cache. */\n+\tuint32_t cache_flushthresh;\n+\t/**< Threshold before we flush excess elements. */\n+\n+\tuint32_t elt_size;               /**< Size of an element. */\n+\tuint32_t header_size;            /**< Size of header (before elt). */\n+\tuint32_t trailer_size;           /**< Size of trailer (after elt). */\n+\n+\tunsigned private_data_size;      /**< Size of private data. */\n+\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+\t/** Per-lcore local cache. */\n+\tstruct rte_mempool_cache local_cache[RTE_MAX_LCORE];\n+#endif\n+\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\t/** Per-lcore statistics. */\n+\tstruct rte_mempool_debug_stats stats[RTE_MAX_LCORE];\n+#endif\n+\n+\t/* Address translation support, starts from next cache line. */\n+\n+\t/** Number of elements in the elt_pa array. */\n+\tuint32_t    pg_num __rte_cache_aligned;\n+\tuint32_t    pg_shift;     /**< LOG2 of the physical pages. */\n+\tuintptr_t   pg_mask;      /**< physical page mask value. 
*/\n+\tuintptr_t   elt_va_start;\n+\t/**< Virtual address of the first mempool object. */\n+\tuintptr_t   elt_va_end;\n+\t/**< Virtual address of the <size + 1> mempool object. */\n+\tphys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];\n+\t/**< Array of physical pages addresses for the mempool objects buffer. */\n+\n+}  __rte_cache_aligned;\n+\n+#define MEMPOOL_F_NO_SPREAD      0x0001 /**< Do not spread in memory. */\n+#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/\n+#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is \"single-producer\".*/\n+#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is \"single-consumer\".*/\n+\n+/**\n+ * @internal When debug is enabled, store some statistics.\n+ * @param mp\n+ *   Pointer to the memory pool.\n+ * @param name\n+ *   Name of the statistics field to increment in the memory pool.\n+ * @param n\n+ *   Number to add to the object-oriented statistics.\n+ */\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+#define __MEMPOOL_STAT_ADD(mp, name, n) do {\t\t\t\\\n+\t\tunsigned __lcore_id = rte_lcore_id();\t\t\\\n+\t\tmp->stats[__lcore_id].name##_objs += n;\t\t\\\n+\t\tmp->stats[__lcore_id].name##_bulk += 1;\t\t\\\n+\t} while(0)\n+#else\n+#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)\n+#endif\n+\n+/**\n+ * Calculates size of the mempool header.\n+ * @param mp\n+ *   Pointer to the memory pool.\n+ * @param pgn\n+ *   Number of page used to store mempool objects.\n+ */\n+#define\tMEMPOOL_HEADER_SIZE(mp, pgn)\t(sizeof(*(mp)) + \\\n+\tRTE_ALIGN_CEIL(((pgn) - RTE_DIM((mp)->elt_pa)) * \\\n+\tsizeof ((mp)->elt_pa[0]), RTE_CACHE_LINE_SIZE))\n+\n+/**\n+ * Returns TRUE if whole mempool is allocated in one contiguous block of memory.\n+ */\n+#define\tMEMPOOL_IS_CONTIG(mp)                      \\\n+\t((mp)->pg_num == MEMPOOL_PG_NUM_DEFAULT && \\\n+\t(mp)->phys_addr == (mp)->elt_pa[0])\n+\n+/**\n+ * @internal Get a pointer to a mempool pointer in the object header.\n+ * @param obj\n+ *   Pointer to object.\n+ * @return\n+ *   The pointer to the mempool from which the object was allocated.\n+ */\n+static inline struct rte_mempool **__mempool_from_obj(void *obj)\n+{\n+\tstruct rte_mempool **mpp;\n+\tunsigned off;\n+\n+\toff = sizeof(struct rte_mempool *);\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\toff += sizeof(uint64_t);\n+#endif\n+\tmpp = (struct rte_mempool **)((char *)obj - off);\n+\treturn mpp;\n+}\n+\n+/**\n+ * Return a pointer to the mempool owning this object.\n+ *\n+ * @param obj\n+ *   An object that is owned by a pool. 
If this is not the case,\n+ *   the behavior is undefined.\n+ * @return\n+ *   A pointer to the mempool structure.\n+ */\n+static inline const struct rte_mempool *rte_mempool_from_obj(void *obj)\n+{\n+\tstruct rte_mempool * const *mpp;\n+\tmpp = __mempool_from_obj(obj);\n+\treturn *mpp;\n+}\n+\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+/* get header cookie value */\n+static inline uint64_t __mempool_read_header_cookie(const void *obj)\n+{\n+\treturn *(const uint64_t *)((const char *)obj - sizeof(uint64_t));\n+}\n+\n+/* get trailer cookie value */\n+static inline uint64_t __mempool_read_trailer_cookie(void *obj)\n+{\n+\tstruct rte_mempool **mpp = __mempool_from_obj(obj);\n+\treturn *(uint64_t *)((char *)obj + (*mpp)->elt_size);\n+}\n+\n+/* write header cookie value */\n+static inline void __mempool_write_header_cookie(void *obj, int free)\n+{\n+\tuint64_t *cookie_p;\n+\tcookie_p = (uint64_t *)((char *)obj - sizeof(uint64_t));\n+\tif (free == 0)\n+\t\t*cookie_p = RTE_MEMPOOL_HEADER_COOKIE1;\n+\telse\n+\t\t*cookie_p = RTE_MEMPOOL_HEADER_COOKIE2;\n+\n+}\n+\n+/* write trailer cookie value */\n+static inline void __mempool_write_trailer_cookie(void *obj)\n+{\n+\tuint64_t *cookie_p;\n+\tstruct rte_mempool **mpp = __mempool_from_obj(obj);\n+\tcookie_p = (uint64_t *)((char *)obj + (*mpp)->elt_size);\n+\t*cookie_p = RTE_MEMPOOL_TRAILER_COOKIE;\n+}\n+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */\n+\n+/**\n+ * @internal Check and update cookies or panic.\n+ *\n+ * @param mp\n+ *   Pointer to the memory pool.\n+ * @param obj_table_const\n+ *   Pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   Index of object in object table.\n+ * @param free\n+ *   - 0: object is supposed to be allocated, mark it as free\n+ *   - 1: object is supposed to be free, mark it as allocated\n+ *   - 2: just check that cookie is valid (free or allocated)\n+ */\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+#ifndef __INTEL_COMPILER\n+#pragma GCC diagnostic ignored \"-Wcast-qual\"\n+#endif\n+static inline void __mempool_check_cookies(const struct rte_mempool *mp,\n+\t\t\t\t\t   void * const *obj_table_const,\n+\t\t\t\t\t   unsigned n, int free)\n+{\n+\tuint64_t cookie;\n+\tvoid *tmp;\n+\tvoid *obj;\n+\tvoid **obj_table;\n+\n+\t/* Force to drop the \"const\" attribute. 
This is done only when\n+\t * DEBUG is enabled */\n+\ttmp = (void *) obj_table_const;\n+\tobj_table = (void **) tmp;\n+\n+\twhile (n--) {\n+\t\tobj = obj_table[n];\n+\n+\t\tif (rte_mempool_from_obj(obj) != mp)\n+\t\t\trte_panic(\"MEMPOOL: object is owned by another \"\n+\t\t\t\t  \"mempool\\n\");\n+\n+\t\tcookie = __mempool_read_header_cookie(obj);\n+\n+\t\tif (free == 0) {\n+\t\t\tif (cookie != RTE_MEMPOOL_HEADER_COOKIE1) {\n+\t\t\t\trte_log_set_history(0);\n+\t\t\t\tRTE_LOG(CRIT, MEMPOOL,\n+\t\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n+\t\t\t\t\tobj, mp, cookie);\n+\t\t\t\trte_panic(\"MEMPOOL: bad header cookie (put)\\n\");\n+\t\t\t}\n+\t\t\t__mempool_write_header_cookie(obj, 1);\n+\t\t}\n+\t\telse if (free == 1) {\n+\t\t\tif (cookie != RTE_MEMPOOL_HEADER_COOKIE2) {\n+\t\t\t\trte_log_set_history(0);\n+\t\t\t\tRTE_LOG(CRIT, MEMPOOL,\n+\t\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n+\t\t\t\t\tobj, mp, cookie);\n+\t\t\t\trte_panic(\"MEMPOOL: bad header cookie (get)\\n\");\n+\t\t\t}\n+\t\t\t__mempool_write_header_cookie(obj, 0);\n+\t\t}\n+\t\telse if (free == 2) {\n+\t\t\tif (cookie != RTE_MEMPOOL_HEADER_COOKIE1 &&\n+\t\t\t    cookie != RTE_MEMPOOL_HEADER_COOKIE2) {\n+\t\t\t\trte_log_set_history(0);\n+\t\t\t\tRTE_LOG(CRIT, MEMPOOL,\n+\t\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n+\t\t\t\t\tobj, mp, cookie);\n+\t\t\t\trte_panic(\"MEMPOOL: bad header cookie (audit)\\n\");\n+\t\t\t}\n+\t\t}\n+\t\tcookie = __mempool_read_trailer_cookie(obj);\n+\t\tif (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {\n+\t\t\trte_log_set_history(0);\n+\t\t\tRTE_LOG(CRIT, MEMPOOL,\n+\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n+\t\t\t\tobj, mp, cookie);\n+\t\t\trte_panic(\"MEMPOOL: bad trailer cookie\\n\");\n+\t\t}\n+\t}\n+}\n+#ifndef __INTEL_COMPILER\n+#pragma GCC diagnostic error \"-Wcast-qual\"\n+#endif\n+#else\n+#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)\n+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */\n+\n+/**\n+ * An mempool's object iterator callback function.\n+ */\n+typedef void (*rte_mempool_obj_iter_t)(void * /*obj_iter_arg*/,\n+\tvoid * /*obj_start*/,\n+\tvoid * /*obj_end*/,\n+\tuint32_t /*obj_index */);\n+\n+/*\n+ * Iterates across objects of the given size and alignment in the\n+ * provided chunk of memory. 
The given memory buffer can consist of\n+ * disjoint physical pages.\n+ * For each object calls the provided callback (if any).\n+ * Used to populate mempool, walk through all elements of the mempool,\n+ * estimate how many elements of the given size could be created in the given\n+ * memory buffer.\n+ * @param vaddr\n+ *   Virtual address of the memory buffer.\n+ * @param elt_num\n+ *   Maximum number of objects to iterate through.\n+ * @param elt_sz\n+ *   Size of each object.\n+ * @param paddr\n+ *   Array of phyiscall addresses of the pages that comprises given memory\n+ *   buffer.\n+ * @param pg_num\n+ *   Number of elements in the paddr array.\n+ * @param pg_shift\n+ *   LOG2 of the physical pages size.\n+ * @param obj_iter\n+ *   Object iterator callback function (could be NULL).\n+ * @param obj_iter_arg\n+ *   User defined Prameter for the object iterator callback function.\n+ *\n+ * @return\n+ *   Number of objects iterated through.\n+ */\n+\n+uint32_t rte_mempool_obj_iter(void *vaddr,\n+\tuint32_t elt_num, size_t elt_sz, size_t align,\n+\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,\n+\trte_mempool_obj_iter_t obj_iter, void *obj_iter_arg);\n+\n+/**\n+ * An object constructor callback function for mempool.\n+ *\n+ * Arguments are the mempool, the opaque pointer given by the user in\n+ * rte_mempool_create(), the pointer to the element and the index of\n+ * the element in the pool.\n+ */\n+typedef void (rte_mempool_obj_ctor_t)(struct rte_mempool *, void *,\n+\t\t\t\t      void *, unsigned);\n+\n+/**\n+ * A mempool constructor callback function.\n+ *\n+ * Arguments are the mempool and the opaque pointer given by the user in\n+ * rte_mempool_create().\n+ */\n+typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);\n+\n+/**\n+ * Creates a new mempool named *name* in memory.\n+ *\n+ * This function uses ``memzone_reserve()`` to allocate memory. The\n+ * pool contains n elements of elt_size. Its size is set to n.\n+ * All elements of the mempool are allocated together with the mempool header,\n+ * in one physically continuous chunk of memory.\n+ *\n+ * @param name\n+ *   The name of the mempool.\n+ * @param n\n+ *   The number of elements in the mempool. The optimum size (in terms of\n+ *   memory usage) for a mempool is when n is a power of two minus one:\n+ *   n = (2^q - 1).\n+ * @param elt_size\n+ *   The size of each element.\n+ * @param cache_size\n+ *   If cache_size is non-zero, the rte_mempool library will try to\n+ *   limit the accesses to the common lockless pool, by maintaining a\n+ *   per-lcore object cache. This argument must be lower or equal to\n+ *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose\n+ *   cache_size to have \"n modulo cache_size == 0\": if this is\n+ *   not the case, some elements will always stay in the pool and will\n+ *   never be used. The access to the per-lcore table is of course\n+ *   faster than the multi-producer/consumer pool. The cache can be\n+ *   disabled if the cache_size argument is set to 0; it can be useful to\n+ *   avoid losing objects in cache. Note that even if not used, the\n+ *   memory space for cache is always reserved in a mempool structure,\n+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.\n+ * @param private_data_size\n+ *   The size of the private data appended after the mempool\n+ *   structure. 
This is useful for storing some private data after the\n+ *   mempool structure, as is done for rte_mbuf_pool for example.\n+ * @param mp_init\n+ *   A function pointer that is called for initialization of the pool,\n+ *   before object initialization. The user can initialize the private\n+ *   data in this function if needed. This parameter can be NULL if\n+ *   not needed.\n+ * @param mp_init_arg\n+ *   An opaque pointer to data that can be used in the mempool\n+ *   constructor function.\n+ * @param obj_init\n+ *   A function pointer that is called for each object at\n+ *   initialization of the pool. The user can set some meta data in\n+ *   objects if needed. This parameter can be NULL if not needed.\n+ *   The obj_init() function takes the mempool pointer, the init_arg,\n+ *   the object pointer and the object number as parameters.\n+ * @param obj_init_arg\n+ *   An opaque pointer to data that can be used as an argument for\n+ *   each call to the object constructor function.\n+ * @param socket_id\n+ *   The *socket_id* argument is the socket identifier in the case of\n+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n+ *   constraint for the reserved zone.\n+ * @param flags\n+ *   The *flags* arguments is an OR of following flags:\n+ *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread\n+ *     between channels in RAM: the pool allocator will add padding\n+ *     between objects depending on the hardware configuration. See\n+ *     Memory alignment constraints for details. If this flag is set,\n+ *     the allocator will just align them to a cache line.\n+ *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are\n+ *     cache-aligned. This flag removes this constraint, and no\n+ *     padding will be present between objects. This flag implies\n+ *     MEMPOOL_F_NO_SPREAD.\n+ *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior\n+ *     when using rte_mempool_put() or rte_mempool_put_bulk() is\n+ *     \"single-producer\". Otherwise, it is \"multi-producers\".\n+ *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior\n+ *     when using rte_mempool_get() or rte_mempool_get_bulk() is\n+ *     \"single-consumer\". Otherwise, it is \"multi-consumers\".\n+ * @return\n+ *   The pointer to the new allocated mempool, on success. NULL on error\n+ *   with rte_errno set appropriately. Possible rte_errno values include:\n+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n+ *    - E_RTE_SECONDARY - function was called from a secondary process instance\n+ *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list\n+ *    - EINVAL - cache size provided is too large\n+ *    - ENOSPC - the maximum number of memzones has already been allocated\n+ *    - EEXIST - a memzone with the same name already exists\n+ *    - ENOMEM - no appropriate memory area found in which to create memzone\n+ */\n+struct rte_mempool *\n+rte_mempool_create(const char *name, unsigned n, unsigned elt_size,\n+\t\t   unsigned cache_size, unsigned private_data_size,\n+\t\t   rte_mempool_ctor_t *mp_init, void *mp_init_arg,\n+\t\t   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n+\t\t   int socket_id, unsigned flags);\n+\n+/**\n+ * Creates a new mempool named *name* in memory.\n+ *\n+ * This function uses ``memzone_reserve()`` to allocate memory. The\n+ * pool contains n elements of elt_size. 
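As an illustrative aside (not part of the patch itself), a typical call to rte_mempool_create() as documented above might look as follows; the pool name, element count, element size, cache size and log type are assumptions chosen for the example.

#include <rte_mempool.h>
#include <rte_errno.h>
#include <rte_log.h>

static struct rte_mempool *
example_pool_create(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create("example_pool",
			8191,          /* n: 2^13 - 1, per the sizing hint above */
			2048,          /* elt_size: bytes per element */
			32,            /* cache_size: per-lcore cache (0 disables it) */
			0,             /* private_data_size: none */
			NULL, NULL,    /* mp_init / mp_init_arg: no pool constructor */
			NULL, NULL,    /* obj_init / obj_init_arg: no object constructor */
			SOCKET_ID_ANY, /* no NUMA constraint */
			0);            /* flags: multi-producer put, multi-consumer get */
	if (mp == NULL)
		RTE_LOG(ERR, USER1, "cannot create mempool, rte_errno=%d\n",
			rte_errno);
	return mp;
}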
Its size is set to n.\n+ * Depending on the input parameters, mempool elements can be either allocated\n+ * together with the mempool header, or an externally provided memory buffer\n+ * could be used to store mempool objects. In the latter case, that external\n+ * memory buffer can consist of a set of disjoint physical pages.\n+ *\n+ * @param name\n+ *   The name of the mempool.\n+ * @param n\n+ *   The number of elements in the mempool. The optimum size (in terms of\n+ *   memory usage) for a mempool is when n is a power of two minus one:\n+ *   n = (2^q - 1).\n+ * @param elt_size\n+ *   The size of each element.\n+ * @param cache_size\n+ *   If cache_size is non-zero, the rte_mempool library will try to\n+ *   limit the accesses to the common lockless pool, by maintaining a\n+ *   per-lcore object cache. This argument must be lower or equal to\n+ *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose\n+ *   cache_size to have \"n modulo cache_size == 0\": if this is\n+ *   not the case, some elements will always stay in the pool and will\n+ *   never be used. The access to the per-lcore table is of course\n+ *   faster than the multi-producer/consumer pool. The cache can be\n+ *   disabled if the cache_size argument is set to 0; it can be useful to\n+ *   avoid losing objects in cache. Note that even if not used, the\n+ *   memory space for cache is always reserved in a mempool structure,\n+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.\n+ * @param private_data_size\n+ *   The size of the private data appended after the mempool\n+ *   structure. This is useful for storing some private data after the\n+ *   mempool structure, as is done for rte_mbuf_pool for example.\n+ * @param mp_init\n+ *   A function pointer that is called for initialization of the pool,\n+ *   before object initialization. The user can initialize the private\n+ *   data in this function if needed. This parameter can be NULL if\n+ *   not needed.\n+ * @param mp_init_arg\n+ *   An opaque pointer to data that can be used in the mempool\n+ *   constructor function.\n+ * @param obj_init\n+ *   A function pointer that is called for each object at\n+ *   initialization of the pool. The user can set some meta data in\n+ *   objects if needed. This parameter can be NULL if not needed.\n+ *   The obj_init() function takes the mempool pointer, the init_arg,\n+ *   the object pointer and the object number as parameters.\n+ * @param obj_init_arg\n+ *   An opaque pointer to data that can be used as an argument for\n+ *   each call to the object constructor function.\n+ * @param socket_id\n+ *   The *socket_id* argument is the socket identifier in the case of\n+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n+ *   constraint for the reserved zone.\n+ * @param flags\n+ *   The *flags* arguments is an OR of following flags:\n+ *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread\n+ *     between channels in RAM: the pool allocator will add padding\n+ *     between objects depending on the hardware configuration. See\n+ *     Memory alignment constraints for details. If this flag is set,\n+ *     the allocator will just align them to a cache line.\n+ *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are\n+ *     cache-aligned. This flag removes this constraint, and no\n+ *     padding will be present between objects. 
This flag implies\n+ *     MEMPOOL_F_NO_SPREAD.\n+ *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior\n+ *     when using rte_mempool_put() or rte_mempool_put_bulk() is\n+ *     \"single-producer\". Otherwise, it is \"multi-producers\".\n+ *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior\n+ *     when using rte_mempool_get() or rte_mempool_get_bulk() is\n+ *     \"single-consumer\". Otherwise, it is \"multi-consumers\".\n+ * @param vaddr\n+ *   Virtual address of the externally allocated memory buffer.\n+ *   Will be used to store mempool objects.\n+ * @param paddr\n+ *   Array of physical addresses of the pages that comprise the given memory\n+ *   buffer.\n+ * @param pg_num\n+ *   Number of elements in the paddr array.\n+ * @param pg_shift\n+ *   LOG2 of the physical page size.\n+ * @return\n+ *   The pointer to the new allocated mempool, on success. NULL on error\n+ *   with rte_errno set appropriately. Possible rte_errno values include:\n+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n+ *    - E_RTE_SECONDARY - function was called from a secondary process instance\n+ *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list\n+ *    - EINVAL - cache size provided is too large\n+ *    - ENOSPC - the maximum number of memzones has already been allocated\n+ *    - EEXIST - a memzone with the same name already exists\n+ *    - ENOMEM - no appropriate memory area found in which to create memzone\n+ */\n+struct rte_mempool *\n+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,\n+\t\tunsigned cache_size, unsigned private_data_size,\n+\t\trte_mempool_ctor_t *mp_init, void *mp_init_arg,\n+\t\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n+\t\tint socket_id, unsigned flags, void *vaddr,\n+\t\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);\n+\n+#ifdef RTE_LIBRTE_XEN_DOM0\n+/**\n+ * Creates a new mempool named *name* in memory on Xen Dom0.\n+ *\n+ * This function uses ``rte_mempool_xmem_create()`` to allocate memory. The\n+ * pool contains n elements of elt_size. Its size is set to n.\n+ * All elements of the mempool are allocated together with the mempool header,\n+ * and the memory buffer can consist of a set of disjoint physical pages.\n+ *\n+ * @param name\n+ *   The name of the mempool.\n+ * @param n\n+ *   The number of elements in the mempool. The optimum size (in terms of\n+ *   memory usage) for a mempool is when n is a power of two minus one:\n+ *   n = (2^q - 1).\n+ * @param elt_size\n+ *   The size of each element.\n+ * @param cache_size\n+ *   If cache_size is non-zero, the rte_mempool library will try to\n+ *   limit the accesses to the common lockless pool, by maintaining a\n+ *   per-lcore object cache. This argument must be lower or equal to\n+ *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose\n+ *   cache_size to have \"n modulo cache_size == 0\": if this is\n+ *   not the case, some elements will always stay in the pool and will\n+ *   never be used. The access to the per-lcore table is of course\n+ *   faster than the multi-producer/consumer pool. The cache can be\n+ *   disabled if the cache_size argument is set to 0; it can be useful to\n+ *   avoid losing objects in cache. 
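As an illustrative aside (not part of the patch itself), rte_mempool_xmem_create() is typically driven the way the Xen Dom0 wrapper further down in this patch drives it: size the external buffer with rte_mempool_xmem_size(), collect the per-page physical addresses, then pass everything in. How the caller obtains the buffer and its physical pages is platform specific; the function and parameter names below are assumptions, and 2 MB pages are assumed only for the example.

#include <rte_mempool.h>
#include <rte_memory.h>
#include <rte_common.h>

static struct rte_mempool *
example_xmem_pool(const char *name, uint32_t elt_num, uint32_t elt_size,
		  void *buf, size_t buf_len,
		  const phys_addr_t pages[], uint32_t pg_num)
{
	uint32_t pg_shift = rte_bsf32(RTE_PGSIZE_2M);  /* LOG2 of the page size */
	uint32_t total_sz = rte_mempool_calc_obj_size(elt_size, 0, NULL);

	/* refuse to build the pool if the external buffer is too small */
	if (rte_mempool_xmem_size(elt_num, total_sz, pg_shift) > buf_len)
		return NULL;

	return rte_mempool_xmem_create(name, elt_num, elt_size,
			0, 0,                   /* no cache, no private data */
			NULL, NULL, NULL, NULL, /* no constructors */
			SOCKET_ID_ANY, 0,
			buf, pages, pg_num, pg_shift);
}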
Note that even if not used, the\n+ *   memory space for cache is always reserved in a mempool structure,\n+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.\n+ * @param private_data_size\n+ *   The size of the private data appended after the mempool\n+ *   structure. This is useful for storing some private data after the\n+ *   mempool structure, as is done for rte_mbuf_pool for example.\n+ * @param mp_init\n+ *   A function pointer that is called for initialization of the pool,\n+ *   before object initialization. The user can initialize the private\n+ *   data in this function if needed. This parameter can be NULL if\n+ *   not needed.\n+ * @param mp_init_arg\n+ *   An opaque pointer to data that can be used in the mempool\n+ *   constructor function.\n+ * @param obj_init\n+ *   A function pointer that is called for each object at\n+ *   initialization of the pool. The user can set some meta data in\n+ *   objects if needed. This parameter can be NULL if not needed.\n+ *   The obj_init() function takes the mempool pointer, the init_arg,\n+ *   the object pointer and the object number as parameters.\n+ * @param obj_init_arg\n+ *   An opaque pointer to data that can be used as an argument for\n+ *   each call to the object constructor function.\n+ * @param socket_id\n+ *   The *socket_id* argument is the socket identifier in the case of\n+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n+ *   constraint for the reserved zone.\n+ * @param flags\n+ *   The *flags* arguments is an OR of following flags:\n+ *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread\n+ *     between channels in RAM: the pool allocator will add padding\n+ *     between objects depending on the hardware configuration. See\n+ *     Memory alignment constraints for details. If this flag is set,\n+ *     the allocator will just align them to a cache line.\n+ *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are\n+ *     cache-aligned. This flag removes this constraint, and no\n+ *     padding will be present between objects. This flag implies\n+ *     MEMPOOL_F_NO_SPREAD.\n+ *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior\n+ *     when using rte_mempool_put() or rte_mempool_put_bulk() is\n+ *     \"single-producer\". Otherwise, it is \"multi-producers\".\n+ *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior\n+ *     when using rte_mempool_get() or rte_mempool_get_bulk() is\n+ *     \"single-consumer\". Otherwise, it is \"multi-consumers\".\n+ * @return\n+ *   The pointer to the new allocated mempool, on success. NULL on error\n+ *   with rte_errno set appropriately. 
Possible rte_errno values include:\n+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n+ *    - E_RTE_SECONDARY - function was called from a secondary process instance\n+ *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list\n+ *    - EINVAL - cache size provided is too large\n+ *    - ENOSPC - the maximum number of memzones has already been allocated\n+ *    - EEXIST - a memzone with the same name already exists\n+ *    - ENOMEM - no appropriate memory area found in which to create memzone\n+ */\n+struct rte_mempool *\n+rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,\n+\t\tunsigned cache_size, unsigned private_data_size,\n+\t\trte_mempool_ctor_t *mp_init, void *mp_init_arg,\n+\t\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n+\t\tint socket_id, unsigned flags);\n+#endif\n+\n+/**\n+ * Dump the status of the mempool to the console.\n+ *\n+ * @param f\n+ *   A pointer to a file for output\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ */\n+void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);\n+\n+/**\n+ * @internal Put several objects back in the mempool; used internally.\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to store back in the mempool, must be strictly\n+ *   positive.\n+ * @param is_mp\n+ *   Mono-producer (0) or multi-producers (1).\n+ */\n+static inline void __attribute__((always_inline))\n+__mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n+\t\t    unsigned n, int is_mp)\n+{\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+\tstruct rte_mempool_cache *cache;\n+\tuint32_t index;\n+\tvoid **cache_objs;\n+\tunsigned lcore_id = rte_lcore_id();\n+\tuint32_t cache_size = mp->cache_size;\n+\tuint32_t flushthresh = mp->cache_flushthresh;\n+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n+\n+\t/* increment stat now, adding in mempool always success */\n+\t__MEMPOOL_STAT_ADD(mp, put, n);\n+\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+\t/* cache is not enabled or single producer */\n+\tif (unlikely(cache_size == 0 || is_mp == 0))\n+\t\tgoto ring_enqueue;\n+\n+\t/* Go straight to ring if put would overflow mem allocated for cache */\n+\tif (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE))\n+\t\tgoto ring_enqueue;\n+\n+\tcache = &mp->local_cache[lcore_id];\n+\tcache_objs = &cache->objs[cache->len];\n+\n+\t/*\n+\t * The cache follows the following algorithm\n+\t *   1. Add the objects to the cache\n+\t *   2. 
Anything greater than the cache min value (if it crosses the\n+\t *   cache flush threshold) is flushed to the ring.\n+\t */\n+\n+\t/* Add elements back into the cache */\n+\tfor (index = 0; index < n; ++index, obj_table++)\n+\t\tcache_objs[index] = *obj_table;\n+\n+\tcache->len += n;\n+\n+\tif (cache->len >= flushthresh) {\n+\t\trte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],\n+\t\t\t\tcache->len - cache_size);\n+\t\tcache->len = cache_size;\n+\t}\n+\n+\treturn;\n+\n+ring_enqueue:\n+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n+\n+\t/* push remaining objects in ring */\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tif (is_mp) {\n+\t\tif (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)\n+\t\t\trte_panic(\"cannot put objects in mempool\\n\");\n+\t}\n+\telse {\n+\t\tif (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)\n+\t\t\trte_panic(\"cannot put objects in mempool\\n\");\n+\t}\n+#else\n+\tif (is_mp)\n+\t\trte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);\n+\telse\n+\t\trte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);\n+#endif\n+}\n+\n+\n+/**\n+ * Put several objects back in the mempool (multi-producers safe).\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the mempool from the obj_table.\n+ */\n+static inline void __attribute__((always_inline))\n+rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n+\t\t\tunsigned n)\n+{\n+\t__mempool_check_cookies(mp, obj_table, n, 0);\n+\t__mempool_put_bulk(mp, obj_table, n, 1);\n+}\n+\n+/**\n+ * Put several objects back in the mempool (NOT multi-producers safe).\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the mempool from obj_table.\n+ */\n+static inline void\n+rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n+\t\t\tunsigned n)\n+{\n+\t__mempool_check_cookies(mp, obj_table, n, 0);\n+\t__mempool_put_bulk(mp, obj_table, n, 0);\n+}\n+\n+/**\n+ * Put several objects back in the mempool.\n+ *\n+ * This function calls the multi-producer or the single-producer\n+ * version depending on the default behavior that was specified at\n+ * mempool creation time (see flags).\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the mempool from obj_table.\n+ */\n+static inline void __attribute__((always_inline))\n+rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n+\t\t     unsigned n)\n+{\n+\t__mempool_check_cookies(mp, obj_table, n, 0);\n+\t__mempool_put_bulk(mp, obj_table, n, !(mp->flags & MEMPOOL_F_SP_PUT));\n+}\n+\n+/**\n+ * Put one object in the mempool (multi-producers safe).\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj\n+ *   A pointer to the object to be added.\n+ */\n+static inline void __attribute__((always_inline))\n+rte_mempool_mp_put(struct rte_mempool *mp, void *obj)\n+{\n+\trte_mempool_mp_put_bulk(mp, &obj, 1);\n+}\n+\n+/**\n+ * Put one object back in the mempool (NOT multi-producers safe).\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj\n+ *   A pointer to the object to be added.\n+ */\n+static inline void __attribute__((always_inline))\n+rte_mempool_sp_put(struct 
rte_mempool *mp, void *obj)\n+{\n+\trte_mempool_sp_put_bulk(mp, &obj, 1);\n+}\n+\n+/**\n+ * Put one object back in the mempool.\n+ *\n+ * This function calls the multi-producer or the single-producer\n+ * version depending on the default behavior that was specified at\n+ * mempool creation time (see flags).\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj\n+ *   A pointer to the object to be added.\n+ */\n+static inline void __attribute__((always_inline))\n+rte_mempool_put(struct rte_mempool *mp, void *obj)\n+{\n+\trte_mempool_put_bulk(mp, &obj, 1);\n+}\n+\n+/**\n+ * @internal Get several objects from the mempool; used internally.\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to get, must be strictly positive.\n+ * @param is_mc\n+ *   Mono-consumer (0) or multi-consumers (1).\n+ * @return\n+ *   - >=0: Success; number of objects supplied.\n+ *   - <0: Error; code of ring dequeue function.\n+ */\n+static inline int __attribute__((always_inline))\n+__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,\n+\t\t   unsigned n, int is_mc)\n+{\n+\tint ret;\n+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n+\tstruct rte_mempool_cache *cache;\n+\tuint32_t index, len;\n+\tvoid **cache_objs;\n+\tunsigned lcore_id = rte_lcore_id();\n+\tuint32_t cache_size = mp->cache_size;\n+\n+\t/* cache is not enabled or single consumer */\n+\tif (unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size))\n+\t\tgoto ring_dequeue;\n+\n+\tcache = &mp->local_cache[lcore_id];\n+\tcache_objs = cache->objs;\n+\n+\t/* Can this be satisfied from the cache? */\n+\tif (cache->len < n) {\n+\t\t/* No. Backfill the cache first, and then fill from it */\n+\t\tuint32_t req = n + (cache_size - cache->len);\n+\n+\t\t/* How many do we require i.e. number to fill the cache + the request */\n+\t\tret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);\n+\t\tif (unlikely(ret < 0)) {\n+\t\t\t/*\n+\t\t\t * In the offchance that we are buffer constrained,\n+\t\t\t * where we are not able to allocate cache + n, go to\n+\t\t\t * the ring directly. If that fails, we are truly out of\n+\t\t\t * buffers.\n+\t\t\t */\n+\t\t\tgoto ring_dequeue;\n+\t\t}\n+\n+\t\tcache->len += req;\n+\t}\n+\n+\t/* Now fill in the response ... */\n+\tfor (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)\n+\t\t*obj_table = cache_objs[len];\n+\n+\tcache->len -= n;\n+\n+\t__MEMPOOL_STAT_ADD(mp, get_success, n);\n+\n+\treturn 0;\n+\n+ring_dequeue:\n+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n+\n+\t/* get remaining objects from ring */\n+\tif (is_mc)\n+\t\tret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);\n+\telse\n+\t\tret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);\n+\n+\tif (ret < 0)\n+\t\t__MEMPOOL_STAT_ADD(mp, get_fail, n);\n+\telse\n+\t\t__MEMPOOL_STAT_ADD(mp, get_success, n);\n+\n+\treturn ret;\n+}\n+\n+/**\n+ * Get several objects from the mempool (multi-consumers safe).\n+ *\n+ * If cache is enabled, objects will be retrieved first from cache,\n+ * subsequently from the common pool. 
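As an illustrative aside (not part of the patch itself), the put family above is normally used through the flag-driven default variants; the explicit _mp/_sp variants are only needed when the caller wants to override the behavior chosen at creation time. Function and parameter names below are assumptions.

static void
example_put(struct rte_mempool *mp, void *obj, void **objs, unsigned n)
{
	/* Default variants honour the MEMPOOL_F_SP_PUT flag given at
	 * creation time. */
	rte_mempool_put(mp, obj);
	rte_mempool_put_bulk(mp, objs, n);

	/* Explicit variants force the choice instead:
	 *   rte_mempool_mp_put_bulk(mp, objs, n);   multi-producer safe
	 *   rte_mempool_sp_put_bulk(mp, objs, n);   single producer only
	 */
}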
Note that it can return -ENOENT when\n+ * the local cache and common pool are empty, even if cache from other\n+ * lcores are full.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to get from mempool to obj_table.\n+ * @return\n+ *   - 0: Success; objects taken.\n+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)\n+{\n+\tint ret;\n+\tret = __mempool_get_bulk(mp, obj_table, n, 1);\n+\tif (ret == 0)\n+\t\t__mempool_check_cookies(mp, obj_table, n, 1);\n+\treturn ret;\n+}\n+\n+/**\n+ * Get several objects from the mempool (NOT multi-consumers safe).\n+ *\n+ * If cache is enabled, objects will be retrieved first from cache,\n+ * subsequently from the common pool. Note that it can return -ENOENT when\n+ * the local cache and common pool are empty, even if cache from other\n+ * lcores are full.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to get from the mempool to obj_table.\n+ * @return\n+ *   - 0: Success; objects taken.\n+ *   - -ENOENT: Not enough entries in the mempool; no object is\n+ *     retrieved.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)\n+{\n+\tint ret;\n+\tret = __mempool_get_bulk(mp, obj_table, n, 0);\n+\tif (ret == 0)\n+\t\t__mempool_check_cookies(mp, obj_table, n, 1);\n+\treturn ret;\n+}\n+\n+/**\n+ * Get several objects from the mempool.\n+ *\n+ * This function calls the multi-consumers or the single-consumer\n+ * version, depending on the default behaviour that was specified at\n+ * mempool creation time (see flags).\n+ *\n+ * If cache is enabled, objects will be retrieved first from cache,\n+ * subsequently from the common pool. Note that it can return -ENOENT when\n+ * the local cache and common pool are empty, even if cache from other\n+ * lcores are full.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to get from the mempool to obj_table.\n+ * @return\n+ *   - 0: Success; objects taken\n+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)\n+{\n+\tint ret;\n+\tret = __mempool_get_bulk(mp, obj_table, n,\n+\t\t\t\t !(mp->flags & MEMPOOL_F_SC_GET));\n+\tif (ret == 0)\n+\t\t__mempool_check_cookies(mp, obj_table, n, 1);\n+\treturn ret;\n+}\n+\n+/**\n+ * Get one object from the mempool (multi-consumers safe).\n+ *\n+ * If cache is enabled, objects will be retrieved first from cache,\n+ * subsequently from the common pool. 
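As an illustrative aside (not part of the patch itself), a bulk get is all-or-nothing: either all n objects are returned or none are, so the return value must be checked before the objects are used or released. The burst size and names below are assumptions.

static int
example_get_burst(struct rte_mempool *mp)
{
	void *objs[32];

	if (rte_mempool_get_bulk(mp, objs, 32) < 0)
		return -1;      /* -ENOENT: pool and local cache exhausted */

	/* ... use the 32 objects ... */

	rte_mempool_put_bulk(mp, objs, 32);
	return 0;
}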
Note that it can return -ENOENT when\n+ * the local cache and common pool are empty, even if cache from other\n+ * lcores are full.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_p\n+ *   A pointer to a void * pointer (object) that will be filled.\n+ * @return\n+ *   - 0: Success; objects taken.\n+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)\n+{\n+\treturn rte_mempool_mc_get_bulk(mp, obj_p, 1);\n+}\n+\n+/**\n+ * Get one object from the mempool (NOT multi-consumers safe).\n+ *\n+ * If cache is enabled, objects will be retrieved first from cache,\n+ * subsequently from the common pool. Note that it can return -ENOENT when\n+ * the local cache and common pool are empty, even if cache from other\n+ * lcores are full.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_p\n+ *   A pointer to a void * pointer (object) that will be filled.\n+ * @return\n+ *   - 0: Success; objects taken.\n+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)\n+{\n+\treturn rte_mempool_sc_get_bulk(mp, obj_p, 1);\n+}\n+\n+/**\n+ * Get one object from the mempool.\n+ *\n+ * This function calls the multi-consumers or the single-consumer\n+ * version, depending on the default behavior that was specified at\n+ * mempool creation (see flags).\n+ *\n+ * If cache is enabled, objects will be retrieved first from cache,\n+ * subsequently from the common pool. Note that it can return -ENOENT when\n+ * the local cache and common pool are empty, even if cache from other\n+ * lcores are full.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param obj_p\n+ *   A pointer to a void * pointer (object) that will be filled.\n+ * @return\n+ *   - 0: Success; objects taken.\n+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_mempool_get(struct rte_mempool *mp, void **obj_p)\n+{\n+\treturn rte_mempool_get_bulk(mp, obj_p, 1);\n+}\n+\n+/**\n+ * Return the number of entries in the mempool.\n+ *\n+ * When cache is enabled, this function has to browse the length of\n+ * all lcores, so it should not be used in a data path, but only for\n+ * debug purposes.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @return\n+ *   The number of entries in the mempool.\n+ */\n+unsigned rte_mempool_count(const struct rte_mempool *mp);\n+\n+/**\n+ * Return the number of free entries in the mempool ring.\n+ * i.e. how many entries can be freed back to the mempool.\n+ *\n+ * NOTE: This corresponds to the number of elements *allocated* from the\n+ * memory pool, not the number of elements in the pool itself. 
To count\n+ * the number elements currently available in the pool, use \"rte_mempool_count\"\n+ *\n+ * When cache is enabled, this function has to browse the length of\n+ * all lcores, so it should not be used in a data path, but only for\n+ * debug purposes.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @return\n+ *   The number of free entries in the mempool.\n+ */\n+static inline unsigned\n+rte_mempool_free_count(const struct rte_mempool *mp)\n+{\n+\treturn mp->size - rte_mempool_count(mp);\n+}\n+\n+/**\n+ * Test if the mempool is full.\n+ *\n+ * When cache is enabled, this function has to browse the length of all\n+ * lcores, so it should not be used in a data path, but only for debug\n+ * purposes.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @return\n+ *   - 1: The mempool is full.\n+ *   - 0: The mempool is not full.\n+ */\n+static inline int\n+rte_mempool_full(const struct rte_mempool *mp)\n+{\n+\treturn !!(rte_mempool_count(mp) == mp->size);\n+}\n+\n+/**\n+ * Test if the mempool is empty.\n+ *\n+ * When cache is enabled, this function has to browse the length of all\n+ * lcores, so it should not be used in a data path, but only for debug\n+ * purposes.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @return\n+ *   - 1: The mempool is empty.\n+ *   - 0: The mempool is not empty.\n+ */\n+static inline int\n+rte_mempool_empty(const struct rte_mempool *mp)\n+{\n+\treturn !!(rte_mempool_count(mp) == 0);\n+}\n+\n+/**\n+ * Return the physical address of elt, which is an element of the pool mp.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param elt\n+ *   A pointer (virtual address) to the element of the pool.\n+ * @return\n+ *   The physical address of the elt element.\n+ */\n+static inline phys_addr_t\n+rte_mempool_virt2phy(const struct rte_mempool *mp, const void *elt)\n+{\n+\tif (rte_eal_has_hugepages()) {\n+\t\tuintptr_t off;\n+\n+\t\toff = (const char *)elt - (const char *)mp->elt_va_start;\n+\t\treturn (mp->elt_pa[off >> mp->pg_shift] + (off & mp->pg_mask));\n+\t} else {\n+\t\t/*\n+\t\t * If huge pages are disabled, we cannot assume the\n+\t\t * memory region to be physically contiguous.\n+\t\t * Lookup for each element.\n+\t\t */\n+\t\treturn rte_mem_virt2phy(elt);\n+\t}\n+}\n+\n+/**\n+ * Check the consistency of mempool objects.\n+ *\n+ * Verify the coherency of fields in the mempool structure. Also check\n+ * that the cookies of mempool objects (even the ones that are not\n+ * present in pool) have a correct value. If not, a panic will occur.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ */\n+void rte_mempool_audit(const struct rte_mempool *mp);\n+\n+/**\n+ * Return a pointer to the private data in an mempool structure.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @return\n+ *   A pointer to the private data.\n+ */\n+static inline void *rte_mempool_get_priv(struct rte_mempool *mp)\n+{\n+\treturn (char *)mp + MEMPOOL_HEADER_SIZE(mp, mp->pg_num);\n+}\n+\n+/**\n+ * Dump the status of all mempools on the console\n+ *\n+ * @param f\n+ *   A pointer to a file for output\n+ */\n+void rte_mempool_list_dump(FILE *f);\n+\n+/**\n+ * Search a mempool from its name\n+ *\n+ * @param name\n+ *   The name of the mempool.\n+ * @return\n+ *   The pointer to the mempool matching the name, or NULL if not found.\n+ *   NULL on error\n+ *   with rte_errno set appropriately. 
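As an illustrative aside (not part of the patch itself), the accounting helpers above are meant for debugging rather than the data path, since they walk every lcore cache; a status dump could look like the sketch below, assuming <stdio.h> and <inttypes.h>.

#include <stdio.h>
#include <inttypes.h>

static void
example_pool_status(FILE *f, const struct rte_mempool *mp, const void *elt)
{
	fprintf(f, "pool %s: %u available, %u in use, full=%d empty=%d\n",
		mp->name, rte_mempool_count(mp), rte_mempool_free_count(mp),
		rte_mempool_full(mp), rte_mempool_empty(mp));

	fprintf(f, "element %p -> physical address 0x%" PRIx64 "\n",
		elt, (uint64_t)rte_mempool_virt2phy(mp, elt));
}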
Possible rte_errno values include:\n+ *    - ENOENT - required entry not available to return.\n+ *\n+ */\n+struct rte_mempool *rte_mempool_lookup(const char *name);\n+\n+/**\n+ * Given a desired size of the mempool element and mempool flags,\n+ * calculates header, trailer, body and total sizes of the mempool object.\n+ * @param elt_size\n+ *   The size of each element.\n+ * @param flags\n+ *   The flags used for the mempool creation.\n+ *   Consult rte_mempool_create() for more information about possible values.\n+ * @param sz\n+ *   A pointer to a structure that receives the calculated header, trailer\n+ *   and element sizes (can be NULL).\n+ * @return\n+ *   Total size of the mempool object.\n+ */\n+uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,\n+\tstruct rte_mempool_objsz *sz);\n+\n+/**\n+ * Calculate maximum amount of memory required to store given number of objects.\n+ * Assumes that the memory buffer will be aligned at page boundary.\n+ * Note that if the object size is bigger than the page size, then it is assumed\n+ * that we have a subset of physically continuous pages big enough to store\n+ * at least one object.\n+ * @param elt_num\n+ *   Number of elements.\n+ * @param elt_sz\n+ *   The size of each element.\n+ * @param pg_shift\n+ *   LOG2 of the physical page size.\n+ * @return\n+ *   Required memory size aligned at page boundary.\n+ */\n+size_t rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz,\n+\tuint32_t pg_shift);\n+\n+/**\n+ * Calculate how much memory would be actually required with the given\n+ * memory footprint to store required number of objects.\n+ * @param vaddr\n+ *   Virtual address of the externally allocated memory buffer.\n+ *   Will be used to store mempool objects.\n+ * @param elt_num\n+ *   Number of elements.\n+ * @param elt_sz\n+ *   The size of each element.\n+ * @param paddr\n+ *   Array of physical addresses of the pages that comprise the given memory\n+ *   buffer.\n+ * @param pg_num\n+ *   Number of elements in the paddr array.\n+ * @param pg_shift\n+ *   LOG2 of the physical page size.\n+ * @return\n+ *   Number of bytes needed to store given number of objects,\n+ *   aligned to the given page size.\n+ *   If provided memory buffer is not big enough:\n+ *   (-1) * actual number of elements that can be stored in that buffer.\n+ */\n+ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,\n+\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);\n+\n+/**\n+ * Walk the list of all memory pools\n+ *\n+ * @param func\n+ *   Iterator function\n+ * @param arg\n+ *   Argument passed to iterator\n+ */\n+void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),\n+\t\t      void *arg);\n+\n+#ifdef __cplusplus\n+}\n+#endif\n+\n+#endif /* _RTE_MEMPOOL_H_ */\ndiff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile\ndeleted file mode 100644\nindex 9939e10..0000000\n--- a/lib/librte_mempool/Makefile\n+++ /dev/null\n@@ -1,51 +0,0 @@\n-#   BSD LICENSE\n-#\n-#   Copyright(c) 2010-2014 Intel Corporation. 
All rights reserved.\n-#   All rights reserved.\n-#\n-#   Redistribution and use in source and binary forms, with or without\n-#   modification, are permitted provided that the following conditions\n-#   are met:\n-#\n-#     * Redistributions of source code must retain the above copyright\n-#       notice, this list of conditions and the following disclaimer.\n-#     * Redistributions in binary form must reproduce the above copyright\n-#       notice, this list of conditions and the following disclaimer in\n-#       the documentation and/or other materials provided with the\n-#       distribution.\n-#     * Neither the name of Intel Corporation nor the names of its\n-#       contributors may be used to endorse or promote products derived\n-#       from this software without specific prior written permission.\n-#\n-#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n-#   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n-#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n-#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n-#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n-#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n-#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n-#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n-#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n-#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n-#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n-\n-include $(RTE_SDK)/mk/rte.vars.mk\n-\n-# library name\n-LIB = librte_mempool.a\n-\n-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3\n-\n-# all source are stored in SRCS-y\n-SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c\n-ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)\n-SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c\n-endif\n-# install includes\n-SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h\n-\n-# this lib needs eal, rte_ring and rte_malloc\n-DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_eal lib/librte_ring\n-DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_malloc\n-\n-include $(RTE_SDK)/mk/rte.lib.mk\ndiff --git a/lib/librte_mempool/rte_dom0_mempool.c b/lib/librte_mempool/rte_dom0_mempool.c\ndeleted file mode 100644\nindex 9ec68fb..0000000\n--- a/lib/librte_mempool/rte_dom0_mempool.c\n+++ /dev/null\n@@ -1,134 +0,0 @@\n-/*-\n- *   BSD LICENSE\n- *\n- *   Copyright(c) 2010-2014 Intel Corporation. 
All rights reserved.\n- *   All rights reserved.\n- *\n- *   Redistribution and use in source and binary forms, with or without\n- *   modification, are permitted provided that the following conditions\n- *   are met:\n- *\n- *     * Redistributions of source code must retain the above copyright\n- *       notice, this list of conditions and the following disclaimer.\n- *     * Redistributions in binary form must reproduce the above copyright\n- *       notice, this list of conditions and the following disclaimer in\n- *       the documentation and/or other materials provided with the\n- *       distribution.\n- *     * Neither the name of Intel Corporation nor the names of its\n- *       contributors may be used to endorse or promote products derived\n- *       from this software without specific prior written permission.\n- *\n- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n- *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n- */\n-\n-#include <stdio.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <unistd.h>\n-#include <stdarg.h>\n-#include <inttypes.h>\n-#include <errno.h>\n-#include <sys/queue.h>\n-\n-#include <rte_common.h>\n-#include <rte_log.h>\n-#include <rte_debug.h>\n-#include <rte_memory.h>\n-#include <rte_memzone.h>\n-#include <rte_atomic.h>\n-#include <rte_launch.h>\n-#include <rte_tailq.h>\n-#include <rte_eal.h>\n-#include <rte_eal_memconfig.h>\n-#include <rte_per_lcore.h>\n-#include <rte_lcore.h>\n-#include <rte_branch_prediction.h>\n-#include <rte_ring.h>\n-#include <rte_errno.h>\n-#include <rte_string_fns.h>\n-#include <rte_spinlock.h>\n-\n-#include \"rte_mempool.h\"\n-\n-static void\n-get_phys_map(void *va, phys_addr_t pa[], uint32_t pg_num,\n-            uint32_t pg_sz, uint32_t memseg_id)\n-{\n-    uint32_t i;\n-    uint64_t virt_addr, mfn_id;\n-    struct rte_mem_config *mcfg;\n-    uint32_t page_size = getpagesize();\n-\n-    /* get pointer to global configuration */\n-    mcfg = rte_eal_get_configuration()->mem_config;\n-    virt_addr =(uintptr_t) mcfg->memseg[memseg_id].addr;\n-\n-    for (i = 0; i != pg_num; i++) {\n-        mfn_id = ((uintptr_t)va + i * pg_sz - virt_addr) / RTE_PGSIZE_2M;\n-        pa[i] = mcfg->memseg[memseg_id].mfn[mfn_id] * page_size;\n-    }\n-}\n-\n-/* create the mempool for supporting Dom0 */\n-struct rte_mempool *\n-rte_dom0_mempool_create(const char *name, unsigned elt_num, unsigned elt_size,\n-           unsigned cache_size, unsigned private_data_size,\n-           rte_mempool_ctor_t *mp_init, void *mp_init_arg,\n-           rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n-           int socket_id, unsigned flags)\n-{\n-\tstruct rte_mempool *mp = NULL;\n-\tphys_addr_t *pa;\n-\tchar *va;\n-\tsize_t sz;\n-\tuint32_t pg_num, pg_shift, pg_sz, total_size;\n-\tconst struct rte_memzone *mz;\n-\tchar 
mz_name[RTE_MEMZONE_NAMESIZE];\n-\tint mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;\n-\n-\tpg_sz = RTE_PGSIZE_2M;\n-\n-\tpg_shift = rte_bsf32(pg_sz);\n-\ttotal_size = rte_mempool_calc_obj_size(elt_size, flags, NULL);\n-\n-\t/* calc max memory size and max number of pages needed. */\n-\tsz = rte_mempool_xmem_size(elt_num, total_size, pg_shift) +\n-\t\tRTE_PGSIZE_2M;\n-\tpg_num = sz >> pg_shift;\n-\n-\t/* extract physical mappings of the allocated memory. */\n-\tpa = calloc(pg_num, sizeof (*pa));\n-\tif (pa == NULL)\n-\t\treturn mp;\n-\n-\tsnprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_OBJ_NAME, name);\n-\tmz = rte_memzone_reserve(mz_name, sz, socket_id, mz_flags);\n-\tif (mz == NULL) {\n-\t\tfree(pa);\n-\t\treturn mp;\n-\t}\n-\n-\tva = (char *)RTE_ALIGN_CEIL((uintptr_t)mz->addr, RTE_PGSIZE_2M);\n-\t/* extract physical mappings of the allocated memory. */\n-\tget_phys_map(va, pa, pg_num, pg_sz, mz->memseg_id);\n-\n-\tmp = rte_mempool_xmem_create(name, elt_num, elt_size,\n-\t\tcache_size, private_data_size,\n-\t\tmp_init, mp_init_arg,\n-\t\tobj_init, obj_init_arg,\n-\t\tsocket_id, flags, va, pa, pg_num, pg_shift);\n-\n-\tfree(pa);\n-\n-\treturn (mp);\n-}\ndiff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c\ndeleted file mode 100644\nindex 4cf6c25..0000000\n--- a/lib/librte_mempool/rte_mempool.c\n+++ /dev/null\n@@ -1,901 +0,0 @@\n-/*-\n- *   BSD LICENSE\n- *\n- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n- *   All rights reserved.\n- *\n- *   Redistribution and use in source and binary forms, with or without\n- *   modification, are permitted provided that the following conditions\n- *   are met:\n- *\n- *     * Redistributions of source code must retain the above copyright\n- *       notice, this list of conditions and the following disclaimer.\n- *     * Redistributions in binary form must reproduce the above copyright\n- *       notice, this list of conditions and the following disclaimer in\n- *       the documentation and/or other materials provided with the\n- *       distribution.\n- *     * Neither the name of Intel Corporation nor the names of its\n- *       contributors may be used to endorse or promote products derived\n- *       from this software without specific prior written permission.\n- *\n- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n- *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n- *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n- */\n-\n-#include <stdio.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <stdarg.h>\n-#include <unistd.h>\n-#include <inttypes.h>\n-#include <errno.h>\n-#include <sys/queue.h>\n-\n-#include <rte_common.h>\n-#include <rte_log.h>\n-#include <rte_debug.h>\n-#include <rte_memory.h>\n-#include <rte_memzone.h>\n-#include <rte_malloc.h>\n-#include <rte_atomic.h>\n-#include <rte_launch.h>\n-#include <rte_tailq.h>\n-#include <rte_eal.h>\n-#include <rte_eal_memconfig.h>\n-#include <rte_per_lcore.h>\n-#include <rte_lcore.h>\n-#include <rte_branch_prediction.h>\n-#include <rte_ring.h>\n-#include <rte_errno.h>\n-#include <rte_string_fns.h>\n-#include <rte_spinlock.h>\n-\n-#include \"rte_mempool.h\"\n-\n-TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);\n-\n-#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5\n-\n-/*\n- * return the greatest common divisor between a and b (fast algorithm)\n- *\n- */\n-static unsigned get_gcd(unsigned a, unsigned b)\n-{\n-\tunsigned c;\n-\n-\tif (0 == a)\n-\t\treturn b;\n-\tif (0 == b)\n-\t\treturn a;\n-\n-\tif (a < b) {\n-\t\tc = a;\n-\t\ta = b;\n-\t\tb = c;\n-\t}\n-\n-\twhile (b != 0) {\n-\t\tc = a % b;\n-\t\ta = b;\n-\t\tb = c;\n-\t}\n-\n-\treturn a;\n-}\n-\n-/*\n- * Depending on memory configuration, objects addresses are spread\n- * between channels and ranks in RAM: the pool allocator will add\n- * padding between objects. 
This function return the new size of the\n- * object.\n- */\n-static unsigned optimize_object_size(unsigned obj_size)\n-{\n-\tunsigned nrank, nchan;\n-\tunsigned new_obj_size;\n-\n-\t/* get number of channels */\n-\tnchan = rte_memory_get_nchannel();\n-\tif (nchan == 0)\n-\t\tnchan = 1;\n-\n-\tnrank = rte_memory_get_nrank();\n-\tif (nrank == 0)\n-\t\tnrank = 1;\n-\n-\t/* process new object size */\n-\tnew_obj_size = (obj_size + RTE_CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;\n-\twhile (get_gcd(new_obj_size, nrank * nchan) != 1)\n-\t\tnew_obj_size++;\n-\treturn new_obj_size * RTE_CACHE_LINE_SIZE;\n-}\n-\n-static void\n-mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,\n-\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)\n-{\n-\tstruct rte_mempool **mpp;\n-\n-\tobj = (char *)obj + mp->header_size;\n-\n-\t/* set mempool ptr in header */\n-\tmpp = __mempool_from_obj(obj);\n-\t*mpp = mp;\n-\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\t__mempool_write_header_cookie(obj, 1);\n-\t__mempool_write_trailer_cookie(obj);\n-#endif\n-\t/* call the initializer */\n-\tif (obj_init)\n-\t\tobj_init(mp, obj_init_arg, obj, obj_idx);\n-\n-\t/* enqueue in ring */\n-\trte_ring_sp_enqueue(mp->ring, obj);\n-}\n-\n-uint32_t\n-rte_mempool_obj_iter(void *vaddr, uint32_t elt_num, size_t elt_sz, size_t align,\n-\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,\n-\trte_mempool_obj_iter_t obj_iter, void *obj_iter_arg)\n-{\n-\tuint32_t i, j, k;\n-\tuint32_t pgn;\n-\tuintptr_t end, start, va;\n-\tuintptr_t pg_sz;\n-\n-\tpg_sz = (uintptr_t)1 << pg_shift;\n-\tva = (uintptr_t)vaddr;\n-\n-\ti = 0;\n-\tj = 0;\n-\n-\twhile (i != elt_num && j != pg_num) {\n-\n-\t\tstart = RTE_ALIGN_CEIL(va, align);\n-\t\tend = start + elt_sz;\n-\n-\t\tpgn = (end >> pg_shift) - (start >> pg_shift);\n-\t\tpgn += j;\n-\n-\t\t/* do we have enough space left for the next element. 
*/\n-\t\tif (pgn >= pg_num)\n-\t\t\tbreak;\n-\n-\t\tfor (k = j;\n-\t\t\t\tk != pgn &&\n-\t\t\t\tpaddr[k] + pg_sz == paddr[k + 1];\n-\t\t\t\tk++)\n-\t\t\t;\n-\n-\t\t/*\n-\t\t * if next pgn chunks of memory physically continuous,\n-\t\t * use it to create next element.\n-\t\t * otherwise, just skip that chunk unused.\n-\t\t */\n-\t\tif (k == pgn) {\n-\t\t\tif (obj_iter != NULL)\n-\t\t\t\tobj_iter(obj_iter_arg, (void *)start,\n-\t\t\t\t\t(void *)end, i);\n-\t\t\tva = end;\n-\t\t\tj = pgn;\n-\t\t\ti++;\n-\t\t} else {\n-\t\t\tva = RTE_ALIGN_CEIL((va + 1), pg_sz);\n-\t\t\tj++;\n-\t\t}\n-\t}\n-\n-\treturn (i);\n-}\n-\n-/*\n- * Populate  mempool with the objects.\n- */\n-\n-struct mempool_populate_arg {\n-\tstruct rte_mempool     *mp;\n-\trte_mempool_obj_ctor_t *obj_init;\n-\tvoid                   *obj_init_arg;\n-};\n-\n-static void\n-mempool_obj_populate(void *arg, void *start, void *end, uint32_t idx)\n-{\n-\tstruct mempool_populate_arg *pa = arg;\n-\n-\tmempool_add_elem(pa->mp, start, idx, pa->obj_init, pa->obj_init_arg);\n-\tpa->mp->elt_va_end = (uintptr_t)end;\n-}\n-\n-static void\n-mempool_populate(struct rte_mempool *mp, size_t num, size_t align,\n-\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)\n-{\n-\tuint32_t elt_sz;\n-\tstruct mempool_populate_arg arg;\n-\n-\telt_sz = mp->elt_size + mp->header_size + mp->trailer_size;\n-\targ.mp = mp;\n-\targ.obj_init = obj_init;\n-\targ.obj_init_arg = obj_init_arg;\n-\n-\tmp->size = rte_mempool_obj_iter((void *)mp->elt_va_start,\n-\t\tnum, elt_sz, align,\n-\t\tmp->elt_pa, mp->pg_num, mp->pg_shift,\n-\t\tmempool_obj_populate, &arg);\n-}\n-\n-uint32_t\n-rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,\n-\tstruct rte_mempool_objsz *sz)\n-{\n-\tstruct rte_mempool_objsz lsz;\n-\n-\tsz = (sz != NULL) ? sz : &lsz;\n-\n-\t/*\n-\t * In header, we have at least the pointer to the pool, and\n-\t * optionaly a 64 bits cookie.\n-\t */\n-\tsz->header_size = 0;\n-\tsz->header_size += sizeof(struct rte_mempool *); /* ptr to pool */\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\tsz->header_size += sizeof(uint64_t); /* cookie */\n-#endif\n-\tif ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)\n-\t\tsz->header_size = RTE_ALIGN_CEIL(sz->header_size,\n-\t\t\tRTE_CACHE_LINE_SIZE);\n-\n-\t/* trailer contains the cookie in debug mode */\n-\tsz->trailer_size = 0;\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\tsz->trailer_size += sizeof(uint64_t); /* cookie */\n-#endif\n-\t/* element size is 8 bytes-aligned at least */\n-\tsz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));\n-\n-\t/* expand trailer to next cache line */\n-\tif ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {\n-\t\tsz->total_size = sz->header_size + sz->elt_size +\n-\t\t\tsz->trailer_size;\n-\t\tsz->trailer_size += ((RTE_CACHE_LINE_SIZE -\n-\t\t\t\t  (sz->total_size & RTE_CACHE_LINE_MASK)) &\n-\t\t\t\t RTE_CACHE_LINE_MASK);\n-\t}\n-\n-\t/*\n-\t * increase trailer to add padding between objects in order to\n-\t * spread them across memory channels/ranks\n-\t */\n-\tif ((flags & MEMPOOL_F_NO_SPREAD) == 0) {\n-\t\tunsigned new_size;\n-\t\tnew_size = optimize_object_size(sz->header_size + sz->elt_size +\n-\t\t\tsz->trailer_size);\n-\t\tsz->trailer_size = new_size - sz->header_size - sz->elt_size;\n-\t}\n-\n-\tif (! 
rte_eal_has_hugepages()) {\n-\t\t/*\n-\t\t * compute trailer size so that pool elements fit exactly in\n-\t\t * a standard page\n-\t\t */\n-\t\tint page_size = getpagesize();\n-\t\tint new_size = page_size - sz->header_size - sz->elt_size;\n-\t\tif (new_size < 0 || (unsigned int)new_size < sz->trailer_size) {\n-\t\t\tprintf(\"When hugepages are disabled, pool objects \"\n-\t\t\t       \"can't exceed PAGE_SIZE: %d + %d + %d > %d\\n\",\n-\t\t\t       sz->header_size, sz->elt_size, sz->trailer_size,\n-\t\t\t       page_size);\n-\t\t\treturn 0;\n-\t\t}\n-\t\tsz->trailer_size = new_size;\n-\t}\n-\n-\t/* this is the size of an object, including header and trailer */\n-\tsz->total_size = sz->header_size + sz->elt_size + sz->trailer_size;\n-\n-\treturn (sz->total_size);\n-}\n-\n-\n-/*\n- * Calculate maximum amount of memory required to store given number of objects.\n- */\n-size_t\n-rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz, uint32_t pg_shift)\n-{\n-\tsize_t n, pg_num, pg_sz, sz;\n-\n-\tpg_sz = (size_t)1 << pg_shift;\n-\n-\tif ((n = pg_sz / elt_sz) > 0) {\n-\t\tpg_num = (elt_num + n - 1) / n;\n-\t\tsz = pg_num << pg_shift;\n-\t} else {\n-\t\tsz = RTE_ALIGN_CEIL(elt_sz, pg_sz) * elt_num;\n-\t}\n-\n-\treturn (sz);\n-}\n-\n-/*\n- * Calculate how much memory would be actually required with the\n- * given memory footprint to store required number of elements.\n- */\n-static void\n-mempool_lelem_iter(void *arg, __rte_unused void *start, void *end,\n-        __rte_unused uint32_t idx)\n-{\n-        *(uintptr_t *)arg = (uintptr_t)end;\n-}\n-\n-ssize_t\n-rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,\n-\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)\n-{\n-\tuint32_t n;\n-\tuintptr_t va, uv;\n-\tsize_t pg_sz, usz;\n-\n-\tpg_sz = (size_t)1 << pg_shift;\n-\tva = (uintptr_t)vaddr;\n-\tuv = va;\n-\n-\tif ((n = rte_mempool_obj_iter(vaddr, elt_num, elt_sz, 1,\n-\t\t\tpaddr, pg_num, pg_shift, mempool_lelem_iter,\n-\t\t\t&uv)) != elt_num) {\n-\t\treturn (-n);\n-\t}\n-\n-\tuv = RTE_ALIGN_CEIL(uv, pg_sz);\n-\tusz = uv - va;\n-\treturn (usz);\n-}\n-\n-/* create the mempool */\n-struct rte_mempool *\n-rte_mempool_create(const char *name, unsigned n, unsigned elt_size,\n-\t\t   unsigned cache_size, unsigned private_data_size,\n-\t\t   rte_mempool_ctor_t *mp_init, void *mp_init_arg,\n-\t\t   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n-\t\t   int socket_id, unsigned flags)\n-{\n-#ifdef RTE_LIBRTE_XEN_DOM0\n-\treturn (rte_dom0_mempool_create(name, n, elt_size,\n-\t\tcache_size, private_data_size,\n-\t\tmp_init, mp_init_arg,\n-\t\tobj_init, obj_init_arg,\n-\t\tsocket_id, flags));\n-#else\n-\treturn (rte_mempool_xmem_create(name, n, elt_size,\n-\t\tcache_size, private_data_size,\n-\t\tmp_init, mp_init_arg,\n-\t\tobj_init, obj_init_arg,\n-\t\tsocket_id, flags,\n-\t\tNULL, NULL, MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX));\n-#endif\n-}\n-\n-/*\n- * Create the mempool over already allocated chunk of memory.\n- * That external memory buffer can consists of physically disjoint pages.\n- * Setting vaddr to NULL, makes mempool to fallback to original behaviour\n- * and allocate space for mempool and it's elements as one big chunk of\n- * physically continuos memory.\n- * */\n-struct rte_mempool *\n-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,\n-\t\tunsigned cache_size, unsigned private_data_size,\n-\t\trte_mempool_ctor_t *mp_init, void *mp_init_arg,\n-\t\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n-\t\tint socket_id, unsigned 
flags, void *vaddr,\n-\t\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)\n-{\n-\tchar mz_name[RTE_MEMZONE_NAMESIZE];\n-\tchar rg_name[RTE_RING_NAMESIZE];\n-\tstruct rte_mempool *mp = NULL;\n-\tstruct rte_tailq_entry *te;\n-\tstruct rte_ring *r;\n-\tconst struct rte_memzone *mz;\n-\tsize_t mempool_size;\n-\tint mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;\n-\tint rg_flags = 0;\n-\tvoid *obj;\n-\tstruct rte_mempool_objsz objsz;\n-\tvoid *startaddr;\n-\tint page_size = getpagesize();\n-\n-\t/* compilation-time checks */\n-\tRTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-\tRTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-\tRTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-#endif\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\tRTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-\tRTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-#endif\n-\n-\t/* check that we have an initialised tail queue */\n-\tif (RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,\n-\t\t\trte_mempool_list) == NULL) {\n-\t\trte_errno = E_RTE_NO_TAILQ;\n-\t\treturn NULL;\n-\t}\n-\n-\t/* asked cache too big */\n-\tif (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {\n-\t\trte_errno = EINVAL;\n-\t\treturn NULL;\n-\t}\n-\n-\t/* check that we have both VA and PA */\n-\tif (vaddr != NULL && paddr == NULL) {\n-\t\trte_errno = EINVAL;\n-\t\treturn NULL;\n-\t}\n-\n-\t/* Check that pg_num and pg_shift parameters are valid. */\n-\tif (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {\n-\t\trte_errno = EINVAL;\n-\t\treturn NULL;\n-\t}\n-\n-\t/* \"no cache align\" imply \"no spread\" */\n-\tif (flags & MEMPOOL_F_NO_CACHE_ALIGN)\n-\t\tflags |= MEMPOOL_F_NO_SPREAD;\n-\n-\t/* ring flags */\n-\tif (flags & MEMPOOL_F_SP_PUT)\n-\t\trg_flags |= RING_F_SP_ENQ;\n-\tif (flags & MEMPOOL_F_SC_GET)\n-\t\trg_flags |= RING_F_SC_DEQ;\n-\n-\t/* calculate mempool object sizes. */\n-\tif (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {\n-\t\trte_errno = EINVAL;\n-\t\treturn NULL;\n-\t}\n-\n-\trte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);\n-\n-\t/* allocate the ring that will be used to store objects */\n-\t/* Ring functions will return appropriate errors if we are\n-\t * running as a secondary process etc., so no checks made\n-\t * in this function for that condition */\n-\tsnprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);\n-\tr = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);\n-\tif (r == NULL)\n-\t\tgoto exit;\n-\n-\t/*\n-\t * reserve a memory zone for this mempool: private data is\n-\t * cache-aligned\n-\t */\n-\tprivate_data_size = (private_data_size +\n-\t\t\t     RTE_CACHE_LINE_MASK) & (~RTE_CACHE_LINE_MASK);\n-\n-\tif (! 
rte_eal_has_hugepages()) {\n-\t\t/*\n-\t\t * expand private data size to a whole page, so that the\n-\t\t * first pool element will start on a new standard page\n-\t\t */\n-\t\tint head = sizeof(struct rte_mempool);\n-\t\tint new_size = (private_data_size + head) % page_size;\n-\t\tif (new_size) {\n-\t\t\tprivate_data_size += page_size - new_size;\n-\t\t}\n-\t}\n-\n-\t/* try to allocate tailq entry */\n-\tte = rte_zmalloc(\"MEMPOOL_TAILQ_ENTRY\", sizeof(*te), 0);\n-\tif (te == NULL) {\n-\t\tRTE_LOG(ERR, MEMPOOL, \"Cannot allocate tailq entry!\\n\");\n-\t\tgoto exit;\n-\t}\n-\n-\t/*\n-\t * If user provided an external memory buffer, then use it to\n-\t * store mempool objects. Otherwise reserve memzone big enough to\n-\t * hold mempool header and metadata plus mempool objects.\n-\t */\n-\tmempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;\n-\tif (vaddr == NULL)\n-\t\tmempool_size += (size_t)objsz.total_size * n;\n-\n-\tif (! rte_eal_has_hugepages()) {\n-\t\t/*\n-\t\t * we want the memory pool to start on a page boundary,\n-\t\t * because pool elements crossing page boundaries would\n-\t\t * result in discontiguous physical addresses\n-\t\t */\n-\t\tmempool_size += page_size;\n-\t}\n-\n-\tsnprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);\n-\n-\tmz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);\n-\n-\t/*\n-\t * no more memory: in this case we loose previously reserved\n-\t * space for the as we cannot free it\n-\t */\n-\tif (mz == NULL) {\n-\t\trte_free(te);\n-\t\tgoto exit;\n-\t}\n-\n-\tif (rte_eal_has_hugepages()) {\n-\t\tstartaddr = (void*)mz->addr;\n-\t} else {\n-\t\t/* align memory pool start address on a page boundary */\n-\t\tunsigned long addr = (unsigned long)mz->addr;\n-\t\tif (addr & (page_size - 1)) {\n-\t\t\taddr += page_size;\n-\t\t\taddr &= ~(page_size - 1);\n-\t\t}\n-\t\tstartaddr = (void*)addr;\n-\t}\n-\n-\t/* init the mempool structure */\n-\tmp = startaddr;\n-\tmemset(mp, 0, sizeof(*mp));\n-\tsnprintf(mp->name, sizeof(mp->name), \"%s\", name);\n-\tmp->phys_addr = mz->phys_addr;\n-\tmp->ring = r;\n-\tmp->size = n;\n-\tmp->flags = flags;\n-\tmp->elt_size = objsz.elt_size;\n-\tmp->header_size = objsz.header_size;\n-\tmp->trailer_size = objsz.trailer_size;\n-\tmp->cache_size = cache_size;\n-\tmp->cache_flushthresh = (uint32_t)\n-\t\t(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);\n-\tmp->private_data_size = private_data_size;\n-\n-\t/* calculate address of the first element for continuous mempool. */\n-\tobj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +\n-\t\tprivate_data_size;\n-\n-\t/* populate address translation fields. */\n-\tmp->pg_num = pg_num;\n-\tmp->pg_shift = pg_shift;\n-\tmp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));\n-\n-\t/* mempool elements allocated together with mempool */\n-\tif (vaddr == NULL) {\n-\t\tmp->elt_va_start = (uintptr_t)obj;\n-\t\tmp->elt_pa[0] = mp->phys_addr +\n-\t\t\t(mp->elt_va_start - (uintptr_t)mp);\n-\n-\t/* mempool elements in a separate chunk of memory. 
*/\n-\t} else {\n-\t\tmp->elt_va_start = (uintptr_t)vaddr;\n-\t\tmemcpy(mp->elt_pa, paddr, sizeof (mp->elt_pa[0]) * pg_num);\n-\t}\n-\n-\tmp->elt_va_end = mp->elt_va_start;\n-\n-\t/* call the initializer */\n-\tif (mp_init)\n-\t\tmp_init(mp, mp_init_arg);\n-\n-\tmempool_populate(mp, n, 1, obj_init, obj_init_arg);\n-\n-\tte->data = (void *) mp;\n-\n-\tRTE_EAL_TAILQ_INSERT_TAIL(RTE_TAILQ_MEMPOOL, rte_mempool_list, te);\n-\n-exit:\n-\trte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n-\n-\treturn mp;\n-}\n-\n-/* Return the number of entries in the mempool */\n-unsigned\n-rte_mempool_count(const struct rte_mempool *mp)\n-{\n-\tunsigned count;\n-\n-\tcount = rte_ring_count(mp->ring);\n-\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-\t{\n-\t\tunsigned lcore_id;\n-\t\tif (mp->cache_size == 0)\n-\t\t\treturn count;\n-\n-\t\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)\n-\t\t\tcount += mp->local_cache[lcore_id].len;\n-\t}\n-#endif\n-\n-\t/*\n-\t * due to race condition (access to len is not locked), the\n-\t * total can be greater than size... so fix the result\n-\t */\n-\tif (count > mp->size)\n-\t\treturn mp->size;\n-\treturn count;\n-}\n-\n-/* dump the cache status */\n-static unsigned\n-rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)\n-{\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-\tunsigned lcore_id;\n-\tunsigned count = 0;\n-\tunsigned cache_count;\n-\n-\tfprintf(f, \"  cache infos:\\n\");\n-\tfprintf(f, \"    cache_size=%\"PRIu32\"\\n\", mp->cache_size);\n-\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n-\t\tcache_count = mp->local_cache[lcore_id].len;\n-\t\tfprintf(f, \"    cache_count[%u]=%u\\n\", lcore_id, cache_count);\n-\t\tcount += cache_count;\n-\t}\n-\tfprintf(f, \"    total_cache_count=%u\\n\", count);\n-\treturn count;\n-#else\n-\tRTE_SET_USED(mp);\n-\tfprintf(f, \"  cache disabled\\n\");\n-\treturn 0;\n-#endif\n-}\n-\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-/* check cookies before and after objects */\n-#ifndef __INTEL_COMPILER\n-#pragma GCC diagnostic ignored \"-Wcast-qual\"\n-#endif\n-\n-struct mempool_audit_arg {\n-\tconst struct rte_mempool *mp;\n-\tuintptr_t obj_end;\n-\tuint32_t obj_num;\n-};\n-\n-static void\n-mempool_obj_audit(void *arg, void *start, void *end, uint32_t idx)\n-{\n-\tstruct mempool_audit_arg *pa = arg;\n-\tvoid *obj;\n-\n-\tobj = (char *)start + pa->mp->header_size;\n-\tpa->obj_end = (uintptr_t)end;\n-\tpa->obj_num = idx + 1;\n-\t__mempool_check_cookies(pa->mp, &obj, 1, 2);\n-}\n-\n-static void\n-mempool_audit_cookies(const struct rte_mempool *mp)\n-{\n-\tuint32_t elt_sz, num;\n-\tstruct mempool_audit_arg arg;\n-\n-\telt_sz = mp->elt_size + mp->header_size + mp->trailer_size;\n-\n-\targ.mp = mp;\n-\targ.obj_end = mp->elt_va_start;\n-\targ.obj_num = 0;\n-\n-\tnum = rte_mempool_obj_iter((void *)mp->elt_va_start,\n-\t\tmp->size, elt_sz, 1,\n-\t\tmp->elt_pa, mp->pg_num, mp->pg_shift,\n-\t\tmempool_obj_audit, &arg);\n-\n-\tif (num != mp->size) {\n-\t\t\trte_panic(\"rte_mempool_obj_iter(mempool=%p, size=%u) \"\n-\t\t\t\"iterated only over %u elements\\n\",\n-\t\t\tmp, mp->size, num);\n-\t} else if (arg.obj_end != mp->elt_va_end || arg.obj_num != mp->size) {\n-\t\t\trte_panic(\"rte_mempool_obj_iter(mempool=%p, size=%u) \"\n-\t\t\t\"last callback va_end: %#tx (%#tx expeceted), \"\n-\t\t\t\"num of objects: %u (%u expected)\\n\",\n-\t\t\tmp, mp->size,\n-\t\t\targ.obj_end, mp->elt_va_end,\n-\t\t\targ.obj_num, mp->size);\n-\t}\n-}\n-\n-#ifndef __INTEL_COMPILER\n-#pragma GCC diagnostic error \"-Wcast-qual\"\n-#endif\n-#else\n-#define 
mempool_audit_cookies(mp) do {} while(0)\n-#endif\n-\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-/* check cookies before and after objects */\n-static void\n-mempool_audit_cache(const struct rte_mempool *mp)\n-{\n-\t/* check cache size consistency */\n-\tunsigned lcore_id;\n-\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n-\t\tif (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {\n-\t\t\tRTE_LOG(CRIT, MEMPOOL, \"badness on cache[%u]\\n\",\n-\t\t\t\tlcore_id);\n-\t\t\trte_panic(\"MEMPOOL: invalid cache len\\n\");\n-\t\t}\n-\t}\n-}\n-#else\n-#define mempool_audit_cache(mp) do {} while(0)\n-#endif\n-\n-\n-/* check the consistency of mempool (size, cookies, ...) */\n-void\n-rte_mempool_audit(const struct rte_mempool *mp)\n-{\n-\tmempool_audit_cache(mp);\n-\tmempool_audit_cookies(mp);\n-\n-\t/* For case where mempool DEBUG is not set, and cache size is 0 */\n-\tRTE_SET_USED(mp);\n-}\n-\n-/* dump the status of the mempool on the console */\n-void\n-rte_mempool_dump(FILE *f, const struct rte_mempool *mp)\n-{\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\tstruct rte_mempool_debug_stats sum;\n-\tunsigned lcore_id;\n-#endif\n-\tunsigned common_count;\n-\tunsigned cache_count;\n-\n-\tRTE_VERIFY(f != NULL);\n-\tRTE_VERIFY(mp != NULL);\n-\n-\tfprintf(f, \"mempool <%s>@%p\\n\", mp->name, mp);\n-\tfprintf(f, \"  flags=%x\\n\", mp->flags);\n-\tfprintf(f, \"  ring=<%s>@%p\\n\", mp->ring->name, mp->ring);\n-\tfprintf(f, \"  phys_addr=0x%\" PRIx64 \"\\n\", mp->phys_addr);\n-\tfprintf(f, \"  size=%\"PRIu32\"\\n\", mp->size);\n-\tfprintf(f, \"  header_size=%\"PRIu32\"\\n\", mp->header_size);\n-\tfprintf(f, \"  elt_size=%\"PRIu32\"\\n\", mp->elt_size);\n-\tfprintf(f, \"  trailer_size=%\"PRIu32\"\\n\", mp->trailer_size);\n-\tfprintf(f, \"  total_obj_size=%\"PRIu32\"\\n\",\n-\t       mp->header_size + mp->elt_size + mp->trailer_size);\n-\n-\tfprintf(f, \"  private_data_size=%\"PRIu32\"\\n\", mp->private_data_size);\n-\tfprintf(f, \"  pg_num=%\"PRIu32\"\\n\", mp->pg_num);\n-\tfprintf(f, \"  pg_shift=%\"PRIu32\"\\n\", mp->pg_shift);\n-\tfprintf(f, \"  pg_mask=%#tx\\n\", mp->pg_mask);\n-\tfprintf(f, \"  elt_va_start=%#tx\\n\", mp->elt_va_start);\n-\tfprintf(f, \"  elt_va_end=%#tx\\n\", mp->elt_va_end);\n-\tfprintf(f, \"  elt_pa[0]=0x%\" PRIx64 \"\\n\", mp->elt_pa[0]);\n-\n-\tif (mp->size != 0)\n-\t\tfprintf(f, \"  avg bytes/object=%#Lf\\n\",\n-\t\t\t(long double)(mp->elt_va_end - mp->elt_va_start) /\n-\t\t\tmp->size);\n-\n-\tcache_count = rte_mempool_dump_cache(f, mp);\n-\tcommon_count = rte_ring_count(mp->ring);\n-\tif ((cache_count + common_count) > mp->size)\n-\t\tcommon_count = mp->size - cache_count;\n-\tfprintf(f, \"  common_pool_count=%u\\n\", common_count);\n-\n-\t/* sum and dump statistics */\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\tmemset(&sum, 0, sizeof(sum));\n-\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n-\t\tsum.put_bulk += mp->stats[lcore_id].put_bulk;\n-\t\tsum.put_objs += mp->stats[lcore_id].put_objs;\n-\t\tsum.get_success_bulk += mp->stats[lcore_id].get_success_bulk;\n-\t\tsum.get_success_objs += mp->stats[lcore_id].get_success_objs;\n-\t\tsum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;\n-\t\tsum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;\n-\t}\n-\tfprintf(f, \"  stats:\\n\");\n-\tfprintf(f, \"    put_bulk=%\"PRIu64\"\\n\", sum.put_bulk);\n-\tfprintf(f, \"    put_objs=%\"PRIu64\"\\n\", sum.put_objs);\n-\tfprintf(f, \"    get_success_bulk=%\"PRIu64\"\\n\", sum.get_success_bulk);\n-\tfprintf(f, \"    get_success_objs=%\"PRIu64\"\\n\", 
sum.get_success_objs);\n-\tfprintf(f, \"    get_fail_bulk=%\"PRIu64\"\\n\", sum.get_fail_bulk);\n-\tfprintf(f, \"    get_fail_objs=%\"PRIu64\"\\n\", sum.get_fail_objs);\n-#else\n-\tfprintf(f, \"  no statistics available\\n\");\n-#endif\n-\n-\trte_mempool_audit(mp);\n-}\n-\n-/* dump the status of all mempools on the console */\n-void\n-rte_mempool_list_dump(FILE *f)\n-{\n-\tconst struct rte_mempool *mp = NULL;\n-\tstruct rte_tailq_entry *te;\n-\tstruct rte_mempool_list *mempool_list;\n-\n-\tif ((mempool_list =\n-\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {\n-\t\trte_errno = E_RTE_NO_TAILQ;\n-\t\treturn;\n-\t}\n-\n-\trte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);\n-\n-\tTAILQ_FOREACH(te, mempool_list, next) {\n-\t\tmp = (struct rte_mempool *) te->data;\n-\t\trte_mempool_dump(f, mp);\n-\t}\n-\n-\trte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n-}\n-\n-/* search a mempool from its name */\n-struct rte_mempool *\n-rte_mempool_lookup(const char *name)\n-{\n-\tstruct rte_mempool *mp = NULL;\n-\tstruct rte_tailq_entry *te;\n-\tstruct rte_mempool_list *mempool_list;\n-\n-\tif ((mempool_list =\n-\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {\n-\t\trte_errno = E_RTE_NO_TAILQ;\n-\t\treturn NULL;\n-\t}\n-\n-\trte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);\n-\n-\tTAILQ_FOREACH(te, mempool_list, next) {\n-\t\tmp = (struct rte_mempool *) te->data;\n-\t\tif (strncmp(name, mp->name, RTE_MEMPOOL_NAMESIZE) == 0)\n-\t\t\tbreak;\n-\t}\n-\n-\trte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n-\n-\tif (te == NULL) {\n-\t\trte_errno = ENOENT;\n-\t\treturn NULL;\n-\t}\n-\n-\treturn mp;\n-}\n-\n-void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),\n-\t\t      void *arg)\n-{\n-\tstruct rte_tailq_entry *te = NULL;\n-\tstruct rte_mempool_list *mempool_list;\n-\n-\tif ((mempool_list =\n-\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {\n-\t\trte_errno = E_RTE_NO_TAILQ;\n-\t\treturn;\n-\t}\n-\n-\trte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);\n-\n-\tTAILQ_FOREACH(te, mempool_list, next) {\n-\t\t(*func)((struct rte_mempool *) te->data, arg);\n-\t}\n-\n-\trte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);\n-}\ndiff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h\ndeleted file mode 100644\nindex 3314651..0000000\n--- a/lib/librte_mempool/rte_mempool.h\n+++ /dev/null\n@@ -1,1392 +0,0 @@\n-/*-\n- *   BSD LICENSE\n- *\n- *   Copyright(c) 2010-2014 Intel Corporation. 
All rights reserved.\n- *   All rights reserved.\n- *\n- *   Redistribution and use in source and binary forms, with or without\n- *   modification, are permitted provided that the following conditions\n- *   are met:\n- *\n- *     * Redistributions of source code must retain the above copyright\n- *       notice, this list of conditions and the following disclaimer.\n- *     * Redistributions in binary form must reproduce the above copyright\n- *       notice, this list of conditions and the following disclaimer in\n- *       the documentation and/or other materials provided with the\n- *       distribution.\n- *     * Neither the name of Intel Corporation nor the names of its\n- *       contributors may be used to endorse or promote products derived\n- *       from this software without specific prior written permission.\n- *\n- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n- *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n- */\n-\n-#ifndef _RTE_MEMPOOL_H_\n-#define _RTE_MEMPOOL_H_\n-\n-/**\n- * @file\n- * RTE Mempool.\n- *\n- * A memory pool is an allocator of fixed-size object. It is\n- * identified by its name, and uses a ring to store free objects. It\n- * provides some other optional services, like a per-core object\n- * cache, and an alignment helper to ensure that objects are padded\n- * to spread them equally on all RAM channels, ranks, and so on.\n- *\n- * Objects owned by a mempool should never be added in another\n- * mempool. When an object is freed using rte_mempool_put() or\n- * equivalent, the object data is not modified; the user can save some\n- * meta-data in the object data and retrieve them when allocating a\n- * new object.\n- *\n- * Note: the mempool implementation is not preemptable. A lcore must\n- * not be interrupted by another task that uses the same mempool\n- * (because it uses a ring which is not preemptable). Also, mempool\n- * functions must not be used outside the DPDK environment: for\n- * example, in linuxapp environment, a thread that is not created by\n- * the EAL must not use mempools. This is due to the per-lcore cache\n- * that won't work as rte_lcore_id() will not return a correct value.\n- */\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <stdint.h>\n-#include <errno.h>\n-#include <inttypes.h>\n-#include <sys/queue.h>\n-\n-#include <rte_log.h>\n-#include <rte_debug.h>\n-#include <rte_lcore.h>\n-#include <rte_memory.h>\n-#include <rte_branch_prediction.h>\n-#include <rte_ring.h>\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#define RTE_MEMPOOL_HEADER_COOKIE1  0xbadbadbadadd2e55ULL /**< Header cookie. */\n-#define RTE_MEMPOOL_HEADER_COOKIE2  0xf2eef2eedadd2e55ULL /**< Header cookie. 
*/\n-#define RTE_MEMPOOL_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/\n-\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-/**\n- * A structure that stores the mempool statistics (per-lcore).\n- */\n-struct rte_mempool_debug_stats {\n-\tuint64_t put_bulk;         /**< Number of puts. */\n-\tuint64_t put_objs;         /**< Number of objects successfully put. */\n-\tuint64_t get_success_bulk; /**< Successful allocation number. */\n-\tuint64_t get_success_objs; /**< Objects successfully allocated. */\n-\tuint64_t get_fail_bulk;    /**< Failed allocation number. */\n-\tuint64_t get_fail_objs;    /**< Objects that failed to be allocated. */\n-} __rte_cache_aligned;\n-#endif\n-\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-/**\n- * A structure that stores a per-core object cache.\n- */\n-struct rte_mempool_cache {\n-\tunsigned len; /**< Cache len */\n-\t/*\n-\t * Cache is allocated to this size to allow it to overflow in certain\n-\t * cases to avoid needless emptying of cache.\n-\t */\n-\tvoid *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */\n-} __rte_cache_aligned;\n-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n-\n-struct rte_mempool_objsz {\n-\tuint32_t elt_size;     /**< Size of an element. */\n-\tuint32_t header_size;  /**< Size of header (before elt). */\n-\tuint32_t trailer_size; /**< Size of trailer (after elt). */\n-\tuint32_t total_size;\n-\t/**< Total size of an object (header + elt + trailer). */\n-};\n-\n-#define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool. */\n-#define RTE_MEMPOOL_MZ_PREFIX \"MP_\"\n-\n-/* \"MP_<name>\" */\n-#define\tRTE_MEMPOOL_MZ_FORMAT\tRTE_MEMPOOL_MZ_PREFIX \"%s\"\n-\n-#ifdef RTE_LIBRTE_XEN_DOM0\n-\n-/* \"<name>_MP_elt\" */\n-#define\tRTE_MEMPOOL_OBJ_NAME\t\"%s_\" RTE_MEMPOOL_MZ_PREFIX \"elt\"\n-\n-#else\n-\n-#define\tRTE_MEMPOOL_OBJ_NAME\tRTE_MEMPOOL_MZ_FORMAT\n-\n-#endif /* RTE_LIBRTE_XEN_DOM0 */\n-\n-#define\tMEMPOOL_PG_SHIFT_MAX\t(sizeof(uintptr_t) * CHAR_BIT - 1)\n-\n-/** Mempool over one chunk of physically continuous memory */\n-#define\tMEMPOOL_PG_NUM_DEFAULT\t1\n-\n-/**\n- * The RTE mempool structure.\n- */\n-struct rte_mempool {\n-\tchar name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */\n-\tstruct rte_ring *ring;           /**< Ring to store objects. */\n-\tphys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */\n-\tint flags;                       /**< Flags of the mempool. */\n-\tuint32_t size;                   /**< Size of the mempool. */\n-\tuint32_t cache_size;             /**< Size of per-lcore local cache. */\n-\tuint32_t cache_flushthresh;\n-\t/**< Threshold before we flush excess elements. */\n-\n-\tuint32_t elt_size;               /**< Size of an element. */\n-\tuint32_t header_size;            /**< Size of header (before elt). */\n-\tuint32_t trailer_size;           /**< Size of trailer (after elt). */\n-\n-\tunsigned private_data_size;      /**< Size of private data. */\n-\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-\t/** Per-lcore local cache. */\n-\tstruct rte_mempool_cache local_cache[RTE_MAX_LCORE];\n-#endif\n-\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\t/** Per-lcore statistics. */\n-\tstruct rte_mempool_debug_stats stats[RTE_MAX_LCORE];\n-#endif\n-\n-\t/* Address translation support, starts from next cache line. */\n-\n-\t/** Number of elements in the elt_pa array. */\n-\tuint32_t    pg_num __rte_cache_aligned;\n-\tuint32_t    pg_shift;     /**< LOG2 of the physical pages. */\n-\tuintptr_t   pg_mask;      /**< physical page mask value. 
*/\n-\tuintptr_t   elt_va_start;\n-\t/**< Virtual address of the first mempool object. */\n-\tuintptr_t   elt_va_end;\n-\t/**< Virtual address of the <size + 1> mempool object. */\n-\tphys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];\n-\t/**< Array of physical pages addresses for the mempool objects buffer. */\n-\n-}  __rte_cache_aligned;\n-\n-#define MEMPOOL_F_NO_SPREAD      0x0001 /**< Do not spread in memory. */\n-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/\n-#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is \"single-producer\".*/\n-#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is \"single-consumer\".*/\n-\n-/**\n- * @internal When debug is enabled, store some statistics.\n- * @param mp\n- *   Pointer to the memory pool.\n- * @param name\n- *   Name of the statistics field to increment in the memory pool.\n- * @param n\n- *   Number to add to the object-oriented statistics.\n- */\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-#define __MEMPOOL_STAT_ADD(mp, name, n) do {\t\t\t\\\n-\t\tunsigned __lcore_id = rte_lcore_id();\t\t\\\n-\t\tmp->stats[__lcore_id].name##_objs += n;\t\t\\\n-\t\tmp->stats[__lcore_id].name##_bulk += 1;\t\t\\\n-\t} while(0)\n-#else\n-#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)\n-#endif\n-\n-/**\n- * Calculates size of the mempool header.\n- * @param mp\n- *   Pointer to the memory pool.\n- * @param pgn\n- *   Number of page used to store mempool objects.\n- */\n-#define\tMEMPOOL_HEADER_SIZE(mp, pgn)\t(sizeof(*(mp)) + \\\n-\tRTE_ALIGN_CEIL(((pgn) - RTE_DIM((mp)->elt_pa)) * \\\n-\tsizeof ((mp)->elt_pa[0]), RTE_CACHE_LINE_SIZE))\n-\n-/**\n- * Returns TRUE if whole mempool is allocated in one contiguous block of memory.\n- */\n-#define\tMEMPOOL_IS_CONTIG(mp)                      \\\n-\t((mp)->pg_num == MEMPOOL_PG_NUM_DEFAULT && \\\n-\t(mp)->phys_addr == (mp)->elt_pa[0])\n-\n-/**\n- * @internal Get a pointer to a mempool pointer in the object header.\n- * @param obj\n- *   Pointer to object.\n- * @return\n- *   The pointer to the mempool from which the object was allocated.\n- */\n-static inline struct rte_mempool **__mempool_from_obj(void *obj)\n-{\n-\tstruct rte_mempool **mpp;\n-\tunsigned off;\n-\n-\toff = sizeof(struct rte_mempool *);\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\toff += sizeof(uint64_t);\n-#endif\n-\tmpp = (struct rte_mempool **)((char *)obj - off);\n-\treturn mpp;\n-}\n-\n-/**\n- * Return a pointer to the mempool owning this object.\n- *\n- * @param obj\n- *   An object that is owned by a pool. 
If this is not the case,\n- *   the behavior is undefined.\n- * @return\n- *   A pointer to the mempool structure.\n- */\n-static inline const struct rte_mempool *rte_mempool_from_obj(void *obj)\n-{\n-\tstruct rte_mempool * const *mpp;\n-\tmpp = __mempool_from_obj(obj);\n-\treturn *mpp;\n-}\n-\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-/* get header cookie value */\n-static inline uint64_t __mempool_read_header_cookie(const void *obj)\n-{\n-\treturn *(const uint64_t *)((const char *)obj - sizeof(uint64_t));\n-}\n-\n-/* get trailer cookie value */\n-static inline uint64_t __mempool_read_trailer_cookie(void *obj)\n-{\n-\tstruct rte_mempool **mpp = __mempool_from_obj(obj);\n-\treturn *(uint64_t *)((char *)obj + (*mpp)->elt_size);\n-}\n-\n-/* write header cookie value */\n-static inline void __mempool_write_header_cookie(void *obj, int free)\n-{\n-\tuint64_t *cookie_p;\n-\tcookie_p = (uint64_t *)((char *)obj - sizeof(uint64_t));\n-\tif (free == 0)\n-\t\t*cookie_p = RTE_MEMPOOL_HEADER_COOKIE1;\n-\telse\n-\t\t*cookie_p = RTE_MEMPOOL_HEADER_COOKIE2;\n-\n-}\n-\n-/* write trailer cookie value */\n-static inline void __mempool_write_trailer_cookie(void *obj)\n-{\n-\tuint64_t *cookie_p;\n-\tstruct rte_mempool **mpp = __mempool_from_obj(obj);\n-\tcookie_p = (uint64_t *)((char *)obj + (*mpp)->elt_size);\n-\t*cookie_p = RTE_MEMPOOL_TRAILER_COOKIE;\n-}\n-#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */\n-\n-/**\n- * @internal Check and update cookies or panic.\n- *\n- * @param mp\n- *   Pointer to the memory pool.\n- * @param obj_table_const\n- *   Pointer to a table of void * pointers (objects).\n- * @param n\n- *   Index of object in object table.\n- * @param free\n- *   - 0: object is supposed to be allocated, mark it as free\n- *   - 1: object is supposed to be free, mark it as allocated\n- *   - 2: just check that cookie is valid (free or allocated)\n- */\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-#ifndef __INTEL_COMPILER\n-#pragma GCC diagnostic ignored \"-Wcast-qual\"\n-#endif\n-static inline void __mempool_check_cookies(const struct rte_mempool *mp,\n-\t\t\t\t\t   void * const *obj_table_const,\n-\t\t\t\t\t   unsigned n, int free)\n-{\n-\tuint64_t cookie;\n-\tvoid *tmp;\n-\tvoid *obj;\n-\tvoid **obj_table;\n-\n-\t/* Force to drop the \"const\" attribute. 
This is done only when\n-\t * DEBUG is enabled */\n-\ttmp = (void *) obj_table_const;\n-\tobj_table = (void **) tmp;\n-\n-\twhile (n--) {\n-\t\tobj = obj_table[n];\n-\n-\t\tif (rte_mempool_from_obj(obj) != mp)\n-\t\t\trte_panic(\"MEMPOOL: object is owned by another \"\n-\t\t\t\t  \"mempool\\n\");\n-\n-\t\tcookie = __mempool_read_header_cookie(obj);\n-\n-\t\tif (free == 0) {\n-\t\t\tif (cookie != RTE_MEMPOOL_HEADER_COOKIE1) {\n-\t\t\t\trte_log_set_history(0);\n-\t\t\t\tRTE_LOG(CRIT, MEMPOOL,\n-\t\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n-\t\t\t\t\tobj, mp, cookie);\n-\t\t\t\trte_panic(\"MEMPOOL: bad header cookie (put)\\n\");\n-\t\t\t}\n-\t\t\t__mempool_write_header_cookie(obj, 1);\n-\t\t}\n-\t\telse if (free == 1) {\n-\t\t\tif (cookie != RTE_MEMPOOL_HEADER_COOKIE2) {\n-\t\t\t\trte_log_set_history(0);\n-\t\t\t\tRTE_LOG(CRIT, MEMPOOL,\n-\t\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n-\t\t\t\t\tobj, mp, cookie);\n-\t\t\t\trte_panic(\"MEMPOOL: bad header cookie (get)\\n\");\n-\t\t\t}\n-\t\t\t__mempool_write_header_cookie(obj, 0);\n-\t\t}\n-\t\telse if (free == 2) {\n-\t\t\tif (cookie != RTE_MEMPOOL_HEADER_COOKIE1 &&\n-\t\t\t    cookie != RTE_MEMPOOL_HEADER_COOKIE2) {\n-\t\t\t\trte_log_set_history(0);\n-\t\t\t\tRTE_LOG(CRIT, MEMPOOL,\n-\t\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n-\t\t\t\t\tobj, mp, cookie);\n-\t\t\t\trte_panic(\"MEMPOOL: bad header cookie (audit)\\n\");\n-\t\t\t}\n-\t\t}\n-\t\tcookie = __mempool_read_trailer_cookie(obj);\n-\t\tif (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {\n-\t\t\trte_log_set_history(0);\n-\t\t\tRTE_LOG(CRIT, MEMPOOL,\n-\t\t\t\t\"obj=%p, mempool=%p, cookie=%\"PRIx64\"\\n\",\n-\t\t\t\tobj, mp, cookie);\n-\t\t\trte_panic(\"MEMPOOL: bad trailer cookie\\n\");\n-\t\t}\n-\t}\n-}\n-#ifndef __INTEL_COMPILER\n-#pragma GCC diagnostic error \"-Wcast-qual\"\n-#endif\n-#else\n-#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)\n-#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */\n-\n-/**\n- * An mempool's object iterator callback function.\n- */\n-typedef void (*rte_mempool_obj_iter_t)(void * /*obj_iter_arg*/,\n-\tvoid * /*obj_start*/,\n-\tvoid * /*obj_end*/,\n-\tuint32_t /*obj_index */);\n-\n-/*\n- * Iterates across objects of the given size and alignment in the\n- * provided chunk of memory. 
The given memory buffer can consist of\n- * disjoint physical pages.\n- * For each object calls the provided callback (if any).\n- * Used to populate mempool, walk through all elements of the mempool,\n- * estimate how many elements of the given size could be created in the given\n- * memory buffer.\n- * @param vaddr\n- *   Virtual address of the memory buffer.\n- * @param elt_num\n- *   Maximum number of objects to iterate through.\n- * @param elt_sz\n- *   Size of each object.\n- * @param paddr\n- *   Array of phyiscall addresses of the pages that comprises given memory\n- *   buffer.\n- * @param pg_num\n- *   Number of elements in the paddr array.\n- * @param pg_shift\n- *   LOG2 of the physical pages size.\n- * @param obj_iter\n- *   Object iterator callback function (could be NULL).\n- * @param obj_iter_arg\n- *   User defined Prameter for the object iterator callback function.\n- *\n- * @return\n- *   Number of objects iterated through.\n- */\n-\n-uint32_t rte_mempool_obj_iter(void *vaddr,\n-\tuint32_t elt_num, size_t elt_sz, size_t align,\n-\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,\n-\trte_mempool_obj_iter_t obj_iter, void *obj_iter_arg);\n-\n-/**\n- * An object constructor callback function for mempool.\n- *\n- * Arguments are the mempool, the opaque pointer given by the user in\n- * rte_mempool_create(), the pointer to the element and the index of\n- * the element in the pool.\n- */\n-typedef void (rte_mempool_obj_ctor_t)(struct rte_mempool *, void *,\n-\t\t\t\t      void *, unsigned);\n-\n-/**\n- * A mempool constructor callback function.\n- *\n- * Arguments are the mempool and the opaque pointer given by the user in\n- * rte_mempool_create().\n- */\n-typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);\n-\n-/**\n- * Creates a new mempool named *name* in memory.\n- *\n- * This function uses ``memzone_reserve()`` to allocate memory. The\n- * pool contains n elements of elt_size. Its size is set to n.\n- * All elements of the mempool are allocated together with the mempool header,\n- * in one physically continuous chunk of memory.\n- *\n- * @param name\n- *   The name of the mempool.\n- * @param n\n- *   The number of elements in the mempool. The optimum size (in terms of\n- *   memory usage) for a mempool is when n is a power of two minus one:\n- *   n = (2^q - 1).\n- * @param elt_size\n- *   The size of each element.\n- * @param cache_size\n- *   If cache_size is non-zero, the rte_mempool library will try to\n- *   limit the accesses to the common lockless pool, by maintaining a\n- *   per-lcore object cache. This argument must be lower or equal to\n- *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose\n- *   cache_size to have \"n modulo cache_size == 0\": if this is\n- *   not the case, some elements will always stay in the pool and will\n- *   never be used. The access to the per-lcore table is of course\n- *   faster than the multi-producer/consumer pool. The cache can be\n- *   disabled if the cache_size argument is set to 0; it can be useful to\n- *   avoid losing objects in cache. Note that even if not used, the\n- *   memory space for cache is always reserved in a mempool structure,\n- *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.\n- * @param private_data_size\n- *   The size of the private data appended after the mempool\n- *   structure. 
This is useful for storing some private data after the\n- *   mempool structure, as is done for rte_mbuf_pool for example.\n- * @param mp_init\n- *   A function pointer that is called for initialization of the pool,\n- *   before object initialization. The user can initialize the private\n- *   data in this function if needed. This parameter can be NULL if\n- *   not needed.\n- * @param mp_init_arg\n- *   An opaque pointer to data that can be used in the mempool\n- *   constructor function.\n- * @param obj_init\n- *   A function pointer that is called for each object at\n- *   initialization of the pool. The user can set some meta data in\n- *   objects if needed. This parameter can be NULL if not needed.\n- *   The obj_init() function takes the mempool pointer, the init_arg,\n- *   the object pointer and the object number as parameters.\n- * @param obj_init_arg\n- *   An opaque pointer to data that can be used as an argument for\n- *   each call to the object constructor function.\n- * @param socket_id\n- *   The *socket_id* argument is the socket identifier in the case of\n- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n- *   constraint for the reserved zone.\n- * @param flags\n- *   The *flags* arguments is an OR of following flags:\n- *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread\n- *     between channels in RAM: the pool allocator will add padding\n- *     between objects depending on the hardware configuration. See\n- *     Memory alignment constraints for details. If this flag is set,\n- *     the allocator will just align them to a cache line.\n- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are\n- *     cache-aligned. This flag removes this constraint, and no\n- *     padding will be present between objects. This flag implies\n- *     MEMPOOL_F_NO_SPREAD.\n- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior\n- *     when using rte_mempool_put() or rte_mempool_put_bulk() is\n- *     \"single-producer\". Otherwise, it is \"multi-producers\".\n- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior\n- *     when using rte_mempool_get() or rte_mempool_get_bulk() is\n- *     \"single-consumer\". Otherwise, it is \"multi-consumers\".\n- * @return\n- *   The pointer to the new allocated mempool, on success. NULL on error\n- *   with rte_errno set appropriately. Possible rte_errno values include:\n- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n- *    - E_RTE_SECONDARY - function was called from a secondary process instance\n- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list\n- *    - EINVAL - cache size provided is too large\n- *    - ENOSPC - the maximum number of memzones has already been allocated\n- *    - EEXIST - a memzone with the same name already exists\n- *    - ENOMEM - no appropriate memory area found in which to create memzone\n- */\n-struct rte_mempool *\n-rte_mempool_create(const char *name, unsigned n, unsigned elt_size,\n-\t\t   unsigned cache_size, unsigned private_data_size,\n-\t\t   rte_mempool_ctor_t *mp_init, void *mp_init_arg,\n-\t\t   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n-\t\t   int socket_id, unsigned flags);\n-\n-/**\n- * Creates a new mempool named *name* in memory.\n- *\n- * This function uses ``memzone_reserve()`` to allocate memory. The\n- * pool contains n elements of elt_size. 
Its size is set to n.\n- * Depending on the input parameters, mempool elements can be either allocated\n- * together with the mempool header, or an externally provided memory buffer\n- * could be used to store mempool objects. In later case, that external\n- * memory buffer can consist of set of disjoint phyiscal pages.\n- *\n- * @param name\n- *   The name of the mempool.\n- * @param n\n- *   The number of elements in the mempool. The optimum size (in terms of\n- *   memory usage) for a mempool is when n is a power of two minus one:\n- *   n = (2^q - 1).\n- * @param elt_size\n- *   The size of each element.\n- * @param cache_size\n- *   If cache_size is non-zero, the rte_mempool library will try to\n- *   limit the accesses to the common lockless pool, by maintaining a\n- *   per-lcore object cache. This argument must be lower or equal to\n- *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose\n- *   cache_size to have \"n modulo cache_size == 0\": if this is\n- *   not the case, some elements will always stay in the pool and will\n- *   never be used. The access to the per-lcore table is of course\n- *   faster than the multi-producer/consumer pool. The cache can be\n- *   disabled if the cache_size argument is set to 0; it can be useful to\n- *   avoid losing objects in cache. Note that even if not used, the\n- *   memory space for cache is always reserved in a mempool structure,\n- *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.\n- * @param private_data_size\n- *   The size of the private data appended after the mempool\n- *   structure. This is useful for storing some private data after the\n- *   mempool structure, as is done for rte_mbuf_pool for example.\n- * @param mp_init\n- *   A function pointer that is called for initialization of the pool,\n- *   before object initialization. The user can initialize the private\n- *   data in this function if needed. This parameter can be NULL if\n- *   not needed.\n- * @param mp_init_arg\n- *   An opaque pointer to data that can be used in the mempool\n- *   constructor function.\n- * @param obj_init\n- *   A function pointer that is called for each object at\n- *   initialization of the pool. The user can set some meta data in\n- *   objects if needed. This parameter can be NULL if not needed.\n- *   The obj_init() function takes the mempool pointer, the init_arg,\n- *   the object pointer and the object number as parameters.\n- * @param obj_init_arg\n- *   An opaque pointer to data that can be used as an argument for\n- *   each call to the object constructor function.\n- * @param socket_id\n- *   The *socket_id* argument is the socket identifier in the case of\n- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n- *   constraint for the reserved zone.\n- * @param flags\n- *   The *flags* arguments is an OR of following flags:\n- *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread\n- *     between channels in RAM: the pool allocator will add padding\n- *     between objects depending on the hardware configuration. See\n- *     Memory alignment constraints for details. If this flag is set,\n- *     the allocator will just align them to a cache line.\n- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are\n- *     cache-aligned. This flag removes this constraint, and no\n- *     padding will be present between objects. 
This flag implies\n- *     MEMPOOL_F_NO_SPREAD.\n- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior\n- *     when using rte_mempool_put() or rte_mempool_put_bulk() is\n- *     \"single-producer\". Otherwise, it is \"multi-producers\".\n- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior\n- *     when using rte_mempool_get() or rte_mempool_get_bulk() is\n- *     \"single-consumer\". Otherwise, it is \"multi-consumers\".\n- * @param vaddr\n- *   Virtual address of the externally allocated memory buffer.\n- *   Will be used to store mempool objects.\n- * @param paddr\n- *   Array of phyiscall addresses of the pages that comprises given memory\n- *   buffer.\n- * @param pg_num\n- *   Number of elements in the paddr array.\n- * @param pg_shift\n- *   LOG2 of the physical pages size.\n- * @return\n- *   The pointer to the new allocated mempool, on success. NULL on error\n- *   with rte_errno set appropriately. Possible rte_errno values include:\n- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n- *    - E_RTE_SECONDARY - function was called from a secondary process instance\n- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list\n- *    - EINVAL - cache size provided is too large\n- *    - ENOSPC - the maximum number of memzones has already been allocated\n- *    - EEXIST - a memzone with the same name already exists\n- *    - ENOMEM - no appropriate memory area found in which to create memzone\n- */\n-struct rte_mempool *\n-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,\n-\t\tunsigned cache_size, unsigned private_data_size,\n-\t\trte_mempool_ctor_t *mp_init, void *mp_init_arg,\n-\t\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n-\t\tint socket_id, unsigned flags, void *vaddr,\n-\t\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);\n-\n-#ifdef RTE_LIBRTE_XEN_DOM0\n-/**\n- * Creates a new mempool named *name* in memory on Xen Dom0.\n- *\n- * This function uses ``rte_mempool_xmem_create()`` to allocate memory. The\n- * pool contains n elements of elt_size. Its size is set to n.\n- * All elements of the mempool are allocated together with the mempool header,\n- * and memory buffer can consist of set of disjoint phyiscal pages.\n- *\n- * @param name\n- *   The name of the mempool.\n- * @param n\n- *   The number of elements in the mempool. The optimum size (in terms of\n- *   memory usage) for a mempool is when n is a power of two minus one:\n- *   n = (2^q - 1).\n- * @param elt_size\n- *   The size of each element.\n- * @param cache_size\n- *   If cache_size is non-zero, the rte_mempool library will try to\n- *   limit the accesses to the common lockless pool, by maintaining a\n- *   per-lcore object cache. This argument must be lower or equal to\n- *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose\n- *   cache_size to have \"n modulo cache_size == 0\": if this is\n- *   not the case, some elements will always stay in the pool and will\n- *   never be used. The access to the per-lcore table is of course\n- *   faster than the multi-producer/consumer pool. The cache can be\n- *   disabled if the cache_size argument is set to 0; it can be useful to\n- *   avoid losing objects in cache. 
Note that even if not used, the\n- *   memory space for cache is always reserved in a mempool structure,\n- *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.\n- * @param private_data_size\n- *   The size of the private data appended after the mempool\n- *   structure. This is useful for storing some private data after the\n- *   mempool structure, as is done for rte_mbuf_pool for example.\n- * @param mp_init\n- *   A function pointer that is called for initialization of the pool,\n- *   before object initialization. The user can initialize the private\n- *   data in this function if needed. This parameter can be NULL if\n- *   not needed.\n- * @param mp_init_arg\n- *   An opaque pointer to data that can be used in the mempool\n- *   constructor function.\n- * @param obj_init\n- *   A function pointer that is called for each object at\n- *   initialization of the pool. The user can set some meta data in\n- *   objects if needed. This parameter can be NULL if not needed.\n- *   The obj_init() function takes the mempool pointer, the init_arg,\n- *   the object pointer and the object number as parameters.\n- * @param obj_init_arg\n- *   An opaque pointer to data that can be used as an argument for\n- *   each call to the object constructor function.\n- * @param socket_id\n- *   The *socket_id* argument is the socket identifier in the case of\n- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n- *   constraint for the reserved zone.\n- * @param flags\n- *   The *flags* arguments is an OR of following flags:\n- *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread\n- *     between channels in RAM: the pool allocator will add padding\n- *     between objects depending on the hardware configuration. See\n- *     Memory alignment constraints for details. If this flag is set,\n- *     the allocator will just align them to a cache line.\n- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are\n- *     cache-aligned. This flag removes this constraint, and no\n- *     padding will be present between objects. This flag implies\n- *     MEMPOOL_F_NO_SPREAD.\n- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior\n- *     when using rte_mempool_put() or rte_mempool_put_bulk() is\n- *     \"single-producer\". Otherwise, it is \"multi-producers\".\n- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior\n- *     when using rte_mempool_get() or rte_mempool_get_bulk() is\n- *     \"single-consumer\". Otherwise, it is \"multi-consumers\".\n- * @return\n- *   The pointer to the new allocated mempool, on success. NULL on error\n- *   with rte_errno set appropriately. 
Possible rte_errno values include:\n- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n- *    - E_RTE_SECONDARY - function was called from a secondary process instance\n- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list\n- *    - EINVAL - cache size provided is too large\n- *    - ENOSPC - the maximum number of memzones has already been allocated\n- *    - EEXIST - a memzone with the same name already exists\n- *    - ENOMEM - no appropriate memory area found in which to create memzone\n- */\n-struct rte_mempool *\n-rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,\n-\t\tunsigned cache_size, unsigned private_data_size,\n-\t\trte_mempool_ctor_t *mp_init, void *mp_init_arg,\n-\t\trte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,\n-\t\tint socket_id, unsigned flags);\n-#endif\n-\n-/**\n- * Dump the status of the mempool to the console.\n- *\n- * @param f\n- *   A pointer to a file for output\n- * @param mp\n- *   A pointer to the mempool structure.\n- */\n-void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);\n-\n-/**\n- * @internal Put several objects back in the mempool; used internally.\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to store back in the mempool, must be strictly\n- *   positive.\n- * @param is_mp\n- *   Mono-producer (0) or multi-producers (1).\n- */\n-static inline void __attribute__((always_inline))\n-__mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n-\t\t    unsigned n, int is_mp)\n-{\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-\tstruct rte_mempool_cache *cache;\n-\tuint32_t index;\n-\tvoid **cache_objs;\n-\tunsigned lcore_id = rte_lcore_id();\n-\tuint32_t cache_size = mp->cache_size;\n-\tuint32_t flushthresh = mp->cache_flushthresh;\n-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n-\n-\t/* increment stat now, adding in mempool always success */\n-\t__MEMPOOL_STAT_ADD(mp, put, n);\n-\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-\t/* cache is not enabled or single producer */\n-\tif (unlikely(cache_size == 0 || is_mp == 0))\n-\t\tgoto ring_enqueue;\n-\n-\t/* Go straight to ring if put would overflow mem allocated for cache */\n-\tif (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE))\n-\t\tgoto ring_enqueue;\n-\n-\tcache = &mp->local_cache[lcore_id];\n-\tcache_objs = &cache->objs[cache->len];\n-\n-\t/*\n-\t * The cache follows the following algorithm\n-\t *   1. Add the objects to the cache\n-\t *   2. 
Anything greater than the cache min value (if it crosses the\n-\t *   cache flush threshold) is flushed to the ring.\n-\t */\n-\n-\t/* Add elements back into the cache */\n-\tfor (index = 0; index < n; ++index, obj_table++)\n-\t\tcache_objs[index] = *obj_table;\n-\n-\tcache->len += n;\n-\n-\tif (cache->len >= flushthresh) {\n-\t\trte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],\n-\t\t\t\tcache->len - cache_size);\n-\t\tcache->len = cache_size;\n-\t}\n-\n-\treturn;\n-\n-ring_enqueue:\n-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n-\n-\t/* push remaining objects in ring */\n-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n-\tif (is_mp) {\n-\t\tif (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)\n-\t\t\trte_panic(\"cannot put objects in mempool\\n\");\n-\t}\n-\telse {\n-\t\tif (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)\n-\t\t\trte_panic(\"cannot put objects in mempool\\n\");\n-\t}\n-#else\n-\tif (is_mp)\n-\t\trte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);\n-\telse\n-\t\trte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);\n-#endif\n-}\n-\n-\n-/**\n- * Put several objects back in the mempool (multi-producers safe).\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the mempool from the obj_table.\n- */\n-static inline void __attribute__((always_inline))\n-rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n-\t\t\tunsigned n)\n-{\n-\t__mempool_check_cookies(mp, obj_table, n, 0);\n-\t__mempool_put_bulk(mp, obj_table, n, 1);\n-}\n-\n-/**\n- * Put several objects back in the mempool (NOT multi-producers safe).\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the mempool from obj_table.\n- */\n-static inline void\n-rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n-\t\t\tunsigned n)\n-{\n-\t__mempool_check_cookies(mp, obj_table, n, 0);\n-\t__mempool_put_bulk(mp, obj_table, n, 0);\n-}\n-\n-/**\n- * Put several objects back in the mempool.\n- *\n- * This function calls the multi-producer or the single-producer\n- * version depending on the default behavior that was specified at\n- * mempool creation time (see flags).\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the mempool from obj_table.\n- */\n-static inline void __attribute__((always_inline))\n-rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,\n-\t\t     unsigned n)\n-{\n-\t__mempool_check_cookies(mp, obj_table, n, 0);\n-\t__mempool_put_bulk(mp, obj_table, n, !(mp->flags & MEMPOOL_F_SP_PUT));\n-}\n-\n-/**\n- * Put one object in the mempool (multi-producers safe).\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj\n- *   A pointer to the object to be added.\n- */\n-static inline void __attribute__((always_inline))\n-rte_mempool_mp_put(struct rte_mempool *mp, void *obj)\n-{\n-\trte_mempool_mp_put_bulk(mp, &obj, 1);\n-}\n-\n-/**\n- * Put one object back in the mempool (NOT multi-producers safe).\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj\n- *   A pointer to the object to be added.\n- */\n-static inline void __attribute__((always_inline))\n-rte_mempool_sp_put(struct 
rte_mempool *mp, void *obj)\n-{\n-\trte_mempool_sp_put_bulk(mp, &obj, 1);\n-}\n-\n-/**\n- * Put one object back in the mempool.\n- *\n- * This function calls the multi-producer or the single-producer\n- * version depending on the default behavior that was specified at\n- * mempool creation time (see flags).\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj\n- *   A pointer to the object to be added.\n- */\n-static inline void __attribute__((always_inline))\n-rte_mempool_put(struct rte_mempool *mp, void *obj)\n-{\n-\trte_mempool_put_bulk(mp, &obj, 1);\n-}\n-\n-/**\n- * @internal Get several objects from the mempool; used internally.\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to get, must be strictly positive.\n- * @param is_mc\n- *   Mono-consumer (0) or multi-consumers (1).\n- * @return\n- *   - >=0: Success; number of objects supplied.\n- *   - <0: Error; code of ring dequeue function.\n- */\n-static inline int __attribute__((always_inline))\n-__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,\n-\t\t   unsigned n, int is_mc)\n-{\n-\tint ret;\n-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0\n-\tstruct rte_mempool_cache *cache;\n-\tuint32_t index, len;\n-\tvoid **cache_objs;\n-\tunsigned lcore_id = rte_lcore_id();\n-\tuint32_t cache_size = mp->cache_size;\n-\n-\t/* cache is not enabled or single consumer */\n-\tif (unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size))\n-\t\tgoto ring_dequeue;\n-\n-\tcache = &mp->local_cache[lcore_id];\n-\tcache_objs = cache->objs;\n-\n-\t/* Can this be satisfied from the cache? */\n-\tif (cache->len < n) {\n-\t\t/* No. Backfill the cache first, and then fill from it */\n-\t\tuint32_t req = n + (cache_size - cache->len);\n-\n-\t\t/* How many do we require i.e. number to fill the cache + the request */\n-\t\tret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);\n-\t\tif (unlikely(ret < 0)) {\n-\t\t\t/*\n-\t\t\t * In the offchance that we are buffer constrained,\n-\t\t\t * where we are not able to allocate cache + n, go to\n-\t\t\t * the ring directly. If that fails, we are truly out of\n-\t\t\t * buffers.\n-\t\t\t */\n-\t\t\tgoto ring_dequeue;\n-\t\t}\n-\n-\t\tcache->len += req;\n-\t}\n-\n-\t/* Now fill in the response ... */\n-\tfor (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)\n-\t\t*obj_table = cache_objs[len];\n-\n-\tcache->len -= n;\n-\n-\t__MEMPOOL_STAT_ADD(mp, get_success, n);\n-\n-\treturn 0;\n-\n-ring_dequeue:\n-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */\n-\n-\t/* get remaining objects from ring */\n-\tif (is_mc)\n-\t\tret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);\n-\telse\n-\t\tret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);\n-\n-\tif (ret < 0)\n-\t\t__MEMPOOL_STAT_ADD(mp, get_fail, n);\n-\telse\n-\t\t__MEMPOOL_STAT_ADD(mp, get_success, n);\n-\n-\treturn ret;\n-}\n-\n-/**\n- * Get several objects from the mempool (multi-consumers safe).\n- *\n- * If cache is enabled, objects will be retrieved first from cache,\n- * subsequently from the common pool. 
Note that it can return -ENOENT when\n- * the local cache and common pool are empty, even if cache from other\n- * lcores are full.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to get from mempool to obj_table.\n- * @return\n- *   - 0: Success; objects taken.\n- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n- */\n-static inline int __attribute__((always_inline))\n-rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)\n-{\n-\tint ret;\n-\tret = __mempool_get_bulk(mp, obj_table, n, 1);\n-\tif (ret == 0)\n-\t\t__mempool_check_cookies(mp, obj_table, n, 1);\n-\treturn ret;\n-}\n-\n-/**\n- * Get several objects from the mempool (NOT multi-consumers safe).\n- *\n- * If cache is enabled, objects will be retrieved first from cache,\n- * subsequently from the common pool. Note that it can return -ENOENT when\n- * the local cache and common pool are empty, even if cache from other\n- * lcores are full.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to get from the mempool to obj_table.\n- * @return\n- *   - 0: Success; objects taken.\n- *   - -ENOENT: Not enough entries in the mempool; no object is\n- *     retrieved.\n- */\n-static inline int __attribute__((always_inline))\n-rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)\n-{\n-\tint ret;\n-\tret = __mempool_get_bulk(mp, obj_table, n, 0);\n-\tif (ret == 0)\n-\t\t__mempool_check_cookies(mp, obj_table, n, 1);\n-\treturn ret;\n-}\n-\n-/**\n- * Get several objects from the mempool.\n- *\n- * This function calls the multi-consumers or the single-consumer\n- * version, depending on the default behaviour that was specified at\n- * mempool creation time (see flags).\n- *\n- * If cache is enabled, objects will be retrieved first from cache,\n- * subsequently from the common pool. Note that it can return -ENOENT when\n- * the local cache and common pool are empty, even if cache from other\n- * lcores are full.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to get from the mempool to obj_table.\n- * @return\n- *   - 0: Success; objects taken\n- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n- */\n-static inline int __attribute__((always_inline))\n-rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)\n-{\n-\tint ret;\n-\tret = __mempool_get_bulk(mp, obj_table, n,\n-\t\t\t\t !(mp->flags & MEMPOOL_F_SC_GET));\n-\tif (ret == 0)\n-\t\t__mempool_check_cookies(mp, obj_table, n, 1);\n-\treturn ret;\n-}\n-\n-/**\n- * Get one object from the mempool (multi-consumers safe).\n- *\n- * If cache is enabled, objects will be retrieved first from cache,\n- * subsequently from the common pool. 
Note that it can return -ENOENT when\n- * the local cache and common pool are empty, even if cache from other\n- * lcores are full.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_p\n- *   A pointer to a void * pointer (object) that will be filled.\n- * @return\n- *   - 0: Success; objects taken.\n- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n- */\n-static inline int __attribute__((always_inline))\n-rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)\n-{\n-\treturn rte_mempool_mc_get_bulk(mp, obj_p, 1);\n-}\n-\n-/**\n- * Get one object from the mempool (NOT multi-consumers safe).\n- *\n- * If cache is enabled, objects will be retrieved first from cache,\n- * subsequently from the common pool. Note that it can return -ENOENT when\n- * the local cache and common pool are empty, even if cache from other\n- * lcores are full.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_p\n- *   A pointer to a void * pointer (object) that will be filled.\n- * @return\n- *   - 0: Success; objects taken.\n- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n- */\n-static inline int __attribute__((always_inline))\n-rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)\n-{\n-\treturn rte_mempool_sc_get_bulk(mp, obj_p, 1);\n-}\n-\n-/**\n- * Get one object from the mempool.\n- *\n- * This function calls the multi-consumers or the single-consumer\n- * version, depending on the default behavior that was specified at\n- * mempool creation (see flags).\n- *\n- * If cache is enabled, objects will be retrieved first from cache,\n- * subsequently from the common pool. Note that it can return -ENOENT when\n- * the local cache and common pool are empty, even if cache from other\n- * lcores are full.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param obj_p\n- *   A pointer to a void * pointer (object) that will be filled.\n- * @return\n- *   - 0: Success; objects taken.\n- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.\n- */\n-static inline int __attribute__((always_inline))\n-rte_mempool_get(struct rte_mempool *mp, void **obj_p)\n-{\n-\treturn rte_mempool_get_bulk(mp, obj_p, 1);\n-}\n-\n-/**\n- * Return the number of entries in the mempool.\n- *\n- * When cache is enabled, this function has to browse the length of\n- * all lcores, so it should not be used in a data path, but only for\n- * debug purposes.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @return\n- *   The number of entries in the mempool.\n- */\n-unsigned rte_mempool_count(const struct rte_mempool *mp);\n-\n-/**\n- * Return the number of free entries in the mempool ring.\n- * i.e. how many entries can be freed back to the mempool.\n- *\n- * NOTE: This corresponds to the number of elements *allocated* from the\n- * memory pool, not the number of elements in the pool itself. 
To count\n- * the number of elements currently available in the pool, use \"rte_mempool_count\"\n- *\n- * When cache is enabled, this function has to browse the length of\n- * all lcores, so it should not be used in a data path, but only for\n- * debug purposes.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @return\n- *   The number of free entries in the mempool.\n- */\n-static inline unsigned\n-rte_mempool_free_count(const struct rte_mempool *mp)\n-{\n-\treturn mp->size - rte_mempool_count(mp);\n-}\n-\n-/**\n- * Test if the mempool is full.\n- *\n- * When cache is enabled, this function has to browse the length of all\n- * lcores, so it should not be used in a data path, but only for debug\n- * purposes.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @return\n- *   - 1: The mempool is full.\n- *   - 0: The mempool is not full.\n- */\n-static inline int\n-rte_mempool_full(const struct rte_mempool *mp)\n-{\n-\treturn !!(rte_mempool_count(mp) == mp->size);\n-}\n-\n-/**\n- * Test if the mempool is empty.\n- *\n- * When cache is enabled, this function has to browse the length of all\n- * lcores, so it should not be used in a data path, but only for debug\n- * purposes.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @return\n- *   - 1: The mempool is empty.\n- *   - 0: The mempool is not empty.\n- */\n-static inline int\n-rte_mempool_empty(const struct rte_mempool *mp)\n-{\n-\treturn !!(rte_mempool_count(mp) == 0);\n-}\n-\n-/**\n- * Return the physical address of elt, which is an element of the pool mp.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @param elt\n- *   A pointer (virtual address) to the element of the pool.\n- * @return\n- *   The physical address of the elt element.\n- */\n-static inline phys_addr_t\n-rte_mempool_virt2phy(const struct rte_mempool *mp, const void *elt)\n-{\n-\tif (rte_eal_has_hugepages()) {\n-\t\tuintptr_t off;\n-\n-\t\toff = (const char *)elt - (const char *)mp->elt_va_start;\n-\t\treturn (mp->elt_pa[off >> mp->pg_shift] + (off & mp->pg_mask));\n-\t} else {\n-\t\t/*\n-\t\t * If huge pages are disabled, we cannot assume the\n-\t\t * memory region to be physically contiguous.\n-\t\t * Look up each element.\n-\t\t */\n-\t\treturn rte_mem_virt2phy(elt);\n-\t}\n-}\n-\n-/**\n- * Check the consistency of mempool objects.\n- *\n- * Verify the coherency of fields in the mempool structure. Also check\n- * that the cookies of mempool objects (even the ones that are not\n- * present in the pool) have a correct value. If not, a panic will occur.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- */\n-void rte_mempool_audit(const struct rte_mempool *mp);\n-\n-/**\n- * Return a pointer to the private data in a mempool structure.\n- *\n- * @param mp\n- *   A pointer to the mempool structure.\n- * @return\n- *   A pointer to the private data.\n- */\n-static inline void *rte_mempool_get_priv(struct rte_mempool *mp)\n-{\n-\treturn (char *)mp + MEMPOOL_HEADER_SIZE(mp, mp->pg_num);\n-}\n-\n-/**\n- * Dump the status of all mempools on the console\n- *\n- * @param f\n- *   A pointer to a file for output\n- */\n-void rte_mempool_list_dump(FILE *f);\n-\n-/**\n- * Search for a mempool by its name\n- *\n- * @param name\n- *   The name of the mempool.\n- * @return\n- *   The pointer to the mempool matching the name, or NULL if not found.\n- *   NULL on error\n- *   with rte_errno set appropriately. 
Possible rte_errno values include:\n- *    - ENOENT - required entry not available to return.\n- *\n- */\n-struct rte_mempool *rte_mempool_lookup(const char *name);\n-\n-/**\n- * Given a desired size of the mempool element and mempool flags,\n- * calculates header, trailer, body and total sizes of the mempool object.\n- * @param elt_size\n- *   The size of each element.\n- * @param flags\n- *   The flags used for the mempool creation.\n- *   Consult rte_mempool_create() for more information about possible values.\n- * @return\n- *   Total size of the mempool object.\n- */\n-uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,\n-\tstruct rte_mempool_objsz *sz);\n-\n-/**\n- * Calculate the maximum amount of memory required to store a given number of objects.\n- * Assumes that the memory buffer will be aligned at a page boundary.\n- * Note that if the object size is bigger than the page size, it assumes that\n- * we have a subset of physically contiguous pages big enough to store\n- * at least one object.\n- * @param elt_num\n- *   Number of elements.\n- * @param elt_sz\n- *   The size of each element.\n- * @param pg_shift\n- *   LOG2 of the physical page size.\n- * @return\n- *   Required memory size aligned at page boundary.\n- */\n-size_t rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz,\n-\tuint32_t pg_shift);\n-\n-/**\n- * Calculate how much memory would actually be required with the given\n- * memory footprint to store the required number of objects.\n- * @param vaddr\n- *   Virtual address of the externally allocated memory buffer.\n- *   Will be used to store mempool objects.\n- * @param elt_num\n- *   Number of elements.\n- * @param elt_sz\n- *   The size of each element.\n- * @param paddr\n- *   Array of physical addresses of the pages that comprise the given memory\n- *   buffer.\n- * @param pg_num\n- *   Number of elements in the paddr array.\n- * @param pg_shift\n- *   LOG2 of the physical page size.\n- * @return\n- *   Number of bytes needed to store the given number of objects,\n- *   aligned to the given page size.\n- *   If the provided memory buffer is not big enough:\n- *   (-1) * the actual number of elements that can be stored in that buffer.\n- */\n-ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,\n-\tconst phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);\n-\n-/**\n- * Walk the list of all memory pools\n- *\n- * @param func\n- *   Iterator function\n- * @param arg\n- *   Argument passed to iterator\n- */\n-void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),\n-\t\t      void *arg);\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* _RTE_MEMPOOL_H_ */\n",
    "prefixes": [
        "dpdk-dev",
        "RFC",
        "05/13"
    ]
}
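The doxygen comments in the removed header above document the mempool get/count API that this patch relocates. As an illustration only, here is a minimal usage sketch against that documented API; the pool name "test_pool", the burst size of 32, and the helper name drain_and_refill() are assumptions made for this example and are not part of the patch.

/* Minimal sketch: look up an existing pool, take a burst of objects,
 * inspect the remaining count, and return the objects. Assumes a pool
 * named "test_pool" was created elsewhere with rte_mempool_create(). */
#include <stdio.h>
#include <rte_mempool.h>

#define BURST 32

static int
drain_and_refill(void)
{
    struct rte_mempool *mp;
    void *objs[BURST];

    mp = rte_mempool_lookup("test_pool");
    if (mp == NULL)
        return -1; /* rte_errno is set by rte_mempool_lookup() */

    /* Bulk get is all-or-nothing: either all BURST objects are taken,
     * or -ENOENT is returned and none are taken. */
    if (rte_mempool_get_bulk(mp, objs, BURST) != 0)
        return -1;

    /* Debug-only counter; not meant for the data path. */
    printf("entries still available: %u\n", rte_mempool_count(mp));

    /* Return every object that was taken. */
    rte_mempool_put_bulk(mp, objs, BURST);
    return 0;
}

The sketch relies on the all-or-nothing semantics spelled out in the header comments: rte_mempool_get_bulk() can return -ENOENT even when caches on other lcores hold objects, so callers should treat a failed bulk get as transient rather than fatal.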