[dpdk-dev,12/15] eal/tile: add mPIPE buffer stack mempool provider

Message ID 1418029178-25162-13-git-send-email-zlu@ezchip.com (mailing list archive)
State Superseded, archived
Headers

Commit Message

Zhigang Lu Dec. 8, 2014, 8:59 a.m. UTC
  TileGX: Modified mempool to allow for variable metadata.
Signed-off-by: Zhigang Lu <zlu@ezchip.com>
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
---
 app/test-pmd/mempool_anon.c           |   2 +-
 app/test/Makefile                     |   6 +-
 app/test/test_mempool_tile.c          | 217 ++++++++++++
 lib/Makefile                          |   5 +
 lib/librte_eal/linuxapp/eal/Makefile  |   4 +
 lib/librte_mempool_tile/Makefile      |  48 +++
 lib/librte_mempool_tile/rte_mempool.c | 381 ++++++++++++++++++++
 lib/librte_mempool_tile/rte_mempool.h | 634 ++++++++++++++++++++++++++++++++++
 8 files changed, 1295 insertions(+), 2 deletions(-)
 create mode 100644 app/test/test_mempool_tile.c
 create mode 100644 lib/librte_mempool_tile/Makefile
 create mode 100644 lib/librte_mempool_tile/rte_mempool.c
 create mode 100644 lib/librte_mempool_tile/rte_mempool.h
  

Comments

Neil Horman Dec. 9, 2014, 2:07 p.m. UTC | #1
On Mon, Dec 08, 2014 at 04:59:35PM +0800, Zhigang Lu wrote:
> TileGX: Modified mempool to allow for variable metadata.
> Signed-off-by: Zhigang Lu <zlu@ezchip.com>
> Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
> ---
>  app/test-pmd/mempool_anon.c           |   2 +-
>  app/test/Makefile                     |   6 +-
>  app/test/test_mempool_tile.c          | 217 ++++++++++++
>  lib/Makefile                          |   5 +
>  lib/librte_eal/linuxapp/eal/Makefile  |   4 +
>  lib/librte_mempool_tile/Makefile      |  48 +++
>  lib/librte_mempool_tile/rte_mempool.c | 381 ++++++++++++++++++++
>  lib/librte_mempool_tile/rte_mempool.h | 634 ++++++++++++++++++++++++++++++++++
>  8 files changed, 1295 insertions(+), 2 deletions(-)
>  create mode 100644 app/test/test_mempool_tile.c
>  create mode 100644 lib/librte_mempool_tile/Makefile
>  create mode 100644 lib/librte_mempool_tile/rte_mempool.c
>  create mode 100644 lib/librte_mempool_tile/rte_mempool.h
> 
NAK, this creates an alternate, parallel implementation of the mempool API that
re-implements some aspects of the mempool API, but not others.  This will make
for completely non-portable applications (both to and from the tile arch), and
create maintenance problems, in that features for mempool will need to be
implemented in multiple libraries.

I understand wanting to use mPIPE, and that's perfectly fine, but creating
non-portable APIs to do so isn't the right way to go.  Instead, why not just
allow applications to use mPIPE by initializing it via the gxio library and
creating a mempool using the existing library's rte_mempool_xmem_create API
call, which allows for existing allocated memory space to be managed as a
mempool?
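
Something along these lines, say, as a rough and untested sketch (the gxio
calls and rte_eal_mpipe_context() are the ones these patches already add;
error cleanup and per-page paddr[] handling are elided):

#include <rte_memzone.h>
#include <rte_mempool.h>
#include <rte_mpipe.h>	/* mPIPE EAL glue from this patch series */

/* gxio owns the mPIPE hardware setup; the stock mempool library owns
 * and manages the memory. */
static struct rte_mempool *
mpipe_backed_pool(const char *name, unsigned n, unsigned elt_size)
{
	gxio_mpipe_context_t *ctx = rte_eal_mpipe_context(0);
	const struct rte_memzone *mz;
	phys_addr_t paddr[1];

	if (ctx == NULL)
		return NULL;

	/* One physically contiguous zone holding all the elements. */
	mz = rte_memzone_reserve(name, n * elt_size, SOCKET_ID_ANY,
				 RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY);
	if (mz == NULL)
		return NULL;

	/* Set up the hardware exactly as this patch already does:
	 * gxio_mpipe_alloc_buffer_stacks(), gxio_mpipe_init_buffer_stack(),
	 * gxio_mpipe_register_page() over the zone, and so on. */

	/* ...then hand the same memory to the existing mempool code.
	 * (One hugepage assumed here; real code would fill paddr[] per
	 * page, as app/test-pmd/mempool_anon.c does.) */
	paddr[0] = mz->phys_addr;
	return rte_mempool_xmem_create(name, n, elt_size, 0, 0,
				       NULL, NULL, NULL, NULL,
				       SOCKET_ID_ANY, 0, mz->addr, paddr,
				       1, __builtin_ctzll(mz->hugepage_sz));
}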

Neil
  
Zhigang Lu Dec. 12, 2014, 8:30 a.m. UTC | #2
>-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Neil Horman
>Sent: Tuesday, December 09, 2014 10:07 PM
>To: Zhigang Lu
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 12/15] eal/tile: add mPIPE buffer stack
>mempool provider
>
>On Mon, Dec 08, 2014 at 04:59:35PM +0800, Zhigang Lu wrote:
>> TileGX: Modified mempool to allow for variable metadata.
>> Signed-off-by: Zhigang Lu <zlu@ezchip.com>
>> Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
>> ---
>>  app/test-pmd/mempool_anon.c           |   2 +-
>>  app/test/Makefile                     |   6 +-
>>  app/test/test_mempool_tile.c          | 217 ++++++++++++
>>  lib/Makefile                          |   5 +
>>  lib/librte_eal/linuxapp/eal/Makefile  |   4 +
>>  lib/librte_mempool_tile/Makefile      |  48 +++
>>  lib/librte_mempool_tile/rte_mempool.c | 381 ++++++++++++++++++++
>>  lib/librte_mempool_tile/rte_mempool.h | 634 ++++++++++++++++++++++++++++++++++
>>  8 files changed, 1295 insertions(+), 2 deletions(-)
>>  create mode 100644 app/test/test_mempool_tile.c
>>  create mode 100644 lib/librte_mempool_tile/Makefile
>>  create mode 100644 lib/librte_mempool_tile/rte_mempool.c
>>  create mode 100644 lib/librte_mempool_tile/rte_mempool.h
>>
>NAK, this creates an alternate, parallel implementation of the mempool API
>that re-implements some aspects of the mempool API, but not others.  This
>will make for completely non-portable applications (both to and from the
>tile arch), and create maintenance problems, in that features for mempool
>will need to be implemented in multiple libraries.
>
>I understand wanting to use mPIPE, and that's perfectly fine, but creating
>non-portable APIs to do so isn't the right way to go.  Instead, why not
>just allow applications to use mPIPE by initializing it via the gxio
>library and creating a mempool using the existing library's
>rte_mempool_xmem_create API call, which allows for existing allocated
>memory space to be managed as a mempool?

Yes, the mempool we are using is very much tile-specific, as we want to use
the mPIPE hardware-managed buffer pool to implement the mempool, which
greatly improves mempool performance.

As Cyril replied in a previous email: the alternative is to not include
support for the hardware-managed buffer pool, but that decision incurs a
significant performance hit.
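
To make the cost difference concrete: with the hardware pool, the entire
alloc/free fast path collapses to one mPIPE buffer stack operation each
way.  A sketch, using the helpers this patch adds:

/* Allocate: pop a buffer straight off the hardware stack. */
static inline void *
mpipe_alloc(struct rte_mempool *mp)
{
	void *buf = gxio_mpipe_pop_buffer(mp->context, mp->stack_idx);

	/* NULL means the stack ran dry; otherwise step back over the
	 * metadata to the object (mbuf) address. */
	return buf ? __mempool_buf_to_obj(mp, buf) : NULL;
}

/* Free: push the 128-byte-aligned buffer address back on the stack;
 * no ring enqueue and no per-lcore cache are involved. */
static inline void
mpipe_free(struct rte_mempool *mp, void *obj)
{
	gxio_mpipe_push_buffer(mp->context, mp->stack_idx,
			       __mempool_obj_to_buf(mp, obj));
}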
  
Neil Horman Dec. 12, 2014, 1:03 p.m. UTC | #3
On Fri, Dec 12, 2014 at 04:30:27PM +0800, Tony Lu wrote:
> >-----Original Message-----
> >From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Neil Horman
> >Sent: Tuesday, December 09, 2014 10:07 PM
> >To: Zhigang Lu
> >Cc: dev@dpdk.org
> >Subject: Re: [dpdk-dev] [PATCH 12/15] eal/tile: add mPIPE buffer stack
> >mempool provider
> >
> >On Mon, Dec 08, 2014 at 04:59:35PM +0800, Zhigang Lu wrote:
> >> TileGX: Modified mempool to allow for variable metadata.
> >> Signed-off-by: Zhigang Lu <zlu@ezchip.com>
> >> Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
> >> ---
> >>  app/test-pmd/mempool_anon.c           |   2 +-
> >>  app/test/Makefile                     |   6 +-
> >>  app/test/test_mempool_tile.c          | 217 ++++++++++++
> >>  lib/Makefile                          |   5 +
> >>  lib/librte_eal/linuxapp/eal/Makefile  |   4 +
> >>  lib/librte_mempool_tile/Makefile      |  48 +++
> >>  lib/librte_mempool_tile/rte_mempool.c | 381 ++++++++++++++++++++
> >>  lib/librte_mempool_tile/rte_mempool.h | 634 ++++++++++++++++++++++++++++++++++
> >>  8 files changed, 1295 insertions(+), 2 deletions(-)
> >>  create mode 100644 app/test/test_mempool_tile.c
> >>  create mode 100644 lib/librte_mempool_tile/Makefile
> >>  create mode 100644 lib/librte_mempool_tile/rte_mempool.c
> >>  create mode 100644 lib/librte_mempool_tile/rte_mempool.h
> >>
> >NAK, this creates an alternate, parallel implementation of the mempool API
> >that re-implements some aspects of the mempool API, but not others.  This
> >will make for completely non-portable applications (both to and from the
> >tile arch), and create maintenance problems, in that features for mempool
> >will need to be implemented in multiple libraries.
> >
> >I understand wanting to use mPIPE, and that's perfectly fine, but creating
> >non-portable APIs to do so isn't the right way to go.  Instead, why not
> >just allow applications to use mPIPE by initializing it via the gxio
> >library and creating a mempool using the existing library's
> >rte_mempool_xmem_create API call, which allows for existing allocated
> >memory space to be managed as a mempool?
> 
> Yes, the mempool we are using is very much tile-specific, as we want to use
> the mPIPE hardware-managed buffer pool to implement the mempool, which
> greatly improves mempool performance.
> 
> As Cyril replied in a previous email: the alternative is to not include
> support for the hardware-managed buffer pool, but that decision incurs a
> significant performance hit.
> 
You're not reading my previous notes clearly.  There is no need to completely
re-implement the mempool interface, and doing so is completely broken.  There
already exists a mechanism in the existing interface that allows you to pass
pre-allocated memory to mempool for management.  As such, you can use the
existing gxio library to allocate and initialize your mPIPE hardware, and use
the existing mempool infrastructure to manage it.

Until then, I reiterate my NAK.

Neil
  

Patch

diff --git a/app/test-pmd/mempool_anon.c b/app/test-pmd/mempool_anon.c
index 559a625..05b4c77 100644
--- a/app/test-pmd/mempool_anon.c
+++ b/app/test-pmd/mempool_anon.c
@@ -36,7 +36,7 @@ 
 #include "mempool_osdep.h"
 #include <rte_errno.h>
 
-#ifdef RTE_EXEC_ENV_LINUXAPP
+#if defined(RTE_EXEC_ENV_LINUXAPP) && !defined(RTE_ARCH_TILE)
 
 #include <fcntl.h>
 #include <unistd.h>
diff --git a/app/test/Makefile b/app/test/Makefile
index 4311f96..40ad971 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -72,10 +72,14 @@  SRCS-y += test_rwlock.c
 SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
 SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
 
+ifeq ($(CONFIG_RTE_ARCH),"tile")
+SRCS-y += test_mempool_tile.c
+else
 SRCS-y += test_mempool.c
 SRCS-y += test_mempool_perf.c
-
 SRCS-y += test_mbuf.c
+endif
+
 SRCS-y += test_logs.c
 
 SRCS-y += test_memcpy.c
diff --git a/app/test/test_mempool_tile.c b/app/test/test_mempool_tile.c
new file mode 100644
index 0000000..41a375d
--- /dev/null
+++ b/app/test/test_mempool_tile.c
@@ -0,0 +1,217 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_cycles.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_spinlock.h>
+#include <rte_malloc.h>
+
+#include "test.h"
+
+/*
+ * Mempool
+ * =======
+ *
+ * Basic tests: done on one core with and without cache:
+ *
+ *    - Get one object, put one object
+ *    - Get two objects, put two objects
+ *    - Get all objects, test that their content is not modified and
+ *      put them back in the pool.
+ */
+
+#define MEMPOOL_ELT_SIZE 2048
+#define MEMPOOL_SIZE (8 * 1024)
+
+static struct rte_mempool *mp;
+
+
+/*
+ * save the object number in the first 4 bytes of object data. All
+ * other bytes are set to 0.
+ */
+static void
+my_obj_init(struct rte_mempool *mp, __attribute__((unused)) void *arg,
+	    void *obj, unsigned i)
+{
+	uint32_t *objnum = obj;
+
+	memset(obj, 0, mp->elt_size);
+	*objnum = i;
+}
+
+/* basic tests (done on one core) */
+static int
+test_mempool_basic(void)
+{
+	uint32_t *objnum;
+	void **objtable;
+	void *obj, *obj2;
+	char *obj_data;
+	int ret = 0;
+	unsigned i, j;
+
+	/* dump the mempool status */
+	rte_mempool_dump(stdout, mp);
+
+	printf("get an object\n");
+	if (rte_mempool_get(mp, &obj) < 0)
+		return -1;
+	rte_mempool_dump(stdout, mp);
+
+	printf("get private data\n");
+	if (rte_mempool_get_priv(mp) != (char *) mp + sizeof(*mp))
+		return -1;
+
+	printf("get physical address of an object\n");
+	if (rte_mempool_virt2phy(mp, obj) !=
+			(phys_addr_t) (mp->phys_addr +
+			(phys_addr_t) ((char *) obj - (char *) mp)))
+		return -1;
+
+	printf("put the object back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_dump(stdout, mp);
+
+	printf("get 2 objects\n");
+	if (rte_mempool_get(mp, &obj) < 0)
+		return -1;
+	if (rte_mempool_get(mp, &obj2) < 0) {
+		rte_mempool_put(mp, obj);
+		return -1;
+	}
+	rte_mempool_dump(stdout, mp);
+
+	printf("put the objects back\n");
+	rte_mempool_put(mp, obj);
+	rte_mempool_put(mp, obj2);
+	rte_mempool_dump(stdout, mp);
+
+	/*
+	 * get many objects: we cannot get them all because the cache
+	 * on other cores may not be empty.
+	 */
+	objtable = malloc(MEMPOOL_SIZE * sizeof(void *));
+	if (objtable == NULL)
+		return -1;
+
+	for (i = 0; i < MEMPOOL_SIZE; i++) {
+		if (rte_mempool_get(mp, &objtable[i]) < 0)
+			break;
+	}
+
+	printf("got %d buffers\n", i);
+	rte_mempool_dump(stdout, mp);
+
+	/*
+	 * for each object, check that its content was not modified,
+	 * and put objects back in pool
+	 */
+	while (i--) {
+		obj = objtable[i];
+		obj_data = obj;
+		objnum = obj;
+		if (*objnum > MEMPOOL_SIZE) {
+			printf("bad object number %u @%p\n", *objnum, objnum);
+			ret = -1;
+			break;
+		}
+		for (j = sizeof(*objnum); j < mp->elt_size; j++) {
+			if (obj_data[j] != 0)
+				ret = -1;
+		}
+
+		rte_mempool_put(mp, objtable[i]);
+	}
+
+	free(objtable);
+	if (ret == -1)
+		printf("objects were modified!\n");
+
+	return ret;
+}
+
+static int
+test_mempool(void)
+{
+	/* create a mempool (without cache) */
+	if (mp == NULL)
+		mp = rte_mempool_create("test", MEMPOOL_SIZE,
+						MEMPOOL_ELT_SIZE, 0, 0,
+						NULL, NULL,
+						my_obj_init, NULL,
+						SOCKET_ID_ANY, 0);
+	if (mp == NULL)
+		return -1;
+
+	/* retrieve the mempool from its name */
+	if (rte_mempool_lookup("test") != mp) {
+		printf("Cannot lookup mempool from its name\n");
+		return -1;
+	}
+
+	/* basic tests without cache */
+	if (test_mempool_basic() < 0)
+		return -1;
+
+	return 0;
+}
+
+static struct test_command mempool_cmd = {
+	.command = "mempool_autotest",
+	.callback = test_mempool,
+};
+REGISTER_TEST_COMMAND(mempool_cmd);
diff --git a/lib/Makefile b/lib/Makefile
index 0ffc982..60aa0ff 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -34,7 +34,12 @@  include $(RTE_SDK)/mk/rte.vars.mk
 DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += librte_malloc
 DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
+ifeq ($(CONFIG_RTE_ARCH),"tile")
+DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool_tile
+ODIR-librte_mempool_tile = librte_mempool
+else
 DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
+endif
 DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 99536b6..74aa503 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -39,7 +39,11 @@  CFLAGS += -I$(SRCDIR)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_ring
+ifeq ($(CONFIG_RTE_ARCH),"tile")
+CFLAGS += -I$(RTE_SDK)/lib/librte_mempool_tile
+else
 CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
+endif
 CFLAGS += -I$(RTE_SDK)/lib/librte_malloc
 CFLAGS += -I$(RTE_SDK)/lib/librte_ether
 CFLAGS += -I$(RTE_SDK)/lib/librte_ivshmem
diff --git a/lib/librte_mempool_tile/Makefile b/lib/librte_mempool_tile/Makefile
new file mode 100644
index 0000000..65e8fa0
--- /dev/null
+++ b/lib/librte_mempool_tile/Makefile
@@ -0,0 +1,48 @@ 
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mempool.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -I$(RTE_SDK)/lib/librte_mbuf
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_eal
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mempool_tile/rte_mempool.c b/lib/librte_mempool_tile/rte_mempool.c
new file mode 100644
index 0000000..96ecaac
--- /dev/null
+++ b/lib/librte_mempool_tile/rte_mempool.c
@@ -0,0 +1,381 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_mbuf.h>
+#include <rte_eal_memconfig.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_errno.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+
+#include "rte_mempool.h"
+
+/*
+ * The mPIPE buffer stack requires 128 byte aligned buffers, and restricts
+ * available headroom to a max of 127 bytes.  As a result of these
+ * restrictions, we keep the buffer metadata (mbuf, ofpbuf, etc.) outside the
+ * "mPIPE buffer", with some careful alignment jugglery and probing.
+ *
+ * The layout of each element in the mPIPE mempool is therefore as follows:
+ *
+ *   +--------------------------------+
+ *   | variable sized pad space       |
+ *   +--------------------------------+ cacheline boundary
+ *   | struct rte_mempool_obj_header  |
+ *   +--------------------------------+ cacheline boundary
+ *   | struct rte_mbuf/ofpbuf/...     |
+ *   +--------------------------------+ 128 byte aligned boundary
+ *   | buffer data headroom           |
+ *   |                                |
+ *   | buffer data                    |
+ *   |                                |
+ *   +--------------------------------+
+ *
+ * The addresses pushed into the mPIPE buffer stack are the 128 byte aligned
+ * start address of the buffer headroom.  The mempool user transacts in
+ * objects that point to the start of the rte_mbuf/ofpbuf structures in the
+ * element.  These objects cannot directly be pushed into the mPIPE buffer
+ * stack.
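+ *
+ * As a concrete illustration (numbers are hypothetical): with
+ * meta_size == 128 and an mPIPE buffer size of 1664 bytes, the object
+ * pointer handed to the user is buf - 128, and __mempool_obj_to_buf()
+ * recovers the address to push back on the stack by adding the 128
+ * bytes of metadata again.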
+ */
+
+#define BSM_ALIGN_SIZE		128	/* mPIPE buffers are 128 byte aligned */
+#define BSM_MEM_ALIGN		65536
+#define BSM_MAX_META		(4 * RTE_CACHE_LINE_SIZE)
+
+TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
+
+const struct rte_mempool *last_mp;
+
+/* register buffer memory with mPIPE. */
+static int
+__bsm_register_mem(struct rte_mempool *mp, void *mem, size_t size)
+{
+	uintptr_t pagesz = mp->mz->hugepage_sz;
+	uintptr_t start = (uintptr_t)mem & ~(pagesz - 1);
+	uintptr_t end = (uintptr_t)mem + size;
+	int rc, count = 0;
+
+	while (start < end) {
+		rc = gxio_mpipe_register_page(mp->context, mp->stack_idx,
+					      (void *)start, pagesz, 0);
+		if (rc < 0)
+			return rc;
+		start += pagesz;
+		count++;
+	}
+	return count;
+}
+
+/* Probe the element constructor to figure out the metadata size. */
+static unsigned
+__bsm_meta_size(unsigned elt_size, rte_mempool_obj_ctor_t *obj_init,
+		void *obj_init_arg)
+{
+	struct rte_mempool mp;
+	unsigned offset;
+	union {
+		struct rte_mbuf mbuf;
+		unsigned char data[elt_size];
+	} u;
+
+	if (!obj_init) {
+		RTE_LOG(ERR, MEMPOOL, "No constructor to probe meta size!\n");
+		return 0;
+	}
+
+	memset(&u, 0, sizeof(u));
+	memset(&mp, 0, sizeof(mp));
+
+	mp.elt_size = elt_size;
+
+	obj_init(&mp, obj_init_arg, &u.mbuf, 0);
+
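+	/* The constructor (e.g. an mbuf initializer) is expected to set
+	 * buf_addr to the start of the data area; the distance from the
+	 * object start to buf_addr is therefore the metadata size. */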
+	offset = (uintptr_t)u.mbuf.buf_addr - (uintptr_t)&u.mbuf;
+	if (offset > BSM_MAX_META) {
+		RTE_LOG(ERR, MEMPOOL, "Meta data overflow (%d > %d)!\n",
+			offset, BSM_MAX_META);
+		return 0;
+	}
+
+	RTE_LOG(INFO, MEMPOOL, "Meta data probed to be %d\n", offset);
+
+	return offset;
+}
+
+/* create the mempool */
+struct rte_mempool *
+rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
+		   unsigned cache_size __rte_unused, unsigned private_data_size,
+		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		   int socket_id, unsigned flags)
+{
+	unsigned bsm_size, meta_size, pool_size, total_size, i;
+	int mz_flags = RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY;
+	gxio_mpipe_buffer_size_enum_t size_code;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	gxio_mpipe_context_t *context;
+	const struct rte_memzone *mz;
+	int rc, instance, stack_idx;
+	struct rte_tailq_entry *te;
+	struct rte_mempool *mp;
+	void *obj, *buf;
+	char *mem;
+
+	instance = socket_id % rte_eal_mpipe_instances;
+	context = rte_eal_mpipe_context(instance);
+
+	if (!context) {
+		RTE_LOG(ERR, MEMPOOL, "No mPIPE context!\n");
+		return NULL;
+	}
+
+	/* check that we have an initialised tail queue */
+	if (RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,
+			rte_mempool_list) == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Uninitialized tailq list!\n");
+		return NULL;
+	}
+
+	/* We cannot operate without hugepages. */
+	if (!rte_eal_has_hugepages()) {
+		RTE_LOG(ERR, MEMPOOL, "Mempool requires hugepages!\n");
+		return NULL;
+	}
+
+	/* try to allocate tailq entry */
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+		return NULL;
+	}
+
+	bsm_size  = gxio_mpipe_calc_buffer_stack_bytes(n);
+	bsm_size  = RTE_ALIGN_CEIL(bsm_size, BSM_ALIGN_SIZE);
+
+	pool_size = sizeof(struct rte_mempool) + private_data_size;
+	pool_size = RTE_ALIGN_CEIL(pool_size, BSM_ALIGN_SIZE);
+
+	meta_size = __bsm_meta_size(elt_size, obj_init, obj_init_arg);
+	size_code =
+	gxio_mpipe_buffer_size_to_buffer_size_enum(elt_size - meta_size);
+	elt_size  = gxio_mpipe_buffer_size_enum_to_buffer_size(size_code) +
+			meta_size;
+
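+	/* One contiguous zone: alignment slack for the stack area, the
+	 * stack itself, the pool header, and all n elements. */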
+	total_size = (BSM_MEM_ALIGN + bsm_size + pool_size + n * elt_size);
+
+	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
+
+	rc = gxio_mpipe_alloc_buffer_stacks(context, 1, 0, 0);
+	if (rc < 0) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate buffer stack!\n");
+		rte_free(te);
+		return NULL;
+	}
+	stack_idx = rc;
+
+	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	mz = rte_memzone_reserve(mz_name, total_size, socket_id, mz_flags);
+	if (!mz) {
+		rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+		rte_free(te);
+		return NULL;
+	}
+
+	mem = mz->addr;
+	mp  = mz->addr;
+	memset(mp, 0, sizeof(*mp));
+	snprintf(mp->name, sizeof(mp->name), "%s", name);
+	mp->phys_addr = mz->phys_addr;
+	mp->mz = mz;
+	mp->size = n;
+	mp->flags = flags;
+	mp->meta_size = meta_size;
+	mp->elt_size = elt_size;
+	mp->private_data_size = private_data_size;
+	mp->stack_idx = stack_idx;
+	mp->size_code = size_code;
+	mp->context = context;
+	mp->instance = instance;
+
+	mem += pool_size;
+
+	mem = RTE_PTR_ALIGN_CEIL(mem, BSM_MEM_ALIGN);
+	rc = gxio_mpipe_init_buffer_stack(context, stack_idx, size_code,
+					  mem, bsm_size, 0);
+	if (rc < 0) {
+		rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+		RTE_LOG(ERR, MEMPOOL, "Cannot initialize buffer stack!\n");
+		rte_free(te);
+		return NULL;
+	}
+	mem += bsm_size;
+
+	mem = RTE_PTR_ALIGN_CEIL(mem, BSM_ALIGN_SIZE);
+	rc = __bsm_register_mem(mp, mem, n * elt_size);
+	if (rc < 0) {
+		rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+		RTE_LOG(ERR, MEMPOOL, "Cannot register buffer mem (%d)!\n", rc);
+		rte_free(te);
+		return NULL;
+	}
+
+	/* call the initializer */
+	if (mp_init)
+		mp_init(mp, mp_init_arg);
+
+	for (i = 0; i < n; i++) {
+		buf = RTE_PTR_ALIGN_CEIL(mem + meta_size, BSM_ALIGN_SIZE);
+		obj = __mempool_buf_to_obj(mp, buf);
+
+		if (obj_init)
+			obj_init(mp, obj_init_arg, obj, i);
+		rte_mempool_mp_put(mp, obj);
+		mem = RTE_PTR_ADD(obj, mp->elt_size);
+	}
+
+	te->data = (void *) mp;
+	RTE_EAL_TAILQ_INSERT_TAIL(RTE_TAILQ_MEMPOOL, rte_mempool_list, te);
+
+	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	return mp;
+}
+
+/* dump the status of the mempool on the console */
+void
+rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
+{
+	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
+	fprintf(f, "  flags=%x\n", mp->flags);
+	fprintf(f, "  count=%d\n", rte_mempool_count(mp));
+}
+
+/* dump the status of all mempools on the console */
+void
+rte_mempool_list_dump(FILE *f)
+{
+	const struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_mempool_list *mempool_list;
+
+	mempool_list = RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,
+						rte_mempool_list);
+	if (mempool_list == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	TAILQ_FOREACH(te, mempool_list, next) {
+		mp = (struct rte_mempool *) te->data;
+		rte_mempool_dump(f, mp);
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+}
+
+/* search a mempool from its name */
+struct rte_mempool *
+rte_mempool_lookup(const char *name)
+{
+	struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_mempool_list *mempool_list;
+
+	mempool_list = RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,
+						rte_mempool_list);
+	if (mempool_list == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return NULL;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	TAILQ_FOREACH(te, mempool_list, next) {
+		mp = (struct rte_mempool *) te->data;
+		if (strncmp(name, mp->name, RTE_MEMPOOL_NAMESIZE) == 0)
+			break;
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	if (te == NULL) {
+		rte_errno = ENOENT;
+		return NULL;
+	}
+
+	return mp;
+}
+
+void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
+		      void *arg)
+{
+	struct rte_tailq_entry *te = NULL;
+	struct rte_mempool_list *mempool_list;
+
+	mempool_list = RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,
+						rte_mempool_list);
+	if (mempool_list == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	TAILQ_FOREACH(te, mempool_list, next) {
+		(*func)((struct rte_mempool *) te->data, arg);
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+}
diff --git a/lib/librte_mempool_tile/rte_mempool.h b/lib/librte_mempool_tile/rte_mempool.h
new file mode 100644
index 0000000..ca893d5
--- /dev/null
+++ b/lib/librte_mempool_tile/rte_mempool.h
@@ -0,0 +1,634 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MEMPOOL_TILE_H_
+#define _RTE_MEMPOOL_TILE_H_
+
+/**
+ * @file
+ * RTE Mempool.
+ *
+ * A memory pool is an allocator of fixed-size objects. It is
+ * identified by its name. In this TILE-Gx implementation, free
+ * objects are stored on an mPIPE hardware buffer stack rather than a
+ * ring, and there is no per-lcore object cache.
+ *
+ * Objects owned by a mempool should never be added in another
+ * mempool. When an object is freed using rte_mempool_put() or
+ * equivalent, the object data is not modified; the user can save some
+ * meta-data in the object data and retrieve them when allocating a
+ * new object.
+ *
+ * Note: the mempool implementation is not preemptable. A lcore must
+ * not be interrupted by another task that uses the same mempool.
+ * Also, mempool functions must not be used outside the DPDK
+ * environment: for example, in the linuxapp environment, a thread
+ * that is not created by the EAL must not use mempools, since
+ * rte_lcore_id() will not return a correct value for it.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_lcore.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
+#include <rte_mpipe.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_MEMPOOL_NAMESIZE	32
+#define RTE_MEMPOOL_MZ_PREFIX	"TMP_"
+#define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
+#define	RTE_MEMPOOL_OBJ_NAME	RTE_MEMPOOL_MZ_FORMAT
+
+struct rte_mempool {
+	TAILQ_ENTRY(rte_mempool) next;   /**< Next in list. */
+	char         name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
+	uint32_t     flags;              /**< Flags of the mempool. */
+	uint32_t     size;               /**< Number of elements in pool. */
+	uint32_t     elt_size;           /**< Size of an element. */
+	uint32_t     meta_size;          /**< Size of buffer metadata. */
+	int          stack_idx;          /**< mPIPE buffer stack index. */
+	int          instance;           /**< mPIPE instance. */
+	gxio_mpipe_buffer_size_enum_t size_code; /**< mPIPE buffer size enum. */
+	gxio_mpipe_context_t *context;   /**< mPIPE context. */
+	uint32_t     private_data_size;  /**< Size of private data. */
+	phys_addr_t  phys_addr;          /**< Phys. addr. of mempool struct. */
+	const struct rte_memzone *mz;    /**< Memory zone. */
+
+	char         private_data[0] __rte_cache_aligned;
+}  __rte_cache_aligned;
+
+#define MEMPOOL_F_NO_SPREAD      0x0001 /**< Do not spread in memory. */
+#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
+#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
+#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+
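+/* The hardware stack stores 128-byte-aligned buffer addresses; users
+ * hold object (mbuf/ofpbuf) addresses that precede the buffer by
+ * meta_size bytes. These helpers convert between the two. */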
+static inline void *__attribute__((always_inline))
+__mempool_buf_to_obj(const struct rte_mempool *mp, void *buf)
+{
+	return RTE_PTR_SUB(buf, mp->meta_size);
+}
+
+static inline void *__attribute__((always_inline))
+__mempool_obj_to_buf(const struct rte_mempool *mp, void *obj)
+{
+	return RTE_PTR_ADD(obj, mp->meta_size);
+}
+
+/**
+ * @internal Put several objects back in the mempool; used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to store back in the mempool, must be strictly
+ *   positive.
+ */
+static inline void __attribute__((always_inline))
+__mempool_put_bulk(struct rte_mempool *mp, void **obj_table,
+		    unsigned n)
+{
+	unsigned i;
+	void *buf;
+
+	for (i = 0; i < n; i++) {
+		buf = __mempool_obj_to_buf(mp, obj_table[i]);
+		gxio_mpipe_push_buffer(mp->context, mp->stack_idx, buf);
+	}
+}
+
+
+/**
+ * Put several objects back in the mempool (multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the mempool from the obj_table.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_mp_put_bulk(struct rte_mempool *mp, void **obj_table,
+			unsigned n)
+{
+	__mempool_put_bulk(mp, obj_table, n);
+}
+
+/**
+ * Put several objects back in the mempool (NOT multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the mempool from obj_table.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_sp_put_bulk(struct rte_mempool *mp, void **obj_table,
+			unsigned n)
+{
+	__mempool_put_bulk(mp, obj_table, n);
+}
+
+/**
+ * Put several objects back in the mempool.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * mempool creation time (see flags).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the mempool from obj_table.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_put_bulk(struct rte_mempool *mp, void **obj_table,
+		     unsigned n)
+{
+	__mempool_put_bulk(mp, obj_table, n);
+}
+
+/**
+ * Put one object in the mempool (multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
+{
+	rte_mempool_mp_put_bulk(mp, &obj, 1);
+}
+
+/**
+ * Put one object back in the mempool (NOT multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
+{
+	rte_mempool_sp_put_bulk(mp, &obj, 1);
+}
+
+/**
+ * Put one object back in the mempool.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * mempool creation time (see flags).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_put(struct rte_mempool *mp, void *obj)
+{
+	rte_mempool_put_bulk(mp, &obj, 1);
+}
+
+/**
+ * @internal Get several objects from the mempool; used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to get, must be strictly positive.
+ * @return
+ *   - >=0: Success; number of objects supplied.
+ *   - -ENOENT: The buffer stack ran out of buffers; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
+		   unsigned n)
+{
+	unsigned i;
+	void *buf;
+
+	for (i = 0; i < n; i++) {
+		buf = gxio_mpipe_pop_buffer(mp->context, mp->stack_idx);
+		if (unlikely(!buf)) {
+			__mempool_put_bulk(mp, obj_table, i);
+			return -ENOENT;
+		}
+		obj_table[i] = __mempool_buf_to_obj(mp, buf);
+		rte_prefetch0(obj_table[i]);
+	}
+	return i;
+}
+
+/**
+ * Get several objects from the mempool (multi-consumers safe).
+ *
+ * Objects are popped directly from the mPIPE hardware buffer stack;
+ * there is no per-lcore cache. -ENOENT is returned when the stack
+ * runs out of buffers.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to get from mempool to obj_table.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return __mempool_get_bulk(mp, obj_table, n);
+}
+
+/**
+ * Get several objects from the mempool (NOT multi-consumers safe).
+ *
+ * Objects are popped directly from the mPIPE hardware buffer stack;
+ * there is no per-lcore cache. -ENOENT is returned when the stack
+ * runs out of buffers.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to get from the mempool to obj_table.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is
+ *     retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return __mempool_get_bulk(mp, obj_table, n);
+}
+
+/**
+ * Get several objects from the mempool.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * mempool creation time (see flags).
+ *
+ * Objects are popped directly from the mPIPE hardware buffer stack;
+ * there is no per-lcore cache. -ENOENT is returned when the stack
+ * runs out of buffers.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to get from the mempool to obj_table.
+ * @return
+ *   - 0: Success; objects taken
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	return __mempool_get_bulk(mp, obj_table, n);
+}
+
+/**
+ * Get one object from the mempool (multi-consumers safe).
+ *
+ * Objects are popped directly from the mPIPE hardware buffer stack;
+ * there is no per-lcore cache. -ENOENT is returned when the stack
+ * runs out of buffers.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
+{
+	return rte_mempool_mc_get_bulk(mp, obj_p, 1);
+}
+
+/**
+ * Get one object from the mempool (NOT multi-consumers safe).
+ *
+ * Objects are popped directly from the mPIPE hardware buffer stack;
+ * there is no per-lcore cache. -ENOENT is returned when the stack
+ * runs out of buffers.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
+{
+	return rte_mempool_sc_get_bulk(mp, obj_p, 1);
+}
+
+/**
+ * Get one object from the mempool.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behavior that was specified at
+ * mempool creation (see flags).
+ *
+ * Objects are popped directly from the mPIPE hardware buffer stack;
+ * there is no per-lcore cache. -ENOENT is returned when the stack
+ * runs out of buffers.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_get(struct rte_mempool *mp, void **obj_p)
+{
+	return rte_mempool_get_bulk(mp, obj_p, 1);
+}
+
+/**
+ * An object constructor callback function for mempool.
+ *
+ * Arguments are the mempool, the opaque pointer given by the user in
+ * rte_mempool_create(), the pointer to the element and the index of
+ * the element in the pool.
+ */
+typedef void (rte_mempool_obj_ctor_t)(struct rte_mempool *, void *,
+				      void *, unsigned);
+
+/**
+ * A mempool constructor callback function.
+ *
+ * Arguments are the mempool and the opaque pointer given by the user in
+ * rte_mempool_create().
+ */
+typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
+
+/**
+ * Creates a new mempool named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory. The
+ * pool contains n elements of elt_size. Its size is set to n.
+ * All elements of the mempool are allocated together with the mempool header,
+ * in one physically contiguous chunk of memory.
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of elements in the mempool. The optimum size (in terms of
+ *   memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param elt_size
+ *   The size of each element.
+ * @param cache_size
+ *   Ignored by this mPIPE implementation: the hardware buffer stack
+ *   is shared directly by all lcores, so no per-lcore object cache is
+ *   maintained. (In the generic mempool library, a non-zero
+ *   cache_size enables such a cache.)
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param mp_init
+ *   A function pointer that is called for initialization of the pool,
+ *   before object initialization. The user can initialize the private
+ *   data in this function if needed. This parameter can be NULL if
+ *   not needed.
+ * @param mp_init_arg
+ *   An opaque pointer to data that can be used in the mempool
+ *   constructor function.
+ * @param obj_init
+ *   A function pointer that is called for each object at
+ *   initialization of the pool. The user can set some meta data in
+ *   objects if needed. This parameter can be NULL if not needed.
+ *   The obj_init() function takes the mempool pointer, the init_arg,
+ *   the object pointer and the object number as parameters.
+ * @param obj_init_arg
+ *   An opaque pointer to data that can be used as an argument for
+ *   each call to the object constructor function.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   The *flags* arguments is an OR of following flags:
+ *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
+ *     between channels in RAM: the pool allocator will add padding
+ *     between objects depending on the hardware configuration. See
+ *     Memory alignment constraints for details. If this flag is set,
+ *     the allocator will just align them to a cache line.
+ *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *     cache-aligned. This flag removes this constraint, and no
+ *     padding will be present between objects. This flag implies
+ *     MEMPOOL_F_NO_SPREAD.
+ *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     when using rte_mempool_put() or rte_mempool_put_bulk() is
+ *     "single-producer". Otherwise, it is "multi-producers".
+ *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *     when using rte_mempool_get() or rte_mempool_get_bulk() is
+ *     "single-consumer". Otherwise, it is "multi-consumers".
+ * @return
+ *   The pointer to the new allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list
+ *    - EINVAL - cache size provided is too large
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_mempool *
+rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
+		   unsigned cache_size, unsigned private_data_size,
+		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		   int socket_id, unsigned flags);
+
+/**
+ * Search a mempool from its name
+ *
+ * @param name
+ *   The name of the mempool.
+ * @return
+ *   The pointer to the mempool matching the name, or NULL if not found.
+ *   NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - ENOENT - required entry not available to return.
+ *
+ */
+struct rte_mempool *rte_mempool_lookup(const char *name);
+
+/**
+ * Dump the status of the mempool to the console.
+ *
+ * @param f
+ *   A pointer to a file for output
+ * @param mp
+ *   A pointer to the mempool structure.
+ */
+void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
+
+/**
+ * Dump the status of all mempools on the console
+ *
+ * @param f
+ *   A pointer to a file for output
+ */
+void rte_mempool_list_dump(FILE *f);
+
+/**
+ * Walk list of all memory pools
+ *
+ * @param func
+ *   Iterator function
+ * @param arg
+ *   Argument passed to iterator
+ */
+void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
+		      void *arg);
+
+
+/**
+ * Return the physical address of elt, which is an element of the pool mp.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param elt
+ *   A pointer (virtual address) to the element of the pool.
+ * @return
+ *   The physical address of the elt element.
+ */
+static inline phys_addr_t __attribute__((always_inline))
+rte_mempool_virt2phy(const struct rte_mempool *mp, const void *elt)
+{
+	uintptr_t off = (const char *)elt - (const char *)mp;
+
+	return mp->phys_addr + off;
+}
+
+/**
+ * Return a pointer to the private data in an mempool structure.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   A pointer to the private data.
+ */
+static inline void *rte_mempool_get_priv(struct rte_mempool *mp)
+{
+	return mp->private_data;
+}
+
+/**
+ * Return the number of entries in the mempool.
+ *
+ * This implementation reads the count from the mPIPE buffer stack
+ * hardware, so it should not be used in a data path, but only for
+ * debug purposes.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of entries in the mempool.
+ */
+static inline unsigned
+rte_mempool_count(const struct rte_mempool *mp)
+{
+	return gxio_mpipe_get_buffer_count(mp->context, mp->stack_idx);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MEMPOOL_TILE_H_ */