[dpdk-dev,v3,1/2] mempool: add stack (lifo) mempool handler
Commit Message
This mempool handler is useful for pipelining apps, where the mempool
cache doesn't really work - for example, where we have one core doing
Rx (and alloc) and another core doing Tx (and return). In such a case,
the mempool ring simply cycles through all the mbufs, resulting in an
LLC miss on every mbuf allocated when the number of mbufs is large. A
stack recycles buffers more effectively in this case.
Signed-off-by: David Hunt <david.hunt@intel.com>
---
lib/librte_mempool/Makefile | 1 +
lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
2 files changed, 146 insertions(+)
create mode 100644 lib/librte_mempool/rte_mempool_stack.c
Comments
On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> This is a mempool handler that is useful for pipelining apps, where
> the mempool cache doesn't really work - example, where we have one
> core doing rx (and alloc), and another core doing Tx (and return). In
> such a case, the mempool ring simply cycles through all the mbufs,
> resulting in a LLC miss on every mbuf allocated when the number of
> mbufs is large. A stack recycles buffers more effectively in this
> case.
>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
> lib/librte_mempool/Makefile | 1 +
> lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
How about moving new mempool handlers to drivers/mempool? (or similar).
In the future, adding HW-specific handlers in lib/librte_mempool/ may be a bad idea.
Jerin
2016-06-20 18:55, Jerin Jacob:
> On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > This is a mempool handler that is useful for pipelining apps, where
> > the mempool cache doesn't really work - example, where we have one
> > core doing rx (and alloc), and another core doing Tx (and return). In
> > such a case, the mempool ring simply cycles through all the mbufs,
> > resulting in a LLC miss on every mbuf allocated when the number of
> > mbufs is large. A stack recycles buffers more effectively in this
> > case.
> >
> > Signed-off-by: David Hunt <david.hunt@intel.com>
> > ---
> > lib/librte_mempool/Makefile | 1 +
> > lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
>
> How about moving new mempool handlers to drivers/mempool? (or similar).
> In future, adding HW specific handlers in lib/librte_mempool/ may be bad idea.
You're probably right.
However, we need to check and understand what a HW mempool handler will be.
I imagine the first of them will have to move the handlers into drivers/.
Jerin, are you volunteering?
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> Sent: Monday, June 20, 2016 2:54 PM
> To: Jerin Jacob
> Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
>
> 2016-06-20 18:55, Jerin Jacob:
> > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > This is a mempool handler that is useful for pipelining apps, where
> > > the mempool cache doesn't really work - example, where we have one
> > > core doing rx (and alloc), and another core doing Tx (and return). In
> > > such a case, the mempool ring simply cycles through all the mbufs,
> > > resulting in a LLC miss on every mbuf allocated when the number of
> > > mbufs is large. A stack recycles buffers more effectively in this
> > > case.
> > >
> > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > ---
> > > lib/librte_mempool/Makefile | 1 +
> > > lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> >
> > How about moving new mempool handlers to drivers/mempool? (or similar).
> > In future, adding HW specific handlers in lib/librte_mempool/ may be bad idea.
>
> You're probably right.
> However we need to check and understand what a HW mempool handler will be.
> I imagine the first of them will have to move handlers in drivers/
Does it mean we'll have to move mbuf into drivers too?
Again, other libs use mempool too.
Why not just lib/librte_mempool/arch/<arch_specific_dir_here>?
Konstantin
> Jerin, are you volunteer?
On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > Sent: Monday, June 20, 2016 2:54 PM
> > To: Jerin Jacob
> > Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> >
> > 2016-06-20 18:55, Jerin Jacob:
> > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > This is a mempool handler that is useful for pipelining apps, where
> > > > the mempool cache doesn't really work - example, where we have one
> > > > core doing rx (and alloc), and another core doing Tx (and return). In
> > > > such a case, the mempool ring simply cycles through all the mbufs,
> > > > resulting in a LLC miss on every mbuf allocated when the number of
> > > > mbufs is large. A stack recycles buffers more effectively in this
> > > > case.
> > > >
> > > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > > ---
> > > > lib/librte_mempool/Makefile | 1 +
> > > > lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> > >
> > > How about moving new mempool handlers to drivers/mempool? (or similar).
> > > In future, adding HW specific handlers in lib/librte_mempool/ may be bad idea.
> >
> > You're probably right.
> > However we need to check and understand what a HW mempool handler will be.
> > I imagine the first of them will have to move handlers in drivers/
>
> Does it mean it we'll have to move mbuf into drivers too?
> Again other libs do use mempool too.
> Why not just lib/librte_mempool/arch/<arch_specific_dir_here>
> ?
I was proposing to move only the new
handler (lib/librte_mempool/rte_mempool_stack.c), not any library or any
other common code.
Just like the DPDK crypto device: even if it is a software implementation,
it is better to move it to drivers/crypto instead of lib/librte_cryptodev.
"lib/librte_mempool/arch/" is not the correct place, as it is platform
specific, not architecture specific, and a HW mempool device may be a PCIe
or platform device.
> Konstantin
>
>
> > Jerin, are you volunteer?
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, June 20, 2016 3:22 PM
> To: Ananyev, Konstantin
> Cc: Thomas Monjalon; dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
>
> On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > Sent: Monday, June 20, 2016 2:54 PM
> > > To: Jerin Jacob
> > > Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > >
> > > 2016-06-20 18:55, Jerin Jacob:
> > > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > > This is a mempool handler that is useful for pipelining apps, where
> > > > > the mempool cache doesn't really work - example, where we have one
> > > > > core doing rx (and alloc), and another core doing Tx (and return). In
> > > > > such a case, the mempool ring simply cycles through all the mbufs,
> > > > > resulting in a LLC miss on every mbuf allocated when the number of
> > > > > mbufs is large. A stack recycles buffers more effectively in this
> > > > > case.
> > > > >
> > > > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > > > ---
> > > > > lib/librte_mempool/Makefile | 1 +
> > > > > lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> > > >
> > > > How about moving new mempool handlers to drivers/mempool? (or similar).
> > > > In future, adding HW specific handlers in lib/librte_mempool/ may be bad idea.
> > >
> > > You're probably right.
> > > However we need to check and understand what a HW mempool handler will be.
> > > I imagine the first of them will have to move handlers in drivers/
> >
> > Does it mean it we'll have to move mbuf into drivers too?
> > Again other libs do use mempool too.
> > Why not just lib/librte_mempool/arch/<arch_specific_dir_here>
> > ?
>
> I was proposing only to move only the new
> handler(lib/librte_mempool/rte_mempool_stack.c). Not any library or any
> other common code.
>
> Just like DPDK crypto device, Even if it is software implementation its
> better to move in driver/crypto instead of lib/librte_cryptodev
>
> "lib/librte_mempool/arch/" is not correct place as it is platform specific
> not architecture specific and HW mempool device may be PCIe or platform
> device.
Ok, but why does rte_mempool_stack.c have to be moved?
I can hardly imagine it is 'platform specific'.
From my understanding it is generic code.
Konstantin
>
> > Konstantin
> >
> >
> > > Jerin, are you volunteer?
On Mon, Jun 20, 2016 at 05:56:40PM +0000, Ananyev, Konstantin wrote:
>
>
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Monday, June 20, 2016 3:22 PM
> > To: Ananyev, Konstantin
> > Cc: Thomas Monjalon; dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> >
> > On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > > Sent: Monday, June 20, 2016 2:54 PM
> > > > To: Jerin Jacob
> > > > Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > > >
> > > > 2016-06-20 18:55, Jerin Jacob:
> > > > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > > > This is a mempool handler that is useful for pipelining apps, where
> > > > > > the mempool cache doesn't really work - example, where we have one
> > > > > > core doing rx (and alloc), and another core doing Tx (and return). In
> > > > > > such a case, the mempool ring simply cycles through all the mbufs,
> > > > > > resulting in a LLC miss on every mbuf allocated when the number of
> > > > > > mbufs is large. A stack recycles buffers more effectively in this
> > > > > > case.
> > > > > >
> > > > > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > > > > ---
> > > > > > lib/librte_mempool/Makefile | 1 +
> > > > > > lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> > > > >
> > > > > How about moving new mempool handlers to drivers/mempool? (or similar).
> > > > > In future, adding HW specific handlers in lib/librte_mempool/ may be bad idea.
> > > >
> > > > You're probably right.
> > > > However we need to check and understand what a HW mempool handler will be.
> > > > I imagine the first of them will have to move handlers in drivers/
> > >
> > > Does it mean it we'll have to move mbuf into drivers too?
> > > Again other libs do use mempool too.
> > > Why not just lib/librte_mempool/arch/<arch_specific_dir_here>
> > > ?
> >
> > I was proposing only to move only the new
> > handler(lib/librte_mempool/rte_mempool_stack.c). Not any library or any
> > other common code.
> >
> > Just like DPDK crypto device, Even if it is software implementation its
> > better to move in driver/crypto instead of lib/librte_cryptodev
> >
> > "lib/librte_mempool/arch/" is not correct place as it is platform specific
> > not architecture specific and HW mempool device may be PCIe or platform
> > device.
>
> Ok, but why rte_mempool_stack.c has to be moved?
I just thought of having all the mempool handlers in one place.
We can't move all HW mempool handlers to lib/librte_mempool/.
Jerin
> I can hardly imagine it is a 'platform sepcific'.
> From my understanding it is a generic code.
> Konstantin
>
>
> >
> > > Konstantin
> > >
> > >
> > > > Jerin, are you volunteer?
On Mon, Jun 20, 2016 at 03:54:20PM +0200, Thomas Monjalon wrote:
> 2016-06-20 18:55, Jerin Jacob:
> > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > This is a mempool handler that is useful for pipelining apps, where
> > > the mempool cache doesn't really work - example, where we have one
> > > core doing rx (and alloc), and another core doing Tx (and return). In
> > > such a case, the mempool ring simply cycles through all the mbufs,
> > > resulting in a LLC miss on every mbuf allocated when the number of
> > > mbufs is large. A stack recycles buffers more effectively in this
> > > case.
> > >
> > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > ---
> > > lib/librte_mempool/Makefile | 1 +
> > > lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> >
> > How about moving new mempool handlers to drivers/mempool? (or similar).
> > In future, adding HW specific handlers in lib/librte_mempool/ may be bad idea.
>
> You're probably right.
> However we need to check and understand what a HW mempool handler will be.
> I imagine the first of them will have to move handlers in drivers/
> Jerin, are you volunteer?
Thomas,
We are planning to upstream a HW-based mempool handler.
Not sure about the timelines. We will take this up as part of the HW-based
mempool upstreaming if no one takes it before that.
Jerin
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, June 21, 2016 4:35 AM
> To: Ananyev, Konstantin
> Cc: Thomas Monjalon; dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
>
> On Mon, Jun 20, 2016 at 05:56:40PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Monday, June 20, 2016 3:22 PM
> > > To: Ananyev, Konstantin
> > > Cc: Thomas Monjalon; dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > >
> > > On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > > > Sent: Monday, June 20, 2016 2:54 PM
> > > > > To: Jerin Jacob
> > > > > Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > > > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > > > >
> > > > > 2016-06-20 18:55, Jerin Jacob:
> > > > > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > > > > This is a mempool handler that is useful for pipelining apps, where
> > > > > > > the mempool cache doesn't really work - example, where we have one
> > > > > > > core doing rx (and alloc), and another core doing Tx (and return). In
> > > > > > > such a case, the mempool ring simply cycles through all the mbufs,
> > > > > > > resulting in a LLC miss on every mbuf allocated when the number of
> > > > > > > mbufs is large. A stack recycles buffers more effectively in this
> > > > > > > case.
> > > > > > >
> > > > > > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > > > > > ---
> > > > > > > lib/librte_mempool/Makefile | 1 +
> > > > > > > lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> > > > > >
> > > > > > How about moving new mempool handlers to drivers/mempool? (or similar).
> > > > > > In future, adding HW specific handlers in lib/librte_mempool/ may be bad idea.
> > > > >
> > > > > You're probably right.
> > > > > However we need to check and understand what a HW mempool handler will be.
> > > > > I imagine the first of them will have to move handlers in drivers/
> > > >
> > > > Does it mean it we'll have to move mbuf into drivers too?
> > > > Again other libs do use mempool too.
> > > > Why not just lib/librte_mempool/arch/<arch_specific_dir_here>
> > > > ?
> > >
> > > I was proposing only to move only the new
> > > handler(lib/librte_mempool/rte_mempool_stack.c). Not any library or any
> > > other common code.
> > >
> > > Just like DPDK crypto device, Even if it is software implementation its
> > > better to move in driver/crypto instead of lib/librte_cryptodev
> > >
> > > "lib/librte_mempool/arch/" is not correct place as it is platform specific
> > > not architecture specific and HW mempool device may be PCIe or platform
> > > device.
> >
> > Ok, but why rte_mempool_stack.c has to be moved?
>
> Just thought of having all the mempool handlers at one place.
> We can't move all HW mempool handlers at lib/librte_mempool/
Yep, but from your previous mail I thought we might have specific ones
for specific devices, no?
If so, why put them all in one place? Why not put them in
drivers/xxx_dev/xxx_mempool.[h,c]
and keep the generic ones in lib/librte_mempool?
Konstantin
>
> Jerin
>
> > I can hardly imagine it is a 'platform sepcific'.
> > From my understanding it is a generic code.
> > Konstantin
> >
> >
> > >
> > > > Konstantin
> > > >
> > > >
> > > > > Jerin, are you volunteer?
Hi,
On 06/21/2016 11:28 AM, Ananyev, Konstantin wrote:
>>>> I was proposing only to move only the new
>>>> handler(lib/librte_mempool/rte_mempool_stack.c). Not any library or any
>>>> other common code.
>>>>
>>>> Just like DPDK crypto device, Even if it is software implementation its
>>>> better to move in driver/crypto instead of lib/librte_cryptodev
>>>>
>>>> "lib/librte_mempool/arch/" is not correct place as it is platform specific
>>>> not architecture specific and HW mempool device may be PCIe or platform
>>>> device.
>>>
>>> Ok, but why rte_mempool_stack.c has to be moved?
>>
>> Just thought of having all the mempool handlers at one place.
>> We can't move all HW mempool handlers at lib/librte_mempool/
>
> Yep, but from your previous mail I thought we might have specific ones
> for specific devices, no?
> If so, why to put them in one place, why just not in:
> Drivers/xxx_dev/xxx_mempool.[h,c]
> ?
> And keep generic ones in lib/librte_mempool
> ?
I think all drivers (generic or not) should be in the same place for
consistency.
I'm not sure having them in lib/librte_mempool is really a problem today,
but once hardware-dependent handlers are pushed, we may move all of them
into drivers/mempool, because I think we should avoid having hw-specific
code in lib/.
I don't think it will cause ABI/API breakage, since the user always
talks to the generic mempool API.
Regards
Olivier
@@ -44,6 +44,7 @@ LIBABIVER := 2
SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool.c
SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_ops.c
SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_ring.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_stack.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
new file mode 100644
@@ -0,0 +1,145 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+struct rte_mempool_stack {
+	rte_spinlock_t sl;
+
+	uint32_t size;
+	uint32_t len;
+	void *objs[];
+};
+
+static int
+stack_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_stack *s;
+	unsigned n = mp->size;
+	/* room for the pool objects plus a little headroom */
+	size_t size = sizeof(*s) + (n + 16) * sizeof(void *);
+
+	/* Allocate our local memory structure */
+	s = rte_zmalloc_socket("mempool-stack",
+			size,
+			RTE_CACHE_LINE_SIZE,
+			mp->socket_id);
+	if (s == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
+		return -ENOMEM;
+	}
+
+	rte_spinlock_init(&s->sl);
+
+	s->size = n;
+	mp->pool_data = s;
+
+	return 0;
+}
+
+static int stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_stack *s = mp->pool_data;
+	void **cache_objs;
+	unsigned index;
+
+	rte_spinlock_lock(&s->sl);
+	cache_objs = &s->objs[s->len];
+
+	/* Is there sufficient space in the stack? */
+	if ((s->len + n) > s->size) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOBUFS;
+	}
+
+	/* Add elements back into the cache */
+	for (index = 0; index < n; ++index, obj_table++)
+		cache_objs[index] = *obj_table;
+
+	s->len += n;
+
+	rte_spinlock_unlock(&s->sl);
+	return 0;
+}
+
+static int stack_dequeue(struct rte_mempool *mp, void **obj_table,
+		unsigned n)
+{
+	struct rte_mempool_stack *s = mp->pool_data;
+	void **cache_objs;
+	unsigned index, len;
+
+	rte_spinlock_lock(&s->sl);
+
+	if (unlikely(n > s->len)) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOENT;
+	}
+
+	cache_objs = s->objs;
+
+	/* Pop from the top of the stack: most recently freed first */
+	for (index = 0, len = s->len - 1; index < n;
+			++index, len--, obj_table++)
+		*obj_table = cache_objs[len];
+
+	s->len -= n;
+	rte_spinlock_unlock(&s->sl);
+	/* mempool ops dequeue returns 0 on success */
+	return 0;
+}
+
+static unsigned
+stack_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_stack *s = mp->pool_data;
+
+	return s->len;
+}
+
+static void
+stack_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static struct rte_mempool_ops ops_stack = {
+	.name = "stack",
+	.alloc = stack_alloc,
+	.free = stack_free,
+	.enqueue = stack_enqueue,
+	.dequeue = stack_dequeue,
+	.get_count = stack_get_count
+};
+
+MEMPOOL_REGISTER_OPS(ops_stack);