[dpdk-dev,2/6] mempool: add stack (lifo) based external mempool handler

Message ID 1455634095-4183-3-git-send-email-david.hunt@intel.com (mailing list archive)
State Superseded, archived

Commit Message

Hunt, David Feb. 16, 2016, 2:48 p.m. UTC
  adds a simple stack based mempool handler

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile            |   2 +-
 lib/librte_mempool/rte_mempool.c       |   4 +-
 lib/librte_mempool/rte_mempool.h       |   1 +
 lib/librte_mempool/rte_mempool_stack.c | 164 +++++++++++++++++++++++++++++++++
 4 files changed, 169 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_mempool/rte_mempool_stack.c
  

Comments

Olivier Matz Feb. 19, 2016, 1:31 p.m. UTC | #1
Hi David,

On 02/16/2016 03:48 PM, David Hunt wrote:
> adds a simple stack based mempool handler
> 
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
>  lib/librte_mempool/Makefile            |   2 +-
>  lib/librte_mempool/rte_mempool.c       |   4 +-
>  lib/librte_mempool/rte_mempool.h       |   1 +
>  lib/librte_mempool/rte_mempool_stack.c | 164 +++++++++++++++++++++++++++++++++
>  4 files changed, 169 insertions(+), 2 deletions(-)
>  create mode 100644 lib/librte_mempool/rte_mempool_stack.c
> 

I don't get what is the purpose of this handler. Is it an example
or is it something that could be useful for dpdk applications?

If it's an example, we should find a way to put the code outside
the librte_mempool library, maybe in the test program. I see there
is also a "custom handler". Do we really need to have both?


Regards,
Olivier
  
Hunt, David Feb. 29, 2016, 11:04 a.m. UTC | #2
On 2/19/2016 1:31 PM, Olivier MATZ wrote:
> Hi David,
>
> On 02/16/2016 03:48 PM, David Hunt wrote:
>> adds a simple stack based mempool handler
>>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>>   lib/librte_mempool/Makefile            |   2 +-
>>   lib/librte_mempool/rte_mempool.c       |   4 +-
>>   lib/librte_mempool/rte_mempool.h       |   1 +
>>   lib/librte_mempool/rte_mempool_stack.c | 164 +++++++++++++++++++++++++++++++++
>>   4 files changed, 169 insertions(+), 2 deletions(-)
>>   create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>>
> I don't get what is the purpose of this handler. Is it an example
> or is it something that could be useful for dpdk applications?
>
> If it's an example, we should find a way to put the code outside
> the librte_mempool library, maybe in the test program. I see there
> is also a "custom handler". Do we really need to have both?
They are both example handlers. I agree that we could reduce down to 
one, and since the 'custom' handler has autotests, I would suggest we 
keep that one.

The next question is where it should live. I agree that it's not ideal 
to have example code living in the same directory as the mempool 
library, but they are an integral part of the library itself. How about 
creating a handlers sub-directory? We could then keep all additional and 
sample handlers in there, away from the built-in handlers. Also, seeing 
as the handler code is intended to be part of the library, I think 
moving it out to the examples directory may confuse matters further.

Regards,
David.
  
Olivier Matz March 4, 2016, 9:04 a.m. UTC | #3
Hi David,

On 02/29/2016 12:04 PM, Hunt, David wrote:
> 
> On 2/19/2016 1:31 PM, Olivier MATZ wrote:
>> Hi David,
>>
>> On 02/16/2016 03:48 PM, David Hunt wrote:
>>> adds a simple stack based mempool handler
>>>
>>> Signed-off-by: David Hunt <david.hunt@intel.com>
>>> ---
>>>   lib/librte_mempool/Makefile            |   2 +-
>>>   lib/librte_mempool/rte_mempool.c       |   4 +-
>>>   lib/librte_mempool/rte_mempool.h       |   1 +
>>>   lib/librte_mempool/rte_mempool_stack.c | 164
>>> +++++++++++++++++++++++++++++++++
>>>   4 files changed, 169 insertions(+), 2 deletions(-)
>>>   create mode 100644 lib/librte_mempool/rte_mempool_stack.c
>>>
>> I don't get what is the purpose of this handler. Is it an example
>> or is it something that could be useful for dpdk applications?
>>
>> If it's an example, we should find a way to put the code outside
>> the librte_mempool library, maybe in the test program. I see there
>> is also a "custom handler". Do we really need to have both?
> They are both example handlers. I agree that we could reduce down to
> one, and since the 'custom' handler has autotests, I would suggest we
> keep that one.

ok

> The next question is where it should live. I agree that it's not ideal
> to have example code living in the same directory as the mempool
> library, but they are an integral part of the library itself. How about
> creating a handlers sub-directory? We could then keep all additional and
> sample handlers in there, away from the built-in handlers. Also, seeing
> as the handler code is intended to be part of the library, I think
> moving it out to the examples directory may confuse matters further.

I really don't think example code should go in the library. Either it
should go in dpdk/examples/ or in dpdk/app/test*.

From your initial description: "The External Mempool Manager is an
extension to the mempool API that allows users to add and use an
external mempool manager, which allows external memory subsystems such
as external hardware memory management systems and software based
memory allocators to be used with DPDK."

Can we find hardware where the external mempool manager is required?
That would be the best example, I think.

Regards,
Olivier
  
Venkatesan, Venky March 8, 2016, 8:45 p.m. UTC | #4
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> Sent: Friday, February 19, 2016 5:31 AM
> To: Hunt, David <david.hunt@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/6] mempool: add stack (lifo) based
> external mempool handler
> 
> Hi David,
> 
> On 02/16/2016 03:48 PM, David Hunt wrote:
> > adds a simple stack based mempool handler
> >
> > Signed-off-by: David Hunt <david.hunt@intel.com>
> > ---
> >  lib/librte_mempool/Makefile            |   2 +-
> >  lib/librte_mempool/rte_mempool.c       |   4 +-
> >  lib/librte_mempool/rte_mempool.h       |   1 +
> >  lib/librte_mempool/rte_mempool_stack.c | 164
> > +++++++++++++++++++++++++++++++++
> >  4 files changed, 169 insertions(+), 2 deletions(-)  create mode
> > 100644 lib/librte_mempool/rte_mempool_stack.c
> >
> 
> I don't get what is the purpose of this handler. Is it an example or is it
> something that could be useful for dpdk applications?
> 
This is actually something that is useful for pipelining apps, where the mempool cache doesn't really work - for example, where we have one core doing Rx (and alloc) and another core doing Tx (and return). In such a case, the mempool ring simply cycles through all the mbufs, resulting in an LLC miss on every mbuf allocated when the number of mbufs is large. A stack recycles buffers more effectively in this case.

> If it's an example, we should find a way to put the code outside the
> librte_mempool library, maybe in the test program. I see there is also a
> "custom handler". Do we really need to have both?
> 
> 
> Regards,
> Olivier
>
  
Olivier Matz March 9, 2016, 2:53 p.m. UTC | #5
Hi,

>> Hi David,
>>
>> On 02/16/2016 03:48 PM, David Hunt wrote:
>>> adds a simple stack based mempool handler
>>>
>>> Signed-off-by: David Hunt <david.hunt@intel.com>
>>> ---
>>>  lib/librte_mempool/Makefile            |   2 +-
>>>  lib/librte_mempool/rte_mempool.c       |   4 +-
>>>  lib/librte_mempool/rte_mempool.h       |   1 +
>>>  lib/librte_mempool/rte_mempool_stack.c | 164
>>> +++++++++++++++++++++++++++++++++
>>>  4 files changed, 169 insertions(+), 2 deletions(-)  create mode
>>> 100644 lib/librte_mempool/rte_mempool_stack.c
>>>
>>
>> I don't get what is the purpose of this handler. Is it an example or is it
>> something that could be useful for dpdk applications?
>>
> This is actually something that is useful for pipelining apps,
> where the mempool cache doesn't really work - for example, where
> we have one core doing Rx (and alloc), and another core doing
> Tx (and return). In such a case, the mempool ring simply cycles
> through all the mbufs, resulting in an LLC miss on every mbuf
> allocated when the number of mbufs is large. A stack recycles
> buffers more effectively in this case.
> 

While I agree on the principle, if this is the case the commit should
come with an explanation of when this handler should be used, a
small test report showing the performance numbers, and probably an
example app.

Also, I think there is some room for optimization; in particular, I
don't think the spinlock will scale to many cores.

Regards,
Olivier
  

Patch

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index aeaffd1..d795b48 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -43,7 +43,7 @@  LIBABIVER := 1
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_default.c
-
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool_stack.c
 ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
 endif
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index a577a3e..44bc92f 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -672,7 +672,9 @@  rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
 	 * examine the
 	 * flags to set the correct index into the handler table.
 	 */
-	if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+	if (flags & MEMPOOL_F_USE_STACK)
+		sprintf(handler_name, "%s", "stack");
+	else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
 		sprintf(handler_name, "%s", "ring_sp_sc");
 	else if (flags & MEMPOOL_F_SP_PUT)
 		sprintf(handler_name, "%s", "ring_sp_mc");
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3705fbd..8d8201f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -303,6 +303,7 @@  struct rte_mempool {
 #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
 #define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+#define MEMPOOL_F_USE_STACK      0x0010 /**< Use a stack for the common pool. */
 #define MEMPOOL_F_INT_HANDLER    0x0020 /**< Using internal mempool handler */
 
 
diff --git a/lib/librte_mempool/rte_mempool_stack.c b/lib/librte_mempool/rte_mempool_stack.c
new file mode 100644
index 0000000..d341793
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_stack.c
@@ -0,0 +1,164 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <string.h>
+
+#include "rte_mempool_internal.h"
+
+struct rte_mempool_common_stack {
+	/* Spinlock to protect access */
+	rte_spinlock_t sl;
+
+	uint32_t size;
+	uint32_t len;
+	void *objs[];
+};
+
+static void *
+common_stack_alloc(struct rte_mempool *mp,
+		const char *name, unsigned n, int socket_id, unsigned flags)
+{
+	struct rte_mempool_common_stack *s;
+	char stack_name[RTE_RING_NAMESIZE];
+
+	int size = sizeof(*s) + (n+16)*sizeof(void *);
+
+	flags = flags;
+
+	/* Allocate our local memory structure */
+	snprintf(stack_name, sizeof(stack_name), "%s-common-stack", name);
+	s = rte_zmalloc_socket(stack_name,
+					size, RTE_CACHE_LINE_SIZE, socket_id);
+	if (s == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
+		return NULL;
+	}
+
+	/* And the spinlock we use to protect access */
+	rte_spinlock_init(&s->sl);
+
+	s->size = n;
+	mp->rt_pool = (void *) s;
+	mp->handler_idx = rte_get_mempool_handler_idx("stack");
+
+	return s;
+}
+
+static int common_stack_put(void *p, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_common_stack *s =
+		(struct rte_mempool_common_stack *)p;
+	void **cache_objs;
+	unsigned index;
+
+	/* Acquire lock */
+	rte_spinlock_lock(&s->sl);
+	cache_objs = &s->objs[s->len];
+
+	/* Is there sufficient space in the stack ? */
+	if ((s->len + n) > s->size) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOENT;
+	}
+
+	/* Add elements back into the cache */
+	for (index = 0; index < n; ++index, obj_table++)
+		cache_objs[index] = *obj_table;
+
+	s->len += n;
+
+	rte_spinlock_unlock(&s->sl);
+	return 0;
+}
+
+static int common_stack_get(void *p, void **obj_table,
+		unsigned n)
+{
+	struct rte_mempool_common_stack *s =
+		(struct rte_mempool_common_stack *)p;
+	void **cache_objs;
+	unsigned index, len;
+
+	/* Acquire lock */
+	rte_spinlock_lock(&s->sl);
+
+	if (unlikely(n > s->len)) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOENT;
+	}
+
+	cache_objs = s->objs;
+
+	for (index = 0, len = s->len - 1; index < n;
+					++index, len--, obj_table++)
+		*obj_table = cache_objs[len];
+
+	s->len -= n;
+	rte_spinlock_unlock(&s->sl);
+	return n;
+}
+
+static unsigned common_stack_get_count(void *p)
+{
+	struct rte_mempool_common_stack *s =
+		(struct rte_mempool_common_stack *)p;
+
+	return s->len;
+}
+
+static int
+common_stack_free(struct rte_mempool *mp)
+{
+	struct rte_mempool_common_stack *s;
+
+	s = mp->rt_pool;
+
+	rte_free(s);
+
+	return 0;
+}
+
+static struct rte_mempool_handler handler_stack = {
+	.name = "stack",
+	.alloc = common_stack_alloc,
+	.put = common_stack_put,
+	.get = common_stack_get,
+	.get_count = common_stack_get_count,
+	.free = common_stack_free
+};
+
+REGISTER_MEMPOOL_HANDLER(handler_stack);