From patchwork Mon Jun 20 13:08:10 2016
From: David Hunt <david.hunt@intel.com>
To: dev@dpdk.org
Cc: olivier.matz@6wind.com, viktorin@rehivetech.com,
 jerin.jacob@caviumnetworks.com, shreyansh.jain@nxp.com,
 David Hunt <david.hunt@intel.com>
Date: Mon, 20 Jun 2016 14:08:10 +0100
Message-Id: <1466428091-115821-2-git-send-email-david.hunt@intel.com>
In-Reply-To: <1466428091-115821-1-git-send-email-david.hunt@intel.com>
References: <1463669335-30378-1-git-send-email-david.hunt@intel.com>
 <1466428091-115821-1-git-send-email-david.hunt@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler

This is a mempool handler that is useful for pipelining apps, where the
per-lcore mempool cache is of little benefit: for example, where one core
does RX (and alloc) and another core does TX (and return). In such a case,
the mempool ring simply cycles through all the mbufs, resulting in an LLC
miss on every mbuf allocation when the number of mbufs is large. A stack
recycles buffers more effectively in this case.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 lib/librte_mempool/Makefile            |   1 +
 lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
 2 files changed, 146 insertions(+)
 create mode 100644 lib/librte_mempool/rte_mempool_stack.c

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index a4c089e..057a6ab 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -44,6 +44,7 @@ LIBABIVER := 2
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_ops.c
 SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_ring.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_stack.c
 
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
diff --git a/lib/librte_mempool/rte_mempool_stack.c b/lib/librte_mempool/rte_mempool_stack.c
new file mode 100644
index 0000000..a92da69
--- /dev/null
+++ b/lib/librte_mempool/rte_mempool_stack.c
@@ -0,0 +1,145 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+
+struct rte_mempool_stack {
+	rte_spinlock_t sl;
+
+	uint32_t size;
+	uint32_t len;
+	void *objs[];
+};
+
+static int
+stack_alloc(struct rte_mempool *mp)
+{
+	struct rte_mempool_stack *s;
+	unsigned n = mp->size;
+	int size = sizeof(*s) + (n+16)*sizeof(void *);
+
+	/* Allocate our local memory structure */
+	s = rte_zmalloc_socket("mempool-stack",
+			size,
+			RTE_CACHE_LINE_SIZE,
+			mp->socket_id);
+	if (s == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
+		return -ENOMEM;
+	}
+
+	rte_spinlock_init(&s->sl);
+
+	s->size = n;
+	mp->pool_data = s;
+
+	return 0;
+}
+
+static int stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
+		unsigned n)
+{
+	struct rte_mempool_stack *s = mp->pool_data;
+	void **cache_objs;
+	unsigned index;
+
+	rte_spinlock_lock(&s->sl);
+	cache_objs = &s->objs[s->len];
+
+	/* Is there sufficient space in the stack? */
+	if ((s->len + n) > s->size) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOBUFS;
+	}
+
+	/* Add elements back into the cache */
+	for (index = 0; index < n; ++index, obj_table++)
+		cache_objs[index] = *obj_table;
+
+	s->len += n;
+
+	rte_spinlock_unlock(&s->sl);
+	return 0;
+}
+
+static int stack_dequeue(struct rte_mempool *mp, void **obj_table,
+		unsigned n)
+{
+	struct rte_mempool_stack *s = mp->pool_data;
+	void **cache_objs;
+	unsigned index, len;
+
+	rte_spinlock_lock(&s->sl);
+
+	if (unlikely(n > s->len)) {
+		rte_spinlock_unlock(&s->sl);
+		return -ENOENT;
+	}
+
+	cache_objs = s->objs;
+
+	for (index = 0, len = s->len - 1; index < n;
+			++index, len--, obj_table++)
+		*obj_table = cache_objs[len];
+
+	s->len -= n;
+	rte_spinlock_unlock(&s->sl);
+	return n;
+}
+
+static unsigned
+stack_get_count(const struct rte_mempool *mp)
+{
+	struct rte_mempool_stack *s = mp->pool_data;
+
+	return s->len;
+}
+
+static void
+stack_free(struct rte_mempool *mp)
+{
+	rte_free((void *)(mp->pool_data));
+}
+
+static struct rte_mempool_ops ops_stack = {
+	.name = "stack",
+	.alloc = stack_alloc,
+	.free = stack_free,
+	.enqueue = stack_enqueue,
+	.dequeue = stack_dequeue,
+	.get_count = stack_get_count
+};
+
+MEMPOOL_REGISTER_OPS(ops_stack);
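
Usage note (not part of the patch): below is a minimal sketch of how an
application could select this handler, assuming the external mempool manager
API from this series (rte_mempool_create_empty(), rte_mempool_set_ops_byname()
and rte_mempool_populate_default()). The pool name, sizes and the helper name
create_stack_pool() are illustrative assumptions only.

#include <rte_mempool.h>
#include <rte_mbuf.h>

#define NB_MBUFS  8192                          /* illustrative pool size */
#define MBUF_SIZE RTE_MBUF_DEFAULT_BUF_SIZE     /* illustrative element size */

static struct rte_mempool *
create_stack_pool(int socket_id)
{
	struct rte_mempool *mp;

	/* Create an empty pool, then pick the "stack" ops before the pool
	 * is populated, as required by the external mempool manager.
	 */
	mp = rte_mempool_create_empty("stack_pool", NB_MBUFS, MBUF_SIZE,
			0 /* no per-lcore cache */,
			sizeof(struct rte_pktmbuf_pool_private),
			socket_id, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "stack", NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* Standard pktmbuf pool/object initialisation. */
	rte_pktmbuf_pool_init(mp, NULL);

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

	return mp;
}

The resulting pool is used with rte_pktmbuf_alloc()/rte_pktmbuf_free() as
usual; the only difference is that bulk gets and puts go through the
spinlock-protected LIFO above instead of the default ring handler.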