From patchwork Mon Jan 12 16:33:57 2015
From: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
To: dev@dpdk.org
Date: Mon, 12 Jan 2015 16:33:57 +0000
Message-Id: <1421080446-19249-5-git-send-email-sergio.gonzalez.monroy@intel.com>
In-Reply-To: <1421080446-19249-1-git-send-email-sergio.gonzalez.monroy@intel.com>
References: <1421080446-19249-1-git-send-email-sergio.gonzalez.monroy@intel.com>
Subject: [dpdk-dev] [PATCH RFC 04/13] core: move librte_malloc to core subdir

This is equivalent to: git mv lib/librte_malloc lib/core

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/librte_malloc/Makefile      |  48 +++++
 lib/core/librte_malloc/malloc_elem.c | 321 ++++++++++++++++++++++++++++++++
 lib/core/librte_malloc/malloc_elem.h | 190 +++++++++++++++++++
 lib/core/librte_malloc/malloc_heap.c | 210 +++++++++++++++++++++
 lib/core/librte_malloc/malloc_heap.h |  65 +++++++
 lib/core/librte_malloc/rte_malloc.c  | 261 ++++++++++++++++++++++++++
 lib/core/librte_malloc/rte_malloc.h  | 342 +++++++++++++++++++++++++++++++++++
 lib/librte_malloc/Makefile           |  48 -----
 lib/librte_malloc/malloc_elem.c      | 321 --------------------------------
 lib/librte_malloc/malloc_elem.h      | 190 -------------------
 lib/librte_malloc/malloc_heap.c      | 210 ---------------------
 lib/librte_malloc/malloc_heap.h      |  65 -------
 lib/librte_malloc/rte_malloc.c       | 261 --------------------------
 lib/librte_malloc/rte_malloc.h       | 342 -----------------------------------
 14 files changed, 1437 insertions(+), 1437 deletions(-)
 create mode 100644 lib/core/librte_malloc/Makefile
 create mode 100644 lib/core/librte_malloc/malloc_elem.c
 create mode 100644 lib/core/librte_malloc/malloc_elem.h
 create mode 100644 lib/core/librte_malloc/malloc_heap.c
 create mode 100644 lib/core/librte_malloc/malloc_heap.h
 create mode 100644 lib/core/librte_malloc/rte_malloc.c
 create mode 100644 lib/core/librte_malloc/rte_malloc.h
 delete mode 100644 lib/librte_malloc/Makefile
 delete mode 100644 lib/librte_malloc/malloc_elem.c
delete mode 100644 lib/librte_malloc/malloc_elem.h delete mode 100644 lib/librte_malloc/malloc_heap.c delete mode 100644 lib/librte_malloc/malloc_heap.h delete mode 100644 lib/librte_malloc/rte_malloc.c delete mode 100644 lib/librte_malloc/rte_malloc.h diff --git a/lib/core/librte_malloc/Makefile b/lib/core/librte_malloc/Makefile new file mode 100644 index 0000000..ba87e34 --- /dev/null +++ b/lib/core/librte_malloc/Makefile @@ -0,0 +1,48 @@ +# BSD LICENSE +# +# Copyright(c) 2010-2014 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_malloc.a + +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_MALLOC) := rte_malloc.c malloc_elem.c malloc_heap.c + +# install includes +SYMLINK-$(CONFIG_RTE_LIBRTE_MALLOC)-include := rte_malloc.h + +# this lib needs eal +DEPDIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += lib/librte_eal + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/core/librte_malloc/malloc_elem.c b/lib/core/librte_malloc/malloc_elem.c new file mode 100644 index 0000000..ef26e47 --- /dev/null +++ b/lib/core/librte_malloc/malloc_elem.c @@ -0,0 +1,321 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "malloc_elem.h" +#include "malloc_heap.h" + +#define MIN_DATA_SIZE (RTE_CACHE_LINE_SIZE) + +/* + * initialise a general malloc_elem header structure + */ +void +malloc_elem_init(struct malloc_elem *elem, + struct malloc_heap *heap, const struct rte_memzone *mz, size_t size) +{ + elem->heap = heap; + elem->mz = mz; + elem->prev = NULL; + memset(&elem->free_list, 0, sizeof(elem->free_list)); + elem->state = ELEM_FREE; + elem->size = size; + elem->pad = 0; + set_header(elem); + set_trailer(elem); +} + +/* + * initialise a dummy malloc_elem header for the end-of-memzone marker + */ +void +malloc_elem_mkend(struct malloc_elem *elem, struct malloc_elem *prev) +{ + malloc_elem_init(elem, prev->heap, prev->mz, 0); + elem->prev = prev; + elem->state = ELEM_BUSY; /* mark busy so its never merged */ +} + +/* + * calculate the starting point of where data of the requested size + * and alignment would fit in the current element. If the data doesn't + * fit, return NULL. + */ +static void * +elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align) +{ + const uintptr_t end_pt = (uintptr_t)elem + + elem->size - MALLOC_ELEM_TRAILER_LEN; + const uintptr_t new_data_start = rte_align_floor_int((end_pt - size),align); + const uintptr_t new_elem_start = new_data_start - MALLOC_ELEM_HEADER_LEN; + + /* if the new start point is before the exist start, it won't fit */ + return (new_elem_start < (uintptr_t)elem) ? NULL : (void *)new_elem_start; +} + +/* + * use elem_start_pt to determine if we get meet the size and + * alignment request from the current element + */ +int +malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align) +{ + return elem_start_pt(elem, size, align) != NULL; +} + +/* + * split an existing element into two smaller elements at the given + * split_pt parameter. + */ +static void +split_elem(struct malloc_elem *elem, struct malloc_elem *split_pt) +{ + struct malloc_elem *next_elem = RTE_PTR_ADD(elem, elem->size); + const unsigned old_elem_size = (uintptr_t)split_pt - (uintptr_t)elem; + const unsigned new_elem_size = elem->size - old_elem_size; + + malloc_elem_init(split_pt, elem->heap, elem->mz, new_elem_size); + split_pt->prev = elem; + next_elem->prev = split_pt; + elem->size = old_elem_size; + set_trailer(elem); +} + +/* + * Given an element size, compute its freelist index. + * We free an element into the freelist containing similarly-sized elements. 
+ * We try to allocate elements starting with the freelist containing + * similarly-sized elements, and if necessary, we search freelists + * containing larger elements. + * + * Example element size ranges for a heap with five free lists: + * heap->free_head[0] - (0 , 2^8] + * heap->free_head[1] - (2^8 , 2^10] + * heap->free_head[2] - (2^10 ,2^12] + * heap->free_head[3] - (2^12, 2^14] + * heap->free_head[4] - (2^14, MAX_SIZE] + */ +size_t +malloc_elem_free_list_index(size_t size) +{ +#define MALLOC_MINSIZE_LOG2 8 +#define MALLOC_LOG2_INCREMENT 2 + + size_t log2; + size_t index; + + if (size <= (1UL << MALLOC_MINSIZE_LOG2)) + return 0; + + /* Find next power of 2 >= size. */ + log2 = sizeof(size) * 8 - __builtin_clzl(size-1); + + /* Compute freelist index, based on log2(size). */ + index = (log2 - MALLOC_MINSIZE_LOG2 + MALLOC_LOG2_INCREMENT - 1) / + MALLOC_LOG2_INCREMENT; + + return (index <= RTE_HEAP_NUM_FREELISTS-1? + index: RTE_HEAP_NUM_FREELISTS-1); +} + +/* + * Add the specified element to its heap's free list. + */ +void +malloc_elem_free_list_insert(struct malloc_elem *elem) +{ + size_t idx = malloc_elem_free_list_index(elem->size - MALLOC_ELEM_HEADER_LEN); + + elem->state = ELEM_FREE; + LIST_INSERT_HEAD(&elem->heap->free_head[idx], elem, free_list); +} + +/* + * Remove the specified element from its heap's free list. + */ +static void +elem_free_list_remove(struct malloc_elem *elem) +{ + LIST_REMOVE(elem, free_list); +} + +/* + * reserve a block of data in an existing malloc_elem. If the malloc_elem + * is much larger than the data block requested, we split the element in two. + * This function is only called from malloc_heap_alloc so parameter checking + * is not done here, as it's done there previously. + */ +struct malloc_elem * +malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align) +{ + struct malloc_elem *new_elem = elem_start_pt(elem, size, align); + const unsigned old_elem_size = (uintptr_t)new_elem - (uintptr_t)elem; + + if (old_elem_size < MALLOC_ELEM_OVERHEAD + MIN_DATA_SIZE){ + /* don't split it, pad the element instead */ + elem->state = ELEM_BUSY; + elem->pad = old_elem_size; + + /* put a dummy header in padding, to point to real element header */ + if (elem->pad > 0){ /* pad will be at least 64-bytes, as everything + * is cache-line aligned */ + new_elem->pad = elem->pad; + new_elem->state = ELEM_PAD; + new_elem->size = elem->size - elem->pad; + set_header(new_elem); + } + /* remove element from free list */ + elem_free_list_remove(elem); + + return new_elem; + } + + /* we are going to split the element in two. The original element + * remains free, and the new element is the one allocated. + * Re-insert original element, in case its new size makes it + * belong on a different list. + */ + elem_free_list_remove(elem); + split_elem(elem, new_elem); + new_elem->state = ELEM_BUSY; + malloc_elem_free_list_insert(elem); + + return new_elem; +} + +/* + * joing two struct malloc_elem together. elem1 and elem2 must + * be contiguous in memory. + */ +static inline void +join_elem(struct malloc_elem *elem1, struct malloc_elem *elem2) +{ + struct malloc_elem *next = RTE_PTR_ADD(elem2, elem2->size); + elem1->size += elem2->size; + next->prev = elem1; +} + +/* + * free a malloc_elem block by adding it to the free list. If the + * blocks either immediately before or immediately after newly freed block + * are also free, the blocks are merged together. 
+ */ +int +malloc_elem_free(struct malloc_elem *elem) +{ + if (!malloc_elem_cookies_ok(elem) || elem->state != ELEM_BUSY) + return -1; + + rte_spinlock_lock(&(elem->heap->lock)); + struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size); + if (next->state == ELEM_FREE){ + /* remove from free list, join to this one */ + elem_free_list_remove(next); + join_elem(elem, next); + } + + /* check if previous element is free, if so join with it and return, + * need to re-insert in free list, as that element's size is changing + */ + if (elem->prev != NULL && elem->prev->state == ELEM_FREE) { + elem_free_list_remove(elem->prev); + join_elem(elem->prev, elem); + malloc_elem_free_list_insert(elem->prev); + } + /* otherwise add ourselves to the free list */ + else { + malloc_elem_free_list_insert(elem); + elem->pad = 0; + } + /* decrease heap's count of allocated elements */ + elem->heap->alloc_count--; + rte_spinlock_unlock(&(elem->heap->lock)); + + return 0; +} + +/* + * attempt to resize a malloc_elem by expanding into any free space + * immediately after it in memory. + */ +int +malloc_elem_resize(struct malloc_elem *elem, size_t size) +{ + const size_t new_size = size + MALLOC_ELEM_OVERHEAD; + /* if we request a smaller size, then always return ok */ + const size_t current_size = elem->size - elem->pad; + if (current_size >= new_size) + return 0; + + struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size); + rte_spinlock_lock(&elem->heap->lock); + if (next ->state != ELEM_FREE) + goto err_return; + if (current_size + next->size < new_size) + goto err_return; + + /* we now know the element fits, so remove from free list, + * join the two + */ + elem_free_list_remove(next); + join_elem(elem, next); + + if (elem->size - new_size >= MIN_DATA_SIZE + MALLOC_ELEM_OVERHEAD){ + /* now we have a big block together. Lets cut it down a bit, by splitting */ + struct malloc_elem *split_pt = RTE_PTR_ADD(elem, new_size); + split_pt = RTE_PTR_ALIGN_CEIL(split_pt, RTE_CACHE_LINE_SIZE); + split_elem(elem, split_pt); + malloc_elem_free_list_insert(split_pt); + } + rte_spinlock_unlock(&elem->heap->lock); + return 0; + +err_return: + rte_spinlock_unlock(&elem->heap->lock); + return -1; +} diff --git a/lib/core/librte_malloc/malloc_elem.h b/lib/core/librte_malloc/malloc_elem.h new file mode 100644 index 0000000..9790b1a --- /dev/null +++ b/lib/core/librte_malloc/malloc_elem.h @@ -0,0 +1,190 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef MALLOC_ELEM_H_ +#define MALLOC_ELEM_H_ + +#include + +/* dummy definition of struct so we can use pointers to it in malloc_elem struct */ +struct malloc_heap; + +enum elem_state { + ELEM_FREE = 0, + ELEM_BUSY, + ELEM_PAD /* element is a padding-only header */ +}; + +struct malloc_elem { + struct malloc_heap *heap; + struct malloc_elem *volatile prev; /* points to prev elem in memzone */ + LIST_ENTRY(malloc_elem) free_list; /* list of free elements in heap */ + const struct rte_memzone *mz; + volatile enum elem_state state; + uint32_t pad; + size_t size; +#ifdef RTE_LIBRTE_MALLOC_DEBUG + uint64_t header_cookie; /* Cookie marking start of data */ + /* trailer cookie at start + size */ +#endif +} __rte_cache_aligned; + +#ifndef RTE_LIBRTE_MALLOC_DEBUG +static const unsigned MALLOC_ELEM_TRAILER_LEN = 0; + +/* dummy function - just check if pointer is non-null */ +static inline int +malloc_elem_cookies_ok(const struct malloc_elem *elem){ return elem != NULL; } + +/* dummy function - no header if malloc_debug is not enabled */ +static inline void +set_header(struct malloc_elem *elem __rte_unused){ } + +/* dummy function - no trailer if malloc_debug is not enabled */ +static inline void +set_trailer(struct malloc_elem *elem __rte_unused){ } + + +#else +static const unsigned MALLOC_ELEM_TRAILER_LEN = RTE_CACHE_LINE_SIZE; + +#define MALLOC_HEADER_COOKIE 0xbadbadbadadd2e55ULL /**< Header cookie. */ +#define MALLOC_TRAILER_COOKIE 0xadd2e55badbadbadULL /**< Trailer cookie.*/ + +/* define macros to make referencing the header and trailer cookies easier */ +#define MALLOC_ELEM_TRAILER(elem) (*((uint64_t*)RTE_PTR_ADD(elem, \ + elem->size - MALLOC_ELEM_TRAILER_LEN))) +#define MALLOC_ELEM_HEADER(elem) (elem->header_cookie) + +static inline void +set_header(struct malloc_elem *elem) +{ + if (elem != NULL) + MALLOC_ELEM_HEADER(elem) = MALLOC_HEADER_COOKIE; +} + +static inline void +set_trailer(struct malloc_elem *elem) +{ + if (elem != NULL) + MALLOC_ELEM_TRAILER(elem) = MALLOC_TRAILER_COOKIE; +} + +/* check that the header and trailer cookies are set correctly */ +static inline int +malloc_elem_cookies_ok(const struct malloc_elem *elem) +{ + return (elem != NULL && + MALLOC_ELEM_HEADER(elem) == MALLOC_HEADER_COOKIE && + MALLOC_ELEM_TRAILER(elem) == MALLOC_TRAILER_COOKIE); +} + +#endif + +static const unsigned MALLOC_ELEM_HEADER_LEN = sizeof(struct malloc_elem); +#define MALLOC_ELEM_OVERHEAD (MALLOC_ELEM_HEADER_LEN + MALLOC_ELEM_TRAILER_LEN) + +/* + * Given a pointer to the start of a memory block returned by malloc, get + * the actual malloc_elem header for that block. + */ +static inline struct malloc_elem * +malloc_elem_from_data(const void *data) +{ + if (data == NULL) + return NULL; + + struct malloc_elem *elem = RTE_PTR_SUB(data, MALLOC_ELEM_HEADER_LEN); + if (!malloc_elem_cookies_ok(elem)) + return NULL; + return elem->state != ELEM_PAD ? 
elem: RTE_PTR_SUB(elem, elem->pad); +} + +/* + * initialise a malloc_elem header + */ +void +malloc_elem_init(struct malloc_elem *elem, + struct malloc_heap *heap, + const struct rte_memzone *mz, + size_t size); + +/* + * initialise a dummy malloc_elem header for the end-of-memzone marker + */ +void +malloc_elem_mkend(struct malloc_elem *elem, + struct malloc_elem *prev_free); + +/* + * return true if the current malloc_elem can hold a block of data + * of the requested size and with the requested alignment + */ +int +malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align); + +/* + * reserve a block of data in an existing malloc_elem. If the malloc_elem + * is much larger than the data block requested, we split the element in two. + */ +struct malloc_elem * +malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align); + +/* + * free a malloc_elem block by adding it to the free list. If the + * blocks either immediately before or immediately after newly freed block + * are also free, the blocks are merged together. + */ +int +malloc_elem_free(struct malloc_elem *elem); + +/* + * attempt to resize a malloc_elem by expanding into any free space + * immediately after it in memory. + */ +int +malloc_elem_resize(struct malloc_elem *elem, size_t size); + +/* + * Given an element size, compute its freelist index. + */ +size_t +malloc_elem_free_list_index(size_t size); + +/* + * Add element to its heap's free list. + */ +void +malloc_elem_free_list_insert(struct malloc_elem *elem); + +#endif /* MALLOC_ELEM_H_ */ diff --git a/lib/core/librte_malloc/malloc_heap.c b/lib/core/librte_malloc/malloc_heap.c new file mode 100644 index 0000000..95fcfec --- /dev/null +++ b/lib/core/librte_malloc/malloc_heap.c @@ -0,0 +1,210 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "malloc_elem.h" +#include "malloc_heap.h" + +/* since the memzone size starts with a digit, it will appear unquoted in + * rte_config.h, so quote it so it can be passed to rte_str_to_size */ +#define MALLOC_MEMZONE_SIZE RTE_STR(RTE_MALLOC_MEMZONE_SIZE) + +/* + * returns the configuration setting for the memzone size as a size_t value + */ +static inline size_t +get_malloc_memzone_size(void) +{ + return rte_str_to_size(MALLOC_MEMZONE_SIZE); +} + +/* + * reserve an extra memory zone and make it available for use by a particular + * heap. This reserves the zone and sets a dummy malloc_elem header at the end + * to prevent overflow. The rest of the zone is added to free list as a single + * large free block + */ +static int +malloc_heap_add_memzone(struct malloc_heap *heap, size_t size, unsigned align) +{ + const unsigned mz_flags = 0; + const size_t block_size = get_malloc_memzone_size(); + /* ensure the data we want to allocate will fit in the memzone */ + const size_t min_size = size + align + MALLOC_ELEM_OVERHEAD * 2; + const struct rte_memzone *mz = NULL; + struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; + unsigned numa_socket = heap - mcfg->malloc_heaps; + + size_t mz_size = min_size; + if (mz_size < block_size) + mz_size = block_size; + + char mz_name[RTE_MEMZONE_NAMESIZE]; + snprintf(mz_name, sizeof(mz_name), "MALLOC_S%u_HEAP_%u", + numa_socket, heap->mz_count++); + + /* try getting a block. if we fail and we don't need as big a block + * as given in the config, we can shrink our request and try again + */ + do { + mz = rte_memzone_reserve(mz_name, mz_size, numa_socket, + mz_flags); + if (mz == NULL) + mz_size /= 2; + } while (mz == NULL && mz_size > min_size); + if (mz == NULL) + return -1; + + /* allocate the memory block headers, one at end, one at start */ + struct malloc_elem *start_elem = (struct malloc_elem *)mz->addr; + struct malloc_elem *end_elem = RTE_PTR_ADD(mz->addr, + mz_size - MALLOC_ELEM_OVERHEAD); + end_elem = RTE_PTR_ALIGN_FLOOR(end_elem, RTE_CACHE_LINE_SIZE); + + const unsigned elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem; + malloc_elem_init(start_elem, heap, mz, elem_size); + malloc_elem_mkend(end_elem, start_elem); + malloc_elem_free_list_insert(start_elem); + + /* increase heap total size by size of new memzone */ + heap->total_size+=mz_size - MALLOC_ELEM_OVERHEAD; + return 0; +} + +/* + * Iterates through the freelist for a heap to find a free element + * which can store data of the required size and with the requested alignment. + * Returns null on failure, or pointer to element on success. + */ +static struct malloc_elem * +find_suitable_element(struct malloc_heap *heap, size_t size, unsigned align) +{ + size_t idx; + struct malloc_elem *elem; + + for (idx = malloc_elem_free_list_index(size); + idx < RTE_HEAP_NUM_FREELISTS; idx++) + { + for (elem = LIST_FIRST(&heap->free_head[idx]); + !!elem; elem = LIST_NEXT(elem, free_list)) + { + if (malloc_elem_can_hold(elem, size, align)) + return elem; + } + } + return NULL; +} + +/* + * Main function called by malloc to allocate a block of memory from the + * heap. It locks the free list, scans it, and adds a new memzone if the + * scan fails. Once the new memzone is added, it re-scans and should return + * the new element after releasing the lock. 
+ */ +void * +malloc_heap_alloc(struct malloc_heap *heap, + const char *type __attribute__((unused)), size_t size, unsigned align) +{ + size = RTE_CACHE_LINE_ROUNDUP(size); + align = RTE_CACHE_LINE_ROUNDUP(align); + rte_spinlock_lock(&heap->lock); + struct malloc_elem *elem = find_suitable_element(heap, size, align); + if (elem == NULL){ + if ((malloc_heap_add_memzone(heap, size, align)) == 0) + elem = find_suitable_element(heap, size, align); + } + + if (elem != NULL){ + elem = malloc_elem_alloc(elem, size, align); + /* increase heap's count of allocated elements */ + heap->alloc_count++; + } + rte_spinlock_unlock(&heap->lock); + return elem == NULL ? NULL : (void *)(&elem[1]); + +} + +/* + * Function to retrieve data for heap on given socket + */ +int +malloc_heap_get_stats(const struct malloc_heap *heap, + struct rte_malloc_socket_stats *socket_stats) +{ + size_t idx; + struct malloc_elem *elem; + + /* Initialise variables for heap */ + socket_stats->free_count = 0; + socket_stats->heap_freesz_bytes = 0; + socket_stats->greatest_free_size = 0; + + /* Iterate through free list */ + for (idx = 0; idx < RTE_HEAP_NUM_FREELISTS; idx++) { + for (elem = LIST_FIRST(&heap->free_head[idx]); + !!elem; elem = LIST_NEXT(elem, free_list)) + { + socket_stats->free_count++; + socket_stats->heap_freesz_bytes += elem->size; + if (elem->size > socket_stats->greatest_free_size) + socket_stats->greatest_free_size = elem->size; + } + } + /* Get stats on overall heap and allocated memory on this heap */ + socket_stats->heap_totalsz_bytes = heap->total_size; + socket_stats->heap_allocsz_bytes = (socket_stats->heap_totalsz_bytes - + socket_stats->heap_freesz_bytes); + socket_stats->alloc_count = heap->alloc_count; + return 0; +} + diff --git a/lib/core/librte_malloc/malloc_heap.h b/lib/core/librte_malloc/malloc_heap.h new file mode 100644 index 0000000..b4aec45 --- /dev/null +++ b/lib/core/librte_malloc/malloc_heap.h @@ -0,0 +1,65 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef MALLOC_HEAP_H_ +#define MALLOC_HEAP_H_ + +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +static inline unsigned +malloc_get_numa_socket(void) +{ + return rte_socket_id(); +} + +void * +malloc_heap_alloc(struct malloc_heap *heap, const char *type, + size_t size, unsigned align); + +int +malloc_heap_get_stats(const struct malloc_heap *heap, + struct rte_malloc_socket_stats *socket_stats); + +int +rte_eal_heap_memzone_init(void); + +#ifdef __cplusplus +} +#endif + +#endif /* MALLOC_HEAP_H_ */ diff --git a/lib/core/librte_malloc/rte_malloc.c b/lib/core/librte_malloc/rte_malloc.c new file mode 100644 index 0000000..b966fc7 --- /dev/null +++ b/lib/core/librte_malloc/rte_malloc.c @@ -0,0 +1,261 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include "malloc_elem.h" +#include "malloc_heap.h" + + +/* Free the memory space back to heap */ +void rte_free(void *addr) +{ + if (addr == NULL) return; + if (malloc_elem_free(malloc_elem_from_data(addr)) < 0) + rte_panic("Fatal error: Invalid memory\n"); +} + +/* + * Allocate memory on specified heap. 
+ */ +void * +rte_malloc_socket(const char *type, size_t size, unsigned align, int socket_arg) +{ + struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; + int socket, i; + void *ret; + + /* return NULL if size is 0 or alignment is not power-of-2 */ + if (size == 0 || !rte_is_power_of_2(align)) + return NULL; + + if (socket_arg == SOCKET_ID_ANY) + socket = malloc_get_numa_socket(); + else + socket = socket_arg; + + /* Check socket parameter */ + if (socket >= RTE_MAX_NUMA_NODES) + return NULL; + + ret = malloc_heap_alloc(&mcfg->malloc_heaps[socket], type, + size, align == 0 ? 1 : align); + if (ret != NULL || socket_arg != SOCKET_ID_ANY) + return ret; + + /* try other heaps */ + for (i = 0; i < RTE_MAX_NUMA_NODES; i++) { + /* we already tried this one */ + if (i == socket) + continue; + + ret = malloc_heap_alloc(&mcfg->malloc_heaps[i], type, + size, align == 0 ? 1 : align); + if (ret != NULL) + return ret; + } + + return NULL; +} + +/* + * Allocate memory on default heap. + */ +void * +rte_malloc(const char *type, size_t size, unsigned align) +{ + return rte_malloc_socket(type, size, align, SOCKET_ID_ANY); +} + +/* + * Allocate zero'd memory on specified heap. + */ +void * +rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket) +{ + void *ptr = rte_malloc_socket(type, size, align, socket); + + if (ptr != NULL) + memset(ptr, 0, size); + return ptr; +} + +/* + * Allocate zero'd memory on default heap. + */ +void * +rte_zmalloc(const char *type, size_t size, unsigned align) +{ + return rte_zmalloc_socket(type, size, align, SOCKET_ID_ANY); +} + +/* + * Allocate zero'd memory on specified heap. + */ +void * +rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket) +{ + return rte_zmalloc_socket(type, num * size, align, socket); +} + +/* + * Allocate zero'd memory on default heap. + */ +void * +rte_calloc(const char *type, size_t num, size_t size, unsigned align) +{ + return rte_zmalloc(type, num * size, align); +} + +/* + * Resize allocated memory. + */ +void * +rte_realloc(void *ptr, size_t size, unsigned align) +{ + if (ptr == NULL) + return rte_malloc(NULL, size, align); + + struct malloc_elem *elem = malloc_elem_from_data(ptr); + if (elem == NULL) + rte_panic("Fatal error: memory corruption detected\n"); + + size = RTE_CACHE_LINE_ROUNDUP(size), align = RTE_CACHE_LINE_ROUNDUP(align); + /* check alignment matches first, and if ok, see if we can resize block */ + if (RTE_PTR_ALIGN(ptr,align) == ptr && + malloc_elem_resize(elem, size) == 0) + return ptr; + + /* either alignment is off, or we have no room to expand, + * so move data. */ + void *new_ptr = rte_malloc(NULL, size, align); + if (new_ptr == NULL) + return NULL; + const unsigned old_size = elem->size - MALLOC_ELEM_OVERHEAD; + rte_memcpy(new_ptr, ptr, old_size < size ? 
old_size : size); + rte_free(ptr); + + return new_ptr; +} + +int +rte_malloc_validate(const void *ptr, size_t *size) +{ + const struct malloc_elem *elem = malloc_elem_from_data(ptr); + if (!malloc_elem_cookies_ok(elem)) + return -1; + if (size != NULL) + *size = elem->size - elem->pad - MALLOC_ELEM_OVERHEAD; + return 0; +} + +/* + * Function to retrieve data for heap on given socket + */ +int +rte_malloc_get_socket_stats(int socket, + struct rte_malloc_socket_stats *socket_stats) +{ + struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; + + if (socket >= RTE_MAX_NUMA_NODES || socket < 0) + return -1; + + return malloc_heap_get_stats(&mcfg->malloc_heaps[socket], socket_stats); +} + +/* + * Print stats on memory type. If type is NULL, info on all types is printed + */ +void +rte_malloc_dump_stats(FILE *f, __rte_unused const char *type) +{ + unsigned int socket; + struct rte_malloc_socket_stats sock_stats; + /* Iterate through all initialised heaps */ + for (socket=0; socket< RTE_MAX_NUMA_NODES; socket++) { + if ((rte_malloc_get_socket_stats(socket, &sock_stats) < 0)) + continue; + + fprintf(f, "Socket:%u\n", socket); + fprintf(f, "\tHeap_size:%zu,\n", sock_stats.heap_totalsz_bytes); + fprintf(f, "\tFree_size:%zu,\n", sock_stats.heap_freesz_bytes); + fprintf(f, "\tAlloc_size:%zu,\n", sock_stats.heap_allocsz_bytes); + fprintf(f, "\tGreatest_free_size:%zu,\n", + sock_stats.greatest_free_size); + fprintf(f, "\tAlloc_count:%u,\n",sock_stats.alloc_count); + fprintf(f, "\tFree_count:%u,\n", sock_stats.free_count); + } + return; +} + +/* + * TODO: Set limit to memory that can be allocated to memory type + */ +int +rte_malloc_set_limit(__rte_unused const char *type, + __rte_unused size_t max) +{ + return 0; +} + +/* + * Return the physical address of a virtual address obtained through rte_malloc + */ +phys_addr_t +rte_malloc_virt2phy(const void *addr) +{ + const struct malloc_elem *elem = malloc_elem_from_data(addr); + if (elem == NULL) + return 0; + return elem->mz->phys_addr + ((uintptr_t)addr - (uintptr_t)elem->mz->addr); +} diff --git a/lib/core/librte_malloc/rte_malloc.h b/lib/core/librte_malloc/rte_malloc.h new file mode 100644 index 0000000..74bb78c --- /dev/null +++ b/lib/core/librte_malloc/rte_malloc.h @@ -0,0 +1,342 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_MALLOC_H_ +#define _RTE_MALLOC_H_ + +/** + * @file + * RTE Malloc. This library provides methods for dynamically allocating memory + * from hugepages. + */ + +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * Structure to hold heap statistics obtained from rte_malloc_get_socket_stats function. + */ +struct rte_malloc_socket_stats { + size_t heap_totalsz_bytes; /**< Total bytes on heap */ + size_t heap_freesz_bytes; /**< Total free bytes on heap */ + size_t greatest_free_size; /**< Size in bytes of largest free block */ + unsigned free_count; /**< Number of free elements on heap */ + unsigned alloc_count; /**< Number of allocated elements on heap */ + size_t heap_allocsz_bytes; /**< Total allocated bytes on heap */ +}; + +/** + * This function allocates memory from the huge-page area of memory. The memory + * is not cleared. In NUMA systems, the memory allocated resides on the same + * NUMA socket as the core that calls this function. + * + * @param type + * A string identifying the type of allocated objects (useful for debug + * purposes, such as identifying the cause of a memory leak). Can be NULL. + * @param size + * Size (in bytes) to be allocated. + * @param align + * If 0, the return is a pointer that is suitably aligned for any kind of + * variable (in the same manner as malloc()). + * Otherwise, the return is a pointer that is a multiple of *align*. In + * this case, it must be a power of two. (Minimum alignment is the + * cacheline size, i.e. 64-bytes) + * @return + * - NULL on error. Not enough memory, or invalid arguments (size is 0, + * align is not a power of two). + * - Otherwise, the pointer to the allocated object. + */ +void * +rte_malloc(const char *type, size_t size, unsigned align); + +/** + * Allocate zero'ed memory from the heap. + * + * Equivalent to rte_malloc() except that the memory zone is + * initialised with zeros. In NUMA systems, the memory allocated resides on the + * same NUMA socket as the core that calls this function. + * + * @param type + * A string identifying the type of allocated objects (useful for debug + * purposes, such as identifying the cause of a memory leak). Can be NULL. + * @param size + * Size (in bytes) to be allocated. + * @param align + * If 0, the return is a pointer that is suitably aligned for any kind of + * variable (in the same manner as malloc()). + * Otherwise, the return is a pointer that is a multiple of *align*. In + * this case, it must obviously be a power of two. (Minimum alignment is the + * cacheline size, i.e. 64-bytes) + * @return + * - NULL on error. Not enough memory, or invalid arguments (size is 0, + * align is not a power of two). + * - Otherwise, the pointer to the allocated object. + */ +void * +rte_zmalloc(const char *type, size_t size, unsigned align); + +/** + * Replacement function for calloc(), using huge-page memory. Memory area is + * initialised with zeros. 
In NUMA systems, the memory allocated resides on the + * same NUMA socket as the core that calls this function. + * + * @param type + * A string identifying the type of allocated objects (useful for debug + * purposes, such as identifying the cause of a memory leak). Can be NULL. + * @param num + * Number of elements to be allocated. + * @param size + * Size (in bytes) of a single element. + * @param align + * If 0, the return is a pointer that is suitably aligned for any kind of + * variable (in the same manner as malloc()). + * Otherwise, the return is a pointer that is a multiple of *align*. In + * this case, it must obviously be a power of two. (Minimum alignment is the + * cacheline size, i.e. 64-bytes) + * @return + * - NULL on error. Not enough memory, or invalid arguments (size is 0, + * align is not a power of two). + * - Otherwise, the pointer to the allocated object. + */ +void * +rte_calloc(const char *type, size_t num, size_t size, unsigned align); + +/** + * Replacement function for realloc(), using huge-page memory. Reserved area + * memory is resized, preserving contents. In NUMA systems, the new area + * resides on the same NUMA socket as the old area. + * + * @param ptr + * Pointer to already allocated memory + * @param size + * Size (in bytes) of new area. If this is 0, memory is freed. + * @param align + * If 0, the return is a pointer that is suitably aligned for any kind of + * variable (in the same manner as malloc()). + * Otherwise, the return is a pointer that is a multiple of *align*. In + * this case, it must obviously be a power of two. (Minimum alignment is the + * cacheline size, i.e. 64-bytes) + * @return + * - NULL on error. Not enough memory, or invalid arguments (size is 0, + * align is not a power of two). + * - Otherwise, the pointer to the reallocated memory. + */ +void * +rte_realloc(void *ptr, size_t size, unsigned align); + +/** + * This function allocates memory from the huge-page area of memory. The memory + * is not cleared. + * + * @param type + * A string identifying the type of allocated objects (useful for debug + * purposes, such as identifying the cause of a memory leak). Can be NULL. + * @param size + * Size (in bytes) to be allocated. + * @param align + * If 0, the return is a pointer that is suitably aligned for any kind of + * variable (in the same manner as malloc()). + * Otherwise, the return is a pointer that is a multiple of *align*. In + * this case, it must be a power of two. (Minimum alignment is the + * cacheline size, i.e. 64-bytes) + * @param socket + * NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function + * will behave the same as rte_malloc(). + * @return + * - NULL on error. Not enough memory, or invalid arguments (size is 0, + * align is not a power of two). + * - Otherwise, the pointer to the allocated object. + */ +void * +rte_malloc_socket(const char *type, size_t size, unsigned align, int socket); + +/** + * Allocate zero'ed memory from the heap. + * + * Equivalent to rte_malloc() except that the memory zone is + * initialised with zeros. + * + * @param type + * A string identifying the type of allocated objects (useful for debug + * purposes, such as identifying the cause of a memory leak). Can be NULL. + * @param size + * Size (in bytes) to be allocated. + * @param align + * If 0, the return is a pointer that is suitably aligned for any kind of + * variable (in the same manner as malloc()). + * Otherwise, the return is a pointer that is a multiple of *align*. 
In + * this case, it must obviously be a power of two. (Minimum alignment is the + * cacheline size, i.e. 64-bytes) + * @param socket + * NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function + * will behave the same as rte_zmalloc(). + * @return + * - NULL on error. Not enough memory, or invalid arguments (size is 0, + * align is not a power of two). + * - Otherwise, the pointer to the allocated object. + */ +void * +rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket); + +/** + * Replacement function for calloc(), using huge-page memory. Memory area is + * initialised with zeros. + * + * @param type + * A string identifying the type of allocated objects (useful for debug + * purposes, such as identifying the cause of a memory leak). Can be NULL. + * @param num + * Number of elements to be allocated. + * @param size + * Size (in bytes) of a single element. + * @param align + * If 0, the return is a pointer that is suitably aligned for any kind of + * variable (in the same manner as malloc()). + * Otherwise, the return is a pointer that is a multiple of *align*. In + * this case, it must obviously be a power of two. (Minimum alignment is the + * cacheline size, i.e. 64-bytes) + * @param socket + * NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function + * will behave the same as rte_calloc(). + * @return + * - NULL on error. Not enough memory, or invalid arguments (size is 0, + * align is not a power of two). + * - Otherwise, the pointer to the allocated object. + */ +void * +rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket); + +/** + * Frees the memory space pointed to by the provided pointer. + * + * This pointer must have been returned by a previous call to + * rte_malloc(), rte_zmalloc(), rte_calloc() or rte_realloc(). The behaviour of + * rte_free() is undefined if the pointer does not match this requirement. + * + * If the pointer is NULL, the function does nothing. + * + * @param ptr + * The pointer to memory to be freed. + */ +void +rte_free(void *ptr); + +/** + * If malloc debug is enabled, check a memory block for header + * and trailer markers to indicate that all is well with the block. + * If size is non-null, also return the size of the block. + * + * @param ptr + * pointer to the start of a data block, must have been returned + * by a previous call to rte_malloc(), rte_zmalloc(), rte_calloc() + * or rte_realloc() + * @param size + * if non-null, and memory block pointer is valid, returns the size + * of the memory block + * @return + * -1 on error, invalid pointer passed or header and trailer markers + * are missing or corrupted + * 0 on success + */ +int +rte_malloc_validate(const void *ptr, size_t *size); + +/** + * Get heap statistics for the specified heap. + * + * @param socket + * An unsigned integer specifying the socket to get heap statistics for + * @param socket_stats + * A structure which provides memory to store statistics + * @return + * Null on error + * Pointer to structure storing statistics on success + */ +int +rte_malloc_get_socket_stats(int socket, + struct rte_malloc_socket_stats *socket_stats); + +/** + * Dump statistics. + * + * Dump for the specified type to the console. If the type argument is + * NULL, all memory types will be dumped. + * + * @param f + * A pointer to a file for output + * @param type + * A string identifying the type of objects to dump, or NULL + * to dump all objects. 
+ */ +void +rte_malloc_dump_stats(FILE *f, const char *type); + +/** + * Set the maximum amount of allocated memory for this type. + * + * This is not yet implemented + * + * @param type + * A string identifying the type of allocated objects. + * @param max + * The maximum amount of allocated bytes for this type. + * @return + * - 0: Success. + * - (-1): Error. + */ +int +rte_malloc_set_limit(const char *type, size_t max); + +/** + * Return the physical address of a virtual address obtained through + * rte_malloc + * + * @param addr + * Adress obtained from a previous rte_malloc call + * @return + * NULL on error + * otherwise return physical address of the buffer + */ +phys_addr_t +rte_malloc_virt2phy(const void *addr); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_MALLOC_H_ */ diff --git a/lib/librte_malloc/Makefile b/lib/librte_malloc/Makefile deleted file mode 100644 index ba87e34..0000000 --- a/lib/librte_malloc/Makefile +++ /dev/null @@ -1,48 +0,0 @@ -# BSD LICENSE -# -# Copyright(c) 2010-2014 Intel Corporation. All rights reserved. -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# * Neither the name of Intel Corporation nor the names of its -# contributors may be used to endorse or promote products derived -# from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -include $(RTE_SDK)/mk/rte.vars.mk - -# library name -LIB = librte_malloc.a - -CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 - -# all source are stored in SRCS-y -SRCS-$(CONFIG_RTE_LIBRTE_MALLOC) := rte_malloc.c malloc_elem.c malloc_heap.c - -# install includes -SYMLINK-$(CONFIG_RTE_LIBRTE_MALLOC)-include := rte_malloc.h - -# this lib needs eal -DEPDIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += lib/librte_eal - -include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_malloc/malloc_elem.c b/lib/librte_malloc/malloc_elem.c deleted file mode 100644 index ef26e47..0000000 --- a/lib/librte_malloc/malloc_elem.c +++ /dev/null @@ -1,321 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "malloc_elem.h" -#include "malloc_heap.h" - -#define MIN_DATA_SIZE (RTE_CACHE_LINE_SIZE) - -/* - * initialise a general malloc_elem header structure - */ -void -malloc_elem_init(struct malloc_elem *elem, - struct malloc_heap *heap, const struct rte_memzone *mz, size_t size) -{ - elem->heap = heap; - elem->mz = mz; - elem->prev = NULL; - memset(&elem->free_list, 0, sizeof(elem->free_list)); - elem->state = ELEM_FREE; - elem->size = size; - elem->pad = 0; - set_header(elem); - set_trailer(elem); -} - -/* - * initialise a dummy malloc_elem header for the end-of-memzone marker - */ -void -malloc_elem_mkend(struct malloc_elem *elem, struct malloc_elem *prev) -{ - malloc_elem_init(elem, prev->heap, prev->mz, 0); - elem->prev = prev; - elem->state = ELEM_BUSY; /* mark busy so its never merged */ -} - -/* - * calculate the starting point of where data of the requested size - * and alignment would fit in the current element. If the data doesn't - * fit, return NULL. - */ -static void * -elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align) -{ - const uintptr_t end_pt = (uintptr_t)elem + - elem->size - MALLOC_ELEM_TRAILER_LEN; - const uintptr_t new_data_start = rte_align_floor_int((end_pt - size),align); - const uintptr_t new_elem_start = new_data_start - MALLOC_ELEM_HEADER_LEN; - - /* if the new start point is before the exist start, it won't fit */ - return (new_elem_start < (uintptr_t)elem) ? NULL : (void *)new_elem_start; -} - -/* - * use elem_start_pt to determine if we get meet the size and - * alignment request from the current element - */ -int -malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align) -{ - return elem_start_pt(elem, size, align) != NULL; -} - -/* - * split an existing element into two smaller elements at the given - * split_pt parameter. 
- */ -static void -split_elem(struct malloc_elem *elem, struct malloc_elem *split_pt) -{ - struct malloc_elem *next_elem = RTE_PTR_ADD(elem, elem->size); - const unsigned old_elem_size = (uintptr_t)split_pt - (uintptr_t)elem; - const unsigned new_elem_size = elem->size - old_elem_size; - - malloc_elem_init(split_pt, elem->heap, elem->mz, new_elem_size); - split_pt->prev = elem; - next_elem->prev = split_pt; - elem->size = old_elem_size; - set_trailer(elem); -} - -/* - * Given an element size, compute its freelist index. - * We free an element into the freelist containing similarly-sized elements. - * We try to allocate elements starting with the freelist containing - * similarly-sized elements, and if necessary, we search freelists - * containing larger elements. - * - * Example element size ranges for a heap with five free lists: - * heap->free_head[0] - (0 , 2^8] - * heap->free_head[1] - (2^8 , 2^10] - * heap->free_head[2] - (2^10 ,2^12] - * heap->free_head[3] - (2^12, 2^14] - * heap->free_head[4] - (2^14, MAX_SIZE] - */ -size_t -malloc_elem_free_list_index(size_t size) -{ -#define MALLOC_MINSIZE_LOG2 8 -#define MALLOC_LOG2_INCREMENT 2 - - size_t log2; - size_t index; - - if (size <= (1UL << MALLOC_MINSIZE_LOG2)) - return 0; - - /* Find next power of 2 >= size. */ - log2 = sizeof(size) * 8 - __builtin_clzl(size-1); - - /* Compute freelist index, based on log2(size). */ - index = (log2 - MALLOC_MINSIZE_LOG2 + MALLOC_LOG2_INCREMENT - 1) / - MALLOC_LOG2_INCREMENT; - - return (index <= RTE_HEAP_NUM_FREELISTS-1? - index: RTE_HEAP_NUM_FREELISTS-1); -} - -/* - * Add the specified element to its heap's free list. - */ -void -malloc_elem_free_list_insert(struct malloc_elem *elem) -{ - size_t idx = malloc_elem_free_list_index(elem->size - MALLOC_ELEM_HEADER_LEN); - - elem->state = ELEM_FREE; - LIST_INSERT_HEAD(&elem->heap->free_head[idx], elem, free_list); -} - -/* - * Remove the specified element from its heap's free list. - */ -static void -elem_free_list_remove(struct malloc_elem *elem) -{ - LIST_REMOVE(elem, free_list); -} - -/* - * reserve a block of data in an existing malloc_elem. If the malloc_elem - * is much larger than the data block requested, we split the element in two. - * This function is only called from malloc_heap_alloc so parameter checking - * is not done here, as it's done there previously. - */ -struct malloc_elem * -malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align) -{ - struct malloc_elem *new_elem = elem_start_pt(elem, size, align); - const unsigned old_elem_size = (uintptr_t)new_elem - (uintptr_t)elem; - - if (old_elem_size < MALLOC_ELEM_OVERHEAD + MIN_DATA_SIZE){ - /* don't split it, pad the element instead */ - elem->state = ELEM_BUSY; - elem->pad = old_elem_size; - - /* put a dummy header in padding, to point to real element header */ - if (elem->pad > 0){ /* pad will be at least 64-bytes, as everything - * is cache-line aligned */ - new_elem->pad = elem->pad; - new_elem->state = ELEM_PAD; - new_elem->size = elem->size - elem->pad; - set_header(new_elem); - } - /* remove element from free list */ - elem_free_list_remove(elem); - - return new_elem; - } - - /* we are going to split the element in two. The original element - * remains free, and the new element is the one allocated. - * Re-insert original element, in case its new size makes it - * belong on a different list. 
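
[Editor's note, illustration only] The bucketing in malloc_elem_free_list_index() above can be checked with a standalone copy of the index computation. This sketch uses the five-list layout from the example ranges in the comment; in the library the list count is RTE_HEAP_NUM_FREELISTS, and __builtin_clzl is the same GCC/Clang builtin the original uses.

    #include <stdio.h>
    #include <stddef.h>

    #define MINSIZE_LOG2    8
    #define LOG2_INCREMENT  2
    #define NUM_FREELISTS   5   /* matches the five-list example above */

    static size_t free_list_index(size_t size)
    {
            size_t log2, index;

            if (size <= (1UL << MINSIZE_LOG2))
                    return 0;

            /* log2 of the next power of two >= size */
            log2 = sizeof(size) * 8 - __builtin_clzl(size - 1);

            index = (log2 - MINSIZE_LOG2 + LOG2_INCREMENT - 1) /
                            LOG2_INCREMENT;
            return index <= NUM_FREELISTS - 1 ? index : NUM_FREELISTS - 1;
    }

    int main(void)
    {
            size_t sizes[] = { 64, 256, 257, 1024, 5000, 1 << 20 };
            for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                    printf("size %8zu -> free_head[%zu]\n",
                           sizes[i], free_list_index(sizes[i]));
            return 0;
    }

For example, 256 bytes maps to list 0, 257 and 1024 to list 1, 5000 to list 3, and anything above 2^14 is clamped to the last list.
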
- */ - elem_free_list_remove(elem); - split_elem(elem, new_elem); - new_elem->state = ELEM_BUSY; - malloc_elem_free_list_insert(elem); - - return new_elem; -} - -/* - * joing two struct malloc_elem together. elem1 and elem2 must - * be contiguous in memory. - */ -static inline void -join_elem(struct malloc_elem *elem1, struct malloc_elem *elem2) -{ - struct malloc_elem *next = RTE_PTR_ADD(elem2, elem2->size); - elem1->size += elem2->size; - next->prev = elem1; -} - -/* - * free a malloc_elem block by adding it to the free list. If the - * blocks either immediately before or immediately after newly freed block - * are also free, the blocks are merged together. - */ -int -malloc_elem_free(struct malloc_elem *elem) -{ - if (!malloc_elem_cookies_ok(elem) || elem->state != ELEM_BUSY) - return -1; - - rte_spinlock_lock(&(elem->heap->lock)); - struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size); - if (next->state == ELEM_FREE){ - /* remove from free list, join to this one */ - elem_free_list_remove(next); - join_elem(elem, next); - } - - /* check if previous element is free, if so join with it and return, - * need to re-insert in free list, as that element's size is changing - */ - if (elem->prev != NULL && elem->prev->state == ELEM_FREE) { - elem_free_list_remove(elem->prev); - join_elem(elem->prev, elem); - malloc_elem_free_list_insert(elem->prev); - } - /* otherwise add ourselves to the free list */ - else { - malloc_elem_free_list_insert(elem); - elem->pad = 0; - } - /* decrease heap's count of allocated elements */ - elem->heap->alloc_count--; - rte_spinlock_unlock(&(elem->heap->lock)); - - return 0; -} - -/* - * attempt to resize a malloc_elem by expanding into any free space - * immediately after it in memory. - */ -int -malloc_elem_resize(struct malloc_elem *elem, size_t size) -{ - const size_t new_size = size + MALLOC_ELEM_OVERHEAD; - /* if we request a smaller size, then always return ok */ - const size_t current_size = elem->size - elem->pad; - if (current_size >= new_size) - return 0; - - struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size); - rte_spinlock_lock(&elem->heap->lock); - if (next ->state != ELEM_FREE) - goto err_return; - if (current_size + next->size < new_size) - goto err_return; - - /* we now know the element fits, so remove from free list, - * join the two - */ - elem_free_list_remove(next); - join_elem(elem, next); - - if (elem->size - new_size >= MIN_DATA_SIZE + MALLOC_ELEM_OVERHEAD){ - /* now we have a big block together. Lets cut it down a bit, by splitting */ - struct malloc_elem *split_pt = RTE_PTR_ADD(elem, new_size); - split_pt = RTE_PTR_ALIGN_CEIL(split_pt, RTE_CACHE_LINE_SIZE); - split_elem(elem, split_pt); - malloc_elem_free_list_insert(split_pt); - } - rte_spinlock_unlock(&elem->heap->lock); - return 0; - -err_return: - rte_spinlock_unlock(&elem->heap->lock); - return -1; -} diff --git a/lib/librte_malloc/malloc_elem.h b/lib/librte_malloc/malloc_elem.h deleted file mode 100644 index 9790b1a..0000000 --- a/lib/librte_malloc/malloc_elem.h +++ /dev/null @@ -1,190 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. 
- * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef MALLOC_ELEM_H_ -#define MALLOC_ELEM_H_ - -#include - -/* dummy definition of struct so we can use pointers to it in malloc_elem struct */ -struct malloc_heap; - -enum elem_state { - ELEM_FREE = 0, - ELEM_BUSY, - ELEM_PAD /* element is a padding-only header */ -}; - -struct malloc_elem { - struct malloc_heap *heap; - struct malloc_elem *volatile prev; /* points to prev elem in memzone */ - LIST_ENTRY(malloc_elem) free_list; /* list of free elements in heap */ - const struct rte_memzone *mz; - volatile enum elem_state state; - uint32_t pad; - size_t size; -#ifdef RTE_LIBRTE_MALLOC_DEBUG - uint64_t header_cookie; /* Cookie marking start of data */ - /* trailer cookie at start + size */ -#endif -} __rte_cache_aligned; - -#ifndef RTE_LIBRTE_MALLOC_DEBUG -static const unsigned MALLOC_ELEM_TRAILER_LEN = 0; - -/* dummy function - just check if pointer is non-null */ -static inline int -malloc_elem_cookies_ok(const struct malloc_elem *elem){ return elem != NULL; } - -/* dummy function - no header if malloc_debug is not enabled */ -static inline void -set_header(struct malloc_elem *elem __rte_unused){ } - -/* dummy function - no trailer if malloc_debug is not enabled */ -static inline void -set_trailer(struct malloc_elem *elem __rte_unused){ } - - -#else -static const unsigned MALLOC_ELEM_TRAILER_LEN = RTE_CACHE_LINE_SIZE; - -#define MALLOC_HEADER_COOKIE 0xbadbadbadadd2e55ULL /**< Header cookie. 
*/ -#define MALLOC_TRAILER_COOKIE 0xadd2e55badbadbadULL /**< Trailer cookie.*/ - -/* define macros to make referencing the header and trailer cookies easier */ -#define MALLOC_ELEM_TRAILER(elem) (*((uint64_t*)RTE_PTR_ADD(elem, \ - elem->size - MALLOC_ELEM_TRAILER_LEN))) -#define MALLOC_ELEM_HEADER(elem) (elem->header_cookie) - -static inline void -set_header(struct malloc_elem *elem) -{ - if (elem != NULL) - MALLOC_ELEM_HEADER(elem) = MALLOC_HEADER_COOKIE; -} - -static inline void -set_trailer(struct malloc_elem *elem) -{ - if (elem != NULL) - MALLOC_ELEM_TRAILER(elem) = MALLOC_TRAILER_COOKIE; -} - -/* check that the header and trailer cookies are set correctly */ -static inline int -malloc_elem_cookies_ok(const struct malloc_elem *elem) -{ - return (elem != NULL && - MALLOC_ELEM_HEADER(elem) == MALLOC_HEADER_COOKIE && - MALLOC_ELEM_TRAILER(elem) == MALLOC_TRAILER_COOKIE); -} - -#endif - -static const unsigned MALLOC_ELEM_HEADER_LEN = sizeof(struct malloc_elem); -#define MALLOC_ELEM_OVERHEAD (MALLOC_ELEM_HEADER_LEN + MALLOC_ELEM_TRAILER_LEN) - -/* - * Given a pointer to the start of a memory block returned by malloc, get - * the actual malloc_elem header for that block. - */ -static inline struct malloc_elem * -malloc_elem_from_data(const void *data) -{ - if (data == NULL) - return NULL; - - struct malloc_elem *elem = RTE_PTR_SUB(data, MALLOC_ELEM_HEADER_LEN); - if (!malloc_elem_cookies_ok(elem)) - return NULL; - return elem->state != ELEM_PAD ? elem: RTE_PTR_SUB(elem, elem->pad); -} - -/* - * initialise a malloc_elem header - */ -void -malloc_elem_init(struct malloc_elem *elem, - struct malloc_heap *heap, - const struct rte_memzone *mz, - size_t size); - -/* - * initialise a dummy malloc_elem header for the end-of-memzone marker - */ -void -malloc_elem_mkend(struct malloc_elem *elem, - struct malloc_elem *prev_free); - -/* - * return true if the current malloc_elem can hold a block of data - * of the requested size and with the requested alignment - */ -int -malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align); - -/* - * reserve a block of data in an existing malloc_elem. If the malloc_elem - * is much larger than the data block requested, we split the element in two. - */ -struct malloc_elem * -malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align); - -/* - * free a malloc_elem block by adding it to the free list. If the - * blocks either immediately before or immediately after newly freed block - * are also free, the blocks are merged together. - */ -int -malloc_elem_free(struct malloc_elem *elem); - -/* - * attempt to resize a malloc_elem by expanding into any free space - * immediately after it in memory. - */ -int -malloc_elem_resize(struct malloc_elem *elem, size_t size); - -/* - * Given an element size, compute its freelist index. - */ -size_t -malloc_elem_free_list_index(size_t size); - -/* - * Add element to its heap's free list. - */ -void -malloc_elem_free_list_insert(struct malloc_elem *elem); - -#endif /* MALLOC_ELEM_H_ */ diff --git a/lib/librte_malloc/malloc_heap.c b/lib/librte_malloc/malloc_heap.c deleted file mode 100644 index 95fcfec..0000000 --- a/lib/librte_malloc/malloc_heap.c +++ /dev/null @@ -1,210 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. 
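
[Editor's note, illustration only] The debug cookies above can be demonstrated without DPDK at all: stamp a known 64-bit pattern before and after a buffer and re-check it later. This is only a sketch of the idea behind malloc_elem_cookies_ok()/rte_malloc_validate(); the real code stores the trailer at the end of the whole element and uses cache-line-sized header/trailer regions, and plain malloc() stands in for the hugepage heap here.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define HEADER_COOKIE  0xbadbadbadadd2e55ULL
    #define TRAILER_COOKIE 0xadd2e55badbadbadULL

    int main(void)
    {
            size_t data_len = 32;
            uint8_t *raw = malloc(sizeof(uint64_t) + data_len + sizeof(uint64_t));
            if (raw == NULL)
                    return 1;

            uint64_t *hdr = (uint64_t *)raw;
            uint8_t *data = raw + sizeof(uint64_t);
            uint64_t tc = TRAILER_COOKIE, trl;

            *hdr = HEADER_COOKIE;
            memcpy(data + data_len, &tc, sizeof(tc));

            /* simulate a one-byte overrun past the end of the user data */
            memset(data, 0xab, data_len + 1);

            memcpy(&trl, data + data_len, sizeof(trl));
            if (*hdr != HEADER_COOKIE || trl != TRAILER_COOKIE)
                    printf("corruption detected (trailer cookie clobbered)\n");

            free(raw);
            return 0;
    }
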
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "malloc_elem.h" -#include "malloc_heap.h" - -/* since the memzone size starts with a digit, it will appear unquoted in - * rte_config.h, so quote it so it can be passed to rte_str_to_size */ -#define MALLOC_MEMZONE_SIZE RTE_STR(RTE_MALLOC_MEMZONE_SIZE) - -/* - * returns the configuration setting for the memzone size as a size_t value - */ -static inline size_t -get_malloc_memzone_size(void) -{ - return rte_str_to_size(MALLOC_MEMZONE_SIZE); -} - -/* - * reserve an extra memory zone and make it available for use by a particular - * heap. This reserves the zone and sets a dummy malloc_elem header at the end - * to prevent overflow. The rest of the zone is added to free list as a single - * large free block - */ -static int -malloc_heap_add_memzone(struct malloc_heap *heap, size_t size, unsigned align) -{ - const unsigned mz_flags = 0; - const size_t block_size = get_malloc_memzone_size(); - /* ensure the data we want to allocate will fit in the memzone */ - const size_t min_size = size + align + MALLOC_ELEM_OVERHEAD * 2; - const struct rte_memzone *mz = NULL; - struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; - unsigned numa_socket = heap - mcfg->malloc_heaps; - - size_t mz_size = min_size; - if (mz_size < block_size) - mz_size = block_size; - - char mz_name[RTE_MEMZONE_NAMESIZE]; - snprintf(mz_name, sizeof(mz_name), "MALLOC_S%u_HEAP_%u", - numa_socket, heap->mz_count++); - - /* try getting a block. 
if we fail and we don't need as big a block - * as given in the config, we can shrink our request and try again - */ - do { - mz = rte_memzone_reserve(mz_name, mz_size, numa_socket, - mz_flags); - if (mz == NULL) - mz_size /= 2; - } while (mz == NULL && mz_size > min_size); - if (mz == NULL) - return -1; - - /* allocate the memory block headers, one at end, one at start */ - struct malloc_elem *start_elem = (struct malloc_elem *)mz->addr; - struct malloc_elem *end_elem = RTE_PTR_ADD(mz->addr, - mz_size - MALLOC_ELEM_OVERHEAD); - end_elem = RTE_PTR_ALIGN_FLOOR(end_elem, RTE_CACHE_LINE_SIZE); - - const unsigned elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem; - malloc_elem_init(start_elem, heap, mz, elem_size); - malloc_elem_mkend(end_elem, start_elem); - malloc_elem_free_list_insert(start_elem); - - /* increase heap total size by size of new memzone */ - heap->total_size+=mz_size - MALLOC_ELEM_OVERHEAD; - return 0; -} - -/* - * Iterates through the freelist for a heap to find a free element - * which can store data of the required size and with the requested alignment. - * Returns null on failure, or pointer to element on success. - */ -static struct malloc_elem * -find_suitable_element(struct malloc_heap *heap, size_t size, unsigned align) -{ - size_t idx; - struct malloc_elem *elem; - - for (idx = malloc_elem_free_list_index(size); - idx < RTE_HEAP_NUM_FREELISTS; idx++) - { - for (elem = LIST_FIRST(&heap->free_head[idx]); - !!elem; elem = LIST_NEXT(elem, free_list)) - { - if (malloc_elem_can_hold(elem, size, align)) - return elem; - } - } - return NULL; -} - -/* - * Main function called by malloc to allocate a block of memory from the - * heap. It locks the free list, scans it, and adds a new memzone if the - * scan fails. Once the new memzone is added, it re-scans and should return - * the new element after releasing the lock. - */ -void * -malloc_heap_alloc(struct malloc_heap *heap, - const char *type __attribute__((unused)), size_t size, unsigned align) -{ - size = RTE_CACHE_LINE_ROUNDUP(size); - align = RTE_CACHE_LINE_ROUNDUP(align); - rte_spinlock_lock(&heap->lock); - struct malloc_elem *elem = find_suitable_element(heap, size, align); - if (elem == NULL){ - if ((malloc_heap_add_memzone(heap, size, align)) == 0) - elem = find_suitable_element(heap, size, align); - } - - if (elem != NULL){ - elem = malloc_elem_alloc(elem, size, align); - /* increase heap's count of allocated elements */ - heap->alloc_count++; - } - rte_spinlock_unlock(&heap->lock); - return elem == NULL ? 
NULL : (void *)(&elem[1]); - -} - -/* - * Function to retrieve data for heap on given socket - */ -int -malloc_heap_get_stats(const struct malloc_heap *heap, - struct rte_malloc_socket_stats *socket_stats) -{ - size_t idx; - struct malloc_elem *elem; - - /* Initialise variables for heap */ - socket_stats->free_count = 0; - socket_stats->heap_freesz_bytes = 0; - socket_stats->greatest_free_size = 0; - - /* Iterate through free list */ - for (idx = 0; idx < RTE_HEAP_NUM_FREELISTS; idx++) { - for (elem = LIST_FIRST(&heap->free_head[idx]); - !!elem; elem = LIST_NEXT(elem, free_list)) - { - socket_stats->free_count++; - socket_stats->heap_freesz_bytes += elem->size; - if (elem->size > socket_stats->greatest_free_size) - socket_stats->greatest_free_size = elem->size; - } - } - /* Get stats on overall heap and allocated memory on this heap */ - socket_stats->heap_totalsz_bytes = heap->total_size; - socket_stats->heap_allocsz_bytes = (socket_stats->heap_totalsz_bytes - - socket_stats->heap_freesz_bytes); - socket_stats->alloc_count = heap->alloc_count; - return 0; -} - diff --git a/lib/librte_malloc/malloc_heap.h b/lib/librte_malloc/malloc_heap.h deleted file mode 100644 index b4aec45..0000000 --- a/lib/librte_malloc/malloc_heap.h +++ /dev/null @@ -1,65 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -#ifndef MALLOC_HEAP_H_ -#define MALLOC_HEAP_H_ - -#include -#include - -#ifdef __cplusplus -extern "C" { -#endif - -static inline unsigned -malloc_get_numa_socket(void) -{ - return rte_socket_id(); -} - -void * -malloc_heap_alloc(struct malloc_heap *heap, const char *type, - size_t size, unsigned align); - -int -malloc_heap_get_stats(const struct malloc_heap *heap, - struct rte_malloc_socket_stats *socket_stats); - -int -rte_eal_heap_memzone_init(void); - -#ifdef __cplusplus -} -#endif - -#endif /* MALLOC_HEAP_H_ */ diff --git a/lib/librte_malloc/rte_malloc.c b/lib/librte_malloc/rte_malloc.c deleted file mode 100644 index b966fc7..0000000 --- a/lib/librte_malloc/rte_malloc.c +++ /dev/null @@ -1,261 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include "malloc_elem.h" -#include "malloc_heap.h" - - -/* Free the memory space back to heap */ -void rte_free(void *addr) -{ - if (addr == NULL) return; - if (malloc_elem_free(malloc_elem_from_data(addr)) < 0) - rte_panic("Fatal error: Invalid memory\n"); -} - -/* - * Allocate memory on specified heap. - */ -void * -rte_malloc_socket(const char *type, size_t size, unsigned align, int socket_arg) -{ - struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; - int socket, i; - void *ret; - - /* return NULL if size is 0 or alignment is not power-of-2 */ - if (size == 0 || !rte_is_power_of_2(align)) - return NULL; - - if (socket_arg == SOCKET_ID_ANY) - socket = malloc_get_numa_socket(); - else - socket = socket_arg; - - /* Check socket parameter */ - if (socket >= RTE_MAX_NUMA_NODES) - return NULL; - - ret = malloc_heap_alloc(&mcfg->malloc_heaps[socket], type, - size, align == 0 ? 
1 : align); - if (ret != NULL || socket_arg != SOCKET_ID_ANY) - return ret; - - /* try other heaps */ - for (i = 0; i < RTE_MAX_NUMA_NODES; i++) { - /* we already tried this one */ - if (i == socket) - continue; - - ret = malloc_heap_alloc(&mcfg->malloc_heaps[i], type, - size, align == 0 ? 1 : align); - if (ret != NULL) - return ret; - } - - return NULL; -} - -/* - * Allocate memory on default heap. - */ -void * -rte_malloc(const char *type, size_t size, unsigned align) -{ - return rte_malloc_socket(type, size, align, SOCKET_ID_ANY); -} - -/* - * Allocate zero'd memory on specified heap. - */ -void * -rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket) -{ - void *ptr = rte_malloc_socket(type, size, align, socket); - - if (ptr != NULL) - memset(ptr, 0, size); - return ptr; -} - -/* - * Allocate zero'd memory on default heap. - */ -void * -rte_zmalloc(const char *type, size_t size, unsigned align) -{ - return rte_zmalloc_socket(type, size, align, SOCKET_ID_ANY); -} - -/* - * Allocate zero'd memory on specified heap. - */ -void * -rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket) -{ - return rte_zmalloc_socket(type, num * size, align, socket); -} - -/* - * Allocate zero'd memory on default heap. - */ -void * -rte_calloc(const char *type, size_t num, size_t size, unsigned align) -{ - return rte_zmalloc(type, num * size, align); -} - -/* - * Resize allocated memory. - */ -void * -rte_realloc(void *ptr, size_t size, unsigned align) -{ - if (ptr == NULL) - return rte_malloc(NULL, size, align); - - struct malloc_elem *elem = malloc_elem_from_data(ptr); - if (elem == NULL) - rte_panic("Fatal error: memory corruption detected\n"); - - size = RTE_CACHE_LINE_ROUNDUP(size), align = RTE_CACHE_LINE_ROUNDUP(align); - /* check alignment matches first, and if ok, see if we can resize block */ - if (RTE_PTR_ALIGN(ptr,align) == ptr && - malloc_elem_resize(elem, size) == 0) - return ptr; - - /* either alignment is off, or we have no room to expand, - * so move data. */ - void *new_ptr = rte_malloc(NULL, size, align); - if (new_ptr == NULL) - return NULL; - const unsigned old_size = elem->size - MALLOC_ELEM_OVERHEAD; - rte_memcpy(new_ptr, ptr, old_size < size ? old_size : size); - rte_free(ptr); - - return new_ptr; -} - -int -rte_malloc_validate(const void *ptr, size_t *size) -{ - const struct malloc_elem *elem = malloc_elem_from_data(ptr); - if (!malloc_elem_cookies_ok(elem)) - return -1; - if (size != NULL) - *size = elem->size - elem->pad - MALLOC_ELEM_OVERHEAD; - return 0; -} - -/* - * Function to retrieve data for heap on given socket - */ -int -rte_malloc_get_socket_stats(int socket, - struct rte_malloc_socket_stats *socket_stats) -{ - struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; - - if (socket >= RTE_MAX_NUMA_NODES || socket < 0) - return -1; - - return malloc_heap_get_stats(&mcfg->malloc_heaps[socket], socket_stats); -} - -/* - * Print stats on memory type. 
If type is NULL, info on all types is printed - */ -void -rte_malloc_dump_stats(FILE *f, __rte_unused const char *type) -{ - unsigned int socket; - struct rte_malloc_socket_stats sock_stats; - /* Iterate through all initialised heaps */ - for (socket=0; socket< RTE_MAX_NUMA_NODES; socket++) { - if ((rte_malloc_get_socket_stats(socket, &sock_stats) < 0)) - continue; - - fprintf(f, "Socket:%u\n", socket); - fprintf(f, "\tHeap_size:%zu,\n", sock_stats.heap_totalsz_bytes); - fprintf(f, "\tFree_size:%zu,\n", sock_stats.heap_freesz_bytes); - fprintf(f, "\tAlloc_size:%zu,\n", sock_stats.heap_allocsz_bytes); - fprintf(f, "\tGreatest_free_size:%zu,\n", - sock_stats.greatest_free_size); - fprintf(f, "\tAlloc_count:%u,\n",sock_stats.alloc_count); - fprintf(f, "\tFree_count:%u,\n", sock_stats.free_count); - } - return; -} - -/* - * TODO: Set limit to memory that can be allocated to memory type - */ -int -rte_malloc_set_limit(__rte_unused const char *type, - __rte_unused size_t max) -{ - return 0; -} - -/* - * Return the physical address of a virtual address obtained through rte_malloc - */ -phys_addr_t -rte_malloc_virt2phy(const void *addr) -{ - const struct malloc_elem *elem = malloc_elem_from_data(addr); - if (elem == NULL) - return 0; - return elem->mz->phys_addr + ((uintptr_t)addr - (uintptr_t)elem->mz->addr); -} diff --git a/lib/librte_malloc/rte_malloc.h b/lib/librte_malloc/rte_malloc.h deleted file mode 100644 index 74bb78c..0000000 --- a/lib/librte_malloc/rte_malloc.h +++ /dev/null @@ -1,342 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef _RTE_MALLOC_H_ -#define _RTE_MALLOC_H_ - -/** - * @file - * RTE Malloc. This library provides methods for dynamically allocating memory - * from hugepages. - */ - -#include -#include -#include - -#ifdef __cplusplus -extern "C" { -#endif - -/** - * Structure to hold heap statistics obtained from rte_malloc_get_socket_stats function. 
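
[Editor's note, illustration only] Applications can query the same per-socket figures that rte_malloc_dump_stats() prints by calling rte_malloc_get_socket_stats() directly. A minimal sketch, assuming a DPDK process in which the EAL has already been initialised (header names as in the DPDK tree of this era):

    #include <stdio.h>
    #include <rte_malloc.h>
    #include <rte_lcore.h>

    static void
    print_local_heap_usage(void)
    {
            struct rte_malloc_socket_stats st;
            unsigned s = rte_socket_id();   /* heap of the caller's NUMA socket */

            if (rte_malloc_get_socket_stats((int)s, &st) < 0) {
                    printf("could not read malloc stats for socket %u\n", s);
                    return;
            }
            printf("socket %u: %zu bytes total, %zu free "
                   "(largest free block %zu), %u allocations\n",
                   s, st.heap_totalsz_bytes, st.heap_freesz_bytes,
                   st.greatest_free_size, st.alloc_count);
    }
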
- */ -struct rte_malloc_socket_stats { - size_t heap_totalsz_bytes; /**< Total bytes on heap */ - size_t heap_freesz_bytes; /**< Total free bytes on heap */ - size_t greatest_free_size; /**< Size in bytes of largest free block */ - unsigned free_count; /**< Number of free elements on heap */ - unsigned alloc_count; /**< Number of allocated elements on heap */ - size_t heap_allocsz_bytes; /**< Total allocated bytes on heap */ -}; - -/** - * This function allocates memory from the huge-page area of memory. The memory - * is not cleared. In NUMA systems, the memory allocated resides on the same - * NUMA socket as the core that calls this function. - * - * @param type - * A string identifying the type of allocated objects (useful for debug - * purposes, such as identifying the cause of a memory leak). Can be NULL. - * @param size - * Size (in bytes) to be allocated. - * @param align - * If 0, the return is a pointer that is suitably aligned for any kind of - * variable (in the same manner as malloc()). - * Otherwise, the return is a pointer that is a multiple of *align*. In - * this case, it must be a power of two. (Minimum alignment is the - * cacheline size, i.e. 64-bytes) - * @return - * - NULL on error. Not enough memory, or invalid arguments (size is 0, - * align is not a power of two). - * - Otherwise, the pointer to the allocated object. - */ -void * -rte_malloc(const char *type, size_t size, unsigned align); - -/** - * Allocate zero'ed memory from the heap. - * - * Equivalent to rte_malloc() except that the memory zone is - * initialised with zeros. In NUMA systems, the memory allocated resides on the - * same NUMA socket as the core that calls this function. - * - * @param type - * A string identifying the type of allocated objects (useful for debug - * purposes, such as identifying the cause of a memory leak). Can be NULL. - * @param size - * Size (in bytes) to be allocated. - * @param align - * If 0, the return is a pointer that is suitably aligned for any kind of - * variable (in the same manner as malloc()). - * Otherwise, the return is a pointer that is a multiple of *align*. In - * this case, it must obviously be a power of two. (Minimum alignment is the - * cacheline size, i.e. 64-bytes) - * @return - * - NULL on error. Not enough memory, or invalid arguments (size is 0, - * align is not a power of two). - * - Otherwise, the pointer to the allocated object. - */ -void * -rte_zmalloc(const char *type, size_t size, unsigned align); - -/** - * Replacement function for calloc(), using huge-page memory. Memory area is - * initialised with zeros. In NUMA systems, the memory allocated resides on the - * same NUMA socket as the core that calls this function. - * - * @param type - * A string identifying the type of allocated objects (useful for debug - * purposes, such as identifying the cause of a memory leak). Can be NULL. - * @param num - * Number of elements to be allocated. - * @param size - * Size (in bytes) of a single element. - * @param align - * If 0, the return is a pointer that is suitably aligned for any kind of - * variable (in the same manner as malloc()). - * Otherwise, the return is a pointer that is a multiple of *align*. In - * this case, it must obviously be a power of two. (Minimum alignment is the - * cacheline size, i.e. 64-bytes) - * @return - * - NULL on error. Not enough memory, or invalid arguments (size is 0, - * align is not a power of two). - * - Otherwise, the pointer to the allocated object. 
- */ -void * -rte_calloc(const char *type, size_t num, size_t size, unsigned align); - -/** - * Replacement function for realloc(), using huge-page memory. Reserved area - * memory is resized, preserving contents. In NUMA systems, the new area - * resides on the same NUMA socket as the old area. - * - * @param ptr - * Pointer to already allocated memory - * @param size - * Size (in bytes) of new area. If this is 0, memory is freed. - * @param align - * If 0, the return is a pointer that is suitably aligned for any kind of - * variable (in the same manner as malloc()). - * Otherwise, the return is a pointer that is a multiple of *align*. In - * this case, it must obviously be a power of two. (Minimum alignment is the - * cacheline size, i.e. 64-bytes) - * @return - * - NULL on error. Not enough memory, or invalid arguments (size is 0, - * align is not a power of two). - * - Otherwise, the pointer to the reallocated memory. - */ -void * -rte_realloc(void *ptr, size_t size, unsigned align); - -/** - * This function allocates memory from the huge-page area of memory. The memory - * is not cleared. - * - * @param type - * A string identifying the type of allocated objects (useful for debug - * purposes, such as identifying the cause of a memory leak). Can be NULL. - * @param size - * Size (in bytes) to be allocated. - * @param align - * If 0, the return is a pointer that is suitably aligned for any kind of - * variable (in the same manner as malloc()). - * Otherwise, the return is a pointer that is a multiple of *align*. In - * this case, it must be a power of two. (Minimum alignment is the - * cacheline size, i.e. 64-bytes) - * @param socket - * NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function - * will behave the same as rte_malloc(). - * @return - * - NULL on error. Not enough memory, or invalid arguments (size is 0, - * align is not a power of two). - * - Otherwise, the pointer to the allocated object. - */ -void * -rte_malloc_socket(const char *type, size_t size, unsigned align, int socket); - -/** - * Allocate zero'ed memory from the heap. - * - * Equivalent to rte_malloc() except that the memory zone is - * initialised with zeros. - * - * @param type - * A string identifying the type of allocated objects (useful for debug - * purposes, such as identifying the cause of a memory leak). Can be NULL. - * @param size - * Size (in bytes) to be allocated. - * @param align - * If 0, the return is a pointer that is suitably aligned for any kind of - * variable (in the same manner as malloc()). - * Otherwise, the return is a pointer that is a multiple of *align*. In - * this case, it must obviously be a power of two. (Minimum alignment is the - * cacheline size, i.e. 64-bytes) - * @param socket - * NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function - * will behave the same as rte_zmalloc(). - * @return - * - NULL on error. Not enough memory, or invalid arguments (size is 0, - * align is not a power of two). - * - Otherwise, the pointer to the allocated object. - */ -void * -rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket); - -/** - * Replacement function for calloc(), using huge-page memory. Memory area is - * initialised with zeros. - * - * @param type - * A string identifying the type of allocated objects (useful for debug - * purposes, such as identifying the cause of a memory leak). Can be NULL. - * @param num - * Number of elements to be allocated. - * @param size - * Size (in bytes) of a single element. 
- * @param align - * If 0, the return is a pointer that is suitably aligned for any kind of - * variable (in the same manner as malloc()). - * Otherwise, the return is a pointer that is a multiple of *align*. In - * this case, it must obviously be a power of two. (Minimum alignment is the - * cacheline size, i.e. 64-bytes) - * @param socket - * NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function - * will behave the same as rte_calloc(). - * @return - * - NULL on error. Not enough memory, or invalid arguments (size is 0, - * align is not a power of two). - * - Otherwise, the pointer to the allocated object. - */ -void * -rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket); - -/** - * Frees the memory space pointed to by the provided pointer. - * - * This pointer must have been returned by a previous call to - * rte_malloc(), rte_zmalloc(), rte_calloc() or rte_realloc(). The behaviour of - * rte_free() is undefined if the pointer does not match this requirement. - * - * If the pointer is NULL, the function does nothing. - * - * @param ptr - * The pointer to memory to be freed. - */ -void -rte_free(void *ptr); - -/** - * If malloc debug is enabled, check a memory block for header - * and trailer markers to indicate that all is well with the block. - * If size is non-null, also return the size of the block. - * - * @param ptr - * pointer to the start of a data block, must have been returned - * by a previous call to rte_malloc(), rte_zmalloc(), rte_calloc() - * or rte_realloc() - * @param size - * if non-null, and memory block pointer is valid, returns the size - * of the memory block - * @return - * -1 on error, invalid pointer passed or header and trailer markers - * are missing or corrupted - * 0 on success - */ -int -rte_malloc_validate(const void *ptr, size_t *size); - -/** - * Get heap statistics for the specified heap. - * - * @param socket - * An unsigned integer specifying the socket to get heap statistics for - * @param socket_stats - * A structure which provides memory to store statistics - * @return - * Null on error - * Pointer to structure storing statistics on success - */ -int -rte_malloc_get_socket_stats(int socket, - struct rte_malloc_socket_stats *socket_stats); - -/** - * Dump statistics. - * - * Dump for the specified type to the console. If the type argument is - * NULL, all memory types will be dumped. - * - * @param f - * A pointer to a file for output - * @param type - * A string identifying the type of objects to dump, or NULL - * to dump all objects. - */ -void -rte_malloc_dump_stats(FILE *f, const char *type); - -/** - * Set the maximum amount of allocated memory for this type. - * - * This is not yet implemented - * - * @param type - * A string identifying the type of allocated objects. - * @param max - * The maximum amount of allocated bytes for this type. - * @return - * - 0: Success. - * - (-1): Error. - */ -int -rte_malloc_set_limit(const char *type, size_t max); - -/** - * Return the physical address of a virtual address obtained through - * rte_malloc - * - * @param addr - * Adress obtained from a previous rte_malloc call - * @return - * NULL on error - * otherwise return physical address of the buffer - */ -phys_addr_t -rte_malloc_virt2phy(const void *addr); - -#ifdef __cplusplus -} -#endif - -#endif /* _RTE_MALLOC_H_ */
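
[Editor's note, illustration only] For completeness, a minimal usage sketch of the API declared in this header, again assuming an application where rte_eal_init() has already run; the buffer name and size are arbitrary and error handling is trimmed for brevity:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_malloc.h>
    #include <rte_memory.h>

    static int
    alloc_example(void)
    {
            /* 1 KB, zeroed, cache-line aligned, from the caller's socket */
            void *buf = rte_zmalloc("example_buf", 1024, RTE_CACHE_LINE_SIZE);
            if (buf == NULL)
                    return -1;

            /* hugepage-backed memory keeps a stable physical address,
             * which is what makes it usable for DMA descriptors etc. */
            phys_addr_t pa = rte_malloc_virt2phy(buf);
            printf("virt %p -> phys 0x%" PRIx64 "\n", buf, (uint64_t)pa);

            rte_free(buf);
            return 0;
    }
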