From patchwork Wed Jul 11 10:59:09 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 42828
X-Patchwork-Delegate: thomas@monjalon.net
From: Andrew Rybchenko
CC: Olivier Matz
Date: Wed, 11 Jul 2018 11:59:09 +0100
Message-ID: <1531306750-9844-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 1/2] mempool: remove deprecated functions

Functions rte_mempool_populate_phys(), rte_mempool_virt2phy() and
rte_mempool_populate_phys_tab() are just wrappers for the corresponding
IOVA functions and were deprecated in v17.11.

Functions rte_mempool_xmem_create(), rte_mempool_xmem_size(),
rte_mempool_xmem_usage() and rte_mempool_populate_iova_tab() were
deprecated in v18.05; their removal was announced earlier, in v18.02.
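For code still using the v17.11 wrappers, rte_mempool_populate_iova()
and rte_mempool_virt2iova() are drop-in replacements. A minimal
migration sketch (illustrative only; the helper names below are not
part of DPDK):

    #include <rte_mempool.h>

    /* was: rte_mempool_populate_phys(mp, vaddr, paddr, len, free_cb, opaque) */
    static int
    populate_chunk(struct rte_mempool *mp, char *vaddr, rte_iova_t iova,
                   size_t len)
    {
            /* identical semantics under the IOVA-based name */
            return rte_mempool_populate_iova(mp, vaddr, iova, len, NULL, NULL);
    }

    /* was: rte_mempool_virt2phy(mp, elt); the mp argument was already unused */
    static rte_iova_t
    elt_iova(const void *elt)
    {
            return rte_mempool_virt2iova(elt);
    }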
Signed-off-by: Andrew Rybchenko
---
 lib/librte_mempool/Makefile                |   3 -
 lib/librte_mempool/meson.build             |   4 -
 lib/librte_mempool/rte_mempool.c           | 181 +--------------------
 lib/librte_mempool/rte_mempool.h           | 179 --------------------
 lib/librte_mempool/rte_mempool_version.map |   6 -
 5 files changed, 1 insertion(+), 372 deletions(-)

diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
index e3c32b14f..1d4c21c0e 100644
--- a/lib/librte_mempool/Makefile
+++ b/lib/librte_mempool/Makefile
@@ -7,9 +7,6 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_mempool.a
 
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
-# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
-# from earlier deprecated rte_mempool_populate_phys_tab()
-CFLAGS += -Wno-deprecated-declarations
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 LDLIBS += -lrte_eal -lrte_ring
diff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build
index d507e5511..5d4ab4169 100644
--- a/lib/librte_mempool/meson.build
+++ b/lib/librte_mempool/meson.build
@@ -5,10 +5,6 @@ allow_experimental_apis = true
 extra_flags = []
 
-# Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()
-# from earlier deprecated rte_mempool_populate_phys_tab()
-extra_flags += '-Wno-deprecated-declarations'
-
 foreach flag: extra_flags
 	if cc.has_argument(flag)
 		cflags += flag
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 8c8b9f809..d48e53c7e 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -227,9 +227,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 
 /*
- * Internal function to calculate required memory chunk size shared
- * by default implementation of the corresponding callback and
- * deprecated external function.
+ * Internal function to calculate required memory chunk size.
  */
 size_t
 rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
@@ -252,66 +250,6 @@ rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
 	return pg_num << pg_shift;
 }
 
-/*
- * Calculate maximum amount of memory required to store given number of objects.
- */
-size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift,
-		      __rte_unused unsigned int flags)
-{
-	return rte_mempool_calc_mem_size_helper(elt_num, total_elt_sz,
-						pg_shift);
-}
-
-/*
- * Calculate how much memory would be actually required with the
- * given memory footprint to store required number of elements.
- */
-ssize_t
-rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
-	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, __rte_unused unsigned int flags)
-{
-	uint32_t elt_cnt = 0;
-	rte_iova_t start, end;
-	uint32_t iova_idx;
-	size_t pg_sz = (size_t)1 << pg_shift;
-
-	/* if iova is NULL, assume contiguous memory */
-	if (iova == NULL) {
-		start = 0;
-		end = pg_sz * pg_num;
-		iova_idx = pg_num;
-	} else {
-		start = iova[0];
-		end = iova[0] + pg_sz;
-		iova_idx = 1;
-	}
-	while (elt_cnt < elt_num) {
-
-		if (end - start >= total_elt_sz) {
-			/* enough contiguous memory, add an object */
-			start += total_elt_sz;
-			elt_cnt++;
-		} else if (iova_idx < pg_num) {
-			/* no room to store one obj, add a page */
-			if (end == iova[iova_idx]) {
-				end += pg_sz;
-			} else {
-				start = iova[iova_idx];
-				end = iova[iova_idx] + pg_sz;
-			}
-			iova_idx++;
-
-		} else {
-			/* no more page, return how many elements fit */
-			return -(size_t)elt_cnt;
-		}
-	}
-
-	return (size_t)iova_idx << pg_shift;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -423,63 +361,6 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	return ret;
 }
 
-int
-rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
-	phys_addr_t paddr, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque)
-{
-	return rte_mempool_populate_iova(mp, vaddr, paddr, len, free_cb, opaque);
-}
-
-/* Add objects in the pool, using a table of physical pages. Return the
- * number of objects added, or a negative value on error.
- */
-int
-rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
-	const rte_iova_t iova[], uint32_t pg_num, uint32_t pg_shift,
-	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque)
-{
-	uint32_t i, n;
-	int ret, cnt = 0;
-	size_t pg_sz = (size_t)1 << pg_shift;
-
-	/* mempool must not be populated */
-	if (mp->nb_mem_chunks != 0)
-		return -EEXIST;
-
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
-		return rte_mempool_populate_iova(mp, vaddr, RTE_BAD_IOVA,
-			pg_num * pg_sz, free_cb, opaque);
-
-	for (i = 0; i < pg_num && mp->populated_size < mp->size; i += n) {
-
-		/* populate with the largest group of contiguous pages */
-		for (n = 1; (i + n) < pg_num &&
-			     iova[i + n - 1] + pg_sz == iova[i + n]; n++)
-			;
-
-		ret = rte_mempool_populate_iova(mp, vaddr + i * pg_sz,
-			iova[i], n * pg_sz, free_cb, opaque);
-		if (ret < 0) {
-			rte_mempool_free_memchunks(mp);
-			return ret;
-		}
-		/* no need to call the free callback for next chunks */
-		free_cb = NULL;
-		cnt += ret;
-	}
-	return cnt;
-}
-
-int
-rte_mempool_populate_phys_tab(struct rte_mempool *mp, char *vaddr,
-	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
-	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque)
-{
-	return rte_mempool_populate_iova_tab(mp, vaddr, paddr, pg_num, pg_shift,
-			free_cb, opaque);
-}
-
 /* Populate the mempool with a virtual area. Return the number of
  * objects added, or a negative value on error.
  */
@@ -1065,66 +946,6 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	return NULL;
 }
 
-/*
- * Create the mempool over already allocated chunk of memory.
- * That external memory buffer can consists of physically disjoint pages.
- * Setting vaddr to NULL, makes mempool to fallback to rte_mempool_create()
- * behavior.
- */
-struct rte_mempool *
-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
-	unsigned cache_size, unsigned private_data_size,
-	rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-	rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
-	int socket_id, unsigned flags, void *vaddr,
-	const rte_iova_t iova[], uint32_t pg_num, uint32_t pg_shift)
-{
-	struct rte_mempool *mp = NULL;
-	int ret;
-
-	/* no virtual address supplied, use rte_mempool_create() */
-	if (vaddr == NULL)
-		return rte_mempool_create(name, n, elt_size, cache_size,
-			private_data_size, mp_init, mp_init_arg,
-			obj_init, obj_init_arg, socket_id, flags);
-
-	/* check that we have both VA and PA */
-	if (iova == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* Check that pg_shift parameter is valid. */
-	if (pg_shift > MEMPOOL_PG_SHIFT_MAX) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
-		private_data_size, socket_id, flags);
-	if (mp == NULL)
-		return NULL;
-
-	/* call the mempool priv initializer */
-	if (mp_init)
-		mp_init(mp, mp_init_arg);
-
-	ret = rte_mempool_populate_iova_tab(mp, vaddr, iova, pg_num, pg_shift,
-		NULL, NULL);
-	if (ret < 0 || ret != (int)mp->size)
-		goto fail;
-
-	/* call the object initializers */
-	if (obj_init)
-		rte_mempool_obj_iter(mp, obj_init, obj_init_arg);
-
-	return mp;
-
- fail:
-	rte_mempool_free(mp);
-	return NULL;
-}
-
 /* Return the number of entries in the mempool */
 unsigned int
 rte_mempool_avail_count(const struct rte_mempool *mp)
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 1f59553b3..5d1602555 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -973,74 +973,6 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 		   rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
 		   int socket_id, unsigned flags);
 
-/**
- * @deprecated
- * Create a new mempool named *name* in memory.
- *
- * The pool contains n elements of elt_size. Its size is set to n.
- * This function uses ``memzone_reserve()`` to allocate the mempool header
- * (and the objects if vaddr is NULL).
- * Depending on the input parameters, mempool elements can be either allocated
- * together with the mempool header, or an externally provided memory buffer
- * could be used to store mempool objects. In later case, that external
- * memory buffer can consist of set of disjoint physical pages.
- *
- * @param name
- *   The name of the mempool.
- * @param n
- *   The number of elements in the mempool. The optimum size (in terms of
- *   memory usage) for a mempool is when n is a power of two minus one:
- *   n = (2^q - 1).
- * @param elt_size
- *   The size of each element.
- * @param cache_size
- *   Size of the cache. See rte_mempool_create() for details.
- * @param private_data_size
- *   The size of the private data appended after the mempool
- *   structure. This is useful for storing some private data after the
- *   mempool structure, as is done for rte_mbuf_pool for example.
- * @param mp_init
- *   A function pointer that is called for initialization of the pool,
- *   before object initialization. The user can initialize the private
- *   data in this function if needed. This parameter can be NULL if
- *   not needed.
- * @param mp_init_arg
- *   An opaque pointer to data that can be used in the mempool
- *   constructor function.
- * @param obj_init
- *   A function called for each object at initialization of the pool.
- *   See rte_mempool_create() for details.
- * @param obj_init_arg
- *   An opaque pointer passed to the object constructor function.
- * @param socket_id
- *   The *socket_id* argument is the socket identifier in the case of
- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
- *   constraint for the reserved zone.
- * @param flags
- *   Flags controlling the behavior of the mempool. See
- *   rte_mempool_create() for details.
- * @param vaddr
- *   Virtual address of the externally allocated memory buffer.
- *   Will be used to store mempool objects.
- * @param iova
- *   Array of IO addresses of the pages that comprises given memory buffer.
- * @param pg_num
- *   Number of elements in the iova array.
- * @param pg_shift
- *   LOG2 of the physical pages size.
- * @return
- *   The pointer to the new allocated mempool, on success. NULL on error
- *   with rte_errno set appropriately. See rte_mempool_create() for details.
- */
-__rte_deprecated
-struct rte_mempool *
-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
-		unsigned cache_size, unsigned private_data_size,
-		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-		rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
-		int socket_id, unsigned flags, void *vaddr,
-		const rte_iova_t iova[], uint32_t pg_num, uint32_t pg_shift);
-
 /**
  * Create an empty mempool
  *
@@ -1123,48 +1055,6 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	rte_iova_t iova, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
 	void *opaque);
 
-__rte_deprecated
-int rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
-	phys_addr_t paddr, size_t len, rte_mempool_memchunk_free_cb_t *free_cb,
-	void *opaque);
-
-/**
- * @deprecated
- * Add physical memory for objects in the pool at init
- *
- * Add a virtually contiguous memory chunk in the pool where objects can
- * be instantiated. The IO addresses corresponding to the virtual
- * area are described in iova[], pg_num, pg_shift.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param vaddr
- *   The virtual address of memory that should be used to store objects.
- * @param iova
- *   An array of IO addresses of each page composing the virtual area.
- * @param pg_num
- *   Number of elements in the iova array.
- * @param pg_shift
- *   LOG2 of the physical pages size.
- * @param free_cb
- *   The callback used to free this chunk when destroying the mempool.
- * @param opaque
- *   An opaque argument passed to free_cb.
- * @return
- *   The number of objects added on success.
- *   On error, the chunks are not added in the memory list of the
- *   mempool and a negative errno is returned.
- */
-__rte_deprecated
-int rte_mempool_populate_iova_tab(struct rte_mempool *mp, char *vaddr,
-	const rte_iova_t iova[], uint32_t pg_num, uint32_t pg_shift,
-	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
-
-__rte_deprecated
-int rte_mempool_populate_phys_tab(struct rte_mempool *mp, char *vaddr,
-	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
-	rte_mempool_memchunk_free_cb_t *free_cb, void *opaque);
-
 /**
  * Add virtually contiguous memory for objects in the pool at init
  *
@@ -1746,13 +1636,6 @@ rte_mempool_virt2iova(const void *elt)
 	return hdr->iova;
 }
 
-__rte_deprecated
-static inline phys_addr_t
-rte_mempool_virt2phy(__rte_unused const struct rte_mempool *mp, const void *elt)
-{
-	return rte_mempool_virt2iova(elt);
-}
-
 /**
  * Check the consistency of mempool objects.
  *
@@ -1821,68 +1704,6 @@ struct rte_mempool *rte_mempool_lookup(const char *name);
 uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	struct rte_mempool_objsz *sz);
 
-/**
- * @deprecated
- * Get the size of memory required to store mempool elements.
- *
- * Calculate the maximum amount of memory required to store given number
- * of objects. Assume that the memory buffer will be aligned at page
- * boundary.
- *
- * Note that if object size is bigger than page size, then it assumes
- * that pages are grouped in subsets of physically continuous pages big
- * enough to store at least one object.
- *
- * @param elt_num
- *   Number of elements.
- * @param total_elt_sz
- *   The size of each element, including header and trailer, as returned
- *   by rte_mempool_calc_obj_size().
- * @param pg_shift
- *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
- * @param flags
- *   The mempool flags.
- * @return
- *   Required memory size aligned at page boundary.
- */
-__rte_deprecated
-size_t rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift, unsigned int flags);
-
-/**
- * @deprecated
- * Get the size of memory required to store mempool elements.
- *
- * Calculate how much memory would be actually required with the given
- * memory footprint to store required number of objects.
- *
- * @param vaddr
- *   Virtual address of the externally allocated memory buffer.
- *   Will be used to store mempool objects.
- * @param elt_num
- *   Number of elements.
- * @param total_elt_sz
- *   The size of each element, including header and trailer, as returned
- *   by rte_mempool_calc_obj_size().
- * @param iova
- *   Array of IO addresses of the pages that comprises given memory buffer.
- * @param pg_num
- *   Number of elements in the iova array.
- * @param pg_shift
- *   LOG2 of the physical pages size.
- * @param flags
- *   The mempool flags.
- * @return
- *   On success, the number of bytes needed to store given number of
- *   objects, aligned to the given page size. If the provided memory
- *   buffer is too small, return a negative value whose absolute value
- *   is the actual number of elements that can be stored in that buffer.
- */
-__rte_deprecated
-ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num,
-	size_t total_elt_sz, const rte_iova_t iova[], uint32_t pg_num,
-	uint32_t pg_shift, unsigned int flags);
-
 /**
  * Walk list of all memory pools
  *
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 7091b954b..17cbca460 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -8,9 +8,6 @@ DPDK_2.0 {
 	rte_mempool_list_dump;
 	rte_mempool_lookup;
 	rte_mempool_walk;
-	rte_mempool_xmem_create;
-	rte_mempool_xmem_size;
-	rte_mempool_xmem_usage;
 
 	local: *;
 };
@@ -34,8 +31,6 @@ DPDK_16.07 {
 	rte_mempool_ops_table;
 	rte_mempool_populate_anon;
 	rte_mempool_populate_default;
-	rte_mempool_populate_phys;
-	rte_mempool_populate_phys_tab;
 	rte_mempool_populate_virt;
 	rte_mempool_register_ops;
 	rte_mempool_set_ops_byname;
@@ -46,7 +41,6 @@ DPDK_17.11 {
 	global:
 
 	rte_mempool_populate_iova;
-	rte_mempool_populate_iova_tab;
 
 } DPDK_16.07;

From patchwork Wed Jul 11 10:59:10 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 42827
X-Patchwork-Delegate: thomas@monjalon.net
From: Andrew Rybchenko
CC: Olivier Matz
Date: Wed, 11 Jul 2018 11:59:10 +0100
Message-ID: <1531306750-9844-2-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1531306750-9844-1-git-send-email-arybchenko@solarflare.com>
References: <1531306750-9844-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 2/2] mempool: fold memory size calculation helper
rte_mempool_calc_mem_size_helper() was introduced to avoid code
duplication: it was shared by the deprecated rte_mempool_xmem_size()
and by rte_mempool_op_calc_mem_size_default(). Now that the former has
been removed, fold the helper into the latter to make it more readable.
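The folded arithmetic is easy to check in isolation. The sketch below
(illustrative only, not part of the patch) restates the common case in
which objects are no larger than one page:

    #include <stdint.h>
    #include <stddef.h>

    /*
     * E.g. with 2 MiB pages (pg_shift = 21) and 2176-byte objects:
     * 2097152 / 2176 = 963 objects per page, so 10000 objects need
     * ceil(10000 / 963) = 11 pages, i.e. 11 << 21 = 23068672 bytes.
     */
    static size_t
    mem_size_common_case(uint32_t obj_num, size_t total_elt_sz,
                         uint32_t pg_shift)
    {
            size_t pg_sz = (size_t)1 << pg_shift;
            size_t obj_per_page = pg_sz / total_elt_sz;
            /* round up to whole pages; no object may cross a page */
            size_t pg_num = (obj_num + obj_per_page - 1) / obj_per_page;

            return pg_num << pg_shift;
    }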
Signed-off-by: Andrew Rybchenko
---
 lib/librte_mempool/rte_mempool.c             | 25 --------------------
 lib/librte_mempool/rte_mempool.h             | 22 -----------------
 lib/librte_mempool/rte_mempool_ops_default.c | 25 +++++++++++++++++---
 3 files changed, 22 insertions(+), 50 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d48e53c7e..52bc97a75 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -225,31 +225,6 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	return sz->total_size;
 }
 
-
-/*
- * Internal function to calculate required memory chunk size.
- */
-size_t
-rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
-				 uint32_t pg_shift)
-{
-	size_t obj_per_page, pg_num, pg_sz;
-
-	if (total_elt_sz == 0)
-		return 0;
-
-	if (pg_shift == 0)
-		return total_elt_sz * elt_num;
-
-	pg_sz = (size_t)1 << pg_shift;
-	obj_per_page = pg_sz / total_elt_sz;
-	if (obj_per_page == 0)
-		return RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * elt_num;
-
-	pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
-	return pg_num << pg_shift;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 5d1602555..7c9cd9a2f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -487,28 +487,6 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
 		size_t *min_chunk_size, size_t *align);
 
-/**
- * @internal Helper function to calculate memory size required to store
- * specified number of objects in assumption that the memory buffer will
- * be aligned at page boundary.
- *
- * Note that if object size is bigger than page size, then it assumes
- * that pages are grouped in subsets of physically continuous pages big
- * enough to store at least one object.
- *
- * @param elt_num
- *   Number of elements.
- * @param total_elt_sz
- *   The size of each element, including header and trailer, as returned
- *   by rte_mempool_calc_obj_size().
- * @param pg_shift
- *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
- * @return
- *   Required memory size aligned at page boundary.
- */
-size_t rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
-		uint32_t pg_shift);
-
 /**
  * Function to be called for each populated object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index fd63ca137..4e2bfc82d 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -12,12 +12,31 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     size_t *min_chunk_size, size_t *align)
 {
 	size_t total_elt_sz;
+	size_t obj_per_page, pg_num, pg_sz;
 	size_t mem_size;
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	mem_size = rte_mempool_calc_mem_size_helper(obj_num, total_elt_sz,
-						    pg_shift);
+	if (total_elt_sz == 0) {
+		mem_size = 0;
+	} else if (pg_shift == 0) {
+		mem_size = total_elt_sz * obj_num;
+	} else {
+		pg_sz = (size_t)1 << pg_shift;
+		obj_per_page = pg_sz / total_elt_sz;
+		if (obj_per_page == 0) {
+			/*
+			 * Note that if object size is bigger than page size,
+			 * then it is assumed that pages are grouped in subsets
+			 * of physically continuous pages big enough to store
+			 * at least one object.
+			 */
+			mem_size =
+				RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * obj_num;
+		} else {
+			pg_num = (obj_num + obj_per_page - 1) / obj_per_page;
+			mem_size = pg_num << pg_shift;
+		}
+	}
 
 	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);