From patchwork Wed Jul 11 10:59:10 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 42827
X-Patchwork-Delegate: thomas@monjalon.net
From: Andrew Rybchenko
CC: Olivier Matz
Date: Wed, 11 Jul 2018 11:59:10 +0100
Message-ID: <1531306750-9844-2-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1531306750-9844-1-git-send-email-arybchenko@solarflare.com>
References: <1531306750-9844-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 2/2] mempool: fold memory size calculation helper

rte_mempool_calc_mem_size_helper() was introduced to avoid code
duplication and was used in both the deprecated rte_mempool_mem_size()
and rte_mempool_op_calc_mem_size_default(). Now that the former has been
removed, fold the helper into the latter to make the code more readable.
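For reference, the sizing logic being folded reads naturally as a standalone
function. Below is a minimal self-contained C sketch of that arithmetic; the
calc_mem_size() name, the ALIGN_CEIL macro, and the numbers in main() are
illustrative only and not part of the patch:

#include <stdint.h>
#include <stdio.h>

/* Round v up to the next multiple of the power-of-two a
 * (a stand-in for DPDK's RTE_ALIGN_CEIL). */
#define ALIGN_CEIL(v, a) (((v) + (a) - 1) & ~((a) - 1))

/* Memory needed for elt_num objects of total_elt_sz bytes each,
 * packed into pages of 2^pg_shift bytes (pg_shift == 0 means
 * ignore page boundaries). */
static size_t
calc_mem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
{
	size_t obj_per_page, pg_num, pg_sz;

	if (total_elt_sz == 0)
		return 0;
	if (pg_shift == 0)
		return total_elt_sz * elt_num;

	pg_sz = (size_t)1 << pg_shift;
	obj_per_page = pg_sz / total_elt_sz;
	if (obj_per_page == 0)
		/* Object bigger than a page: assume runs of contiguous
		 * pages, one page-aligned run per object. */
		return ALIGN_CEIL(total_elt_sz, pg_sz) * elt_num;

	/* Objects never cross a page boundary: count whole pages. */
	pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
	return pg_num << pg_shift;
}

int main(void)
{
	/* 1000 objects of 2176 B on 4 KiB pages: one object fits per
	 * page, so 1000 pages = 4096000 B. */
	printf("%zu\n", calc_mem_size(1000, 2176, 12));
	return 0;
}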
Signed-off-by: Andrew Rybchenko
---
 lib/librte_mempool/rte_mempool.c             | 25 --------------------
 lib/librte_mempool/rte_mempool.h             | 22 ------------------
 lib/librte_mempool/rte_mempool_ops_default.c | 25 +++++++++++++++++---
 3 files changed, 22 insertions(+), 50 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d48e53c7e..52bc97a75 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -225,31 +225,6 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	return sz->total_size;
 }
 
-
-/*
- * Internal function to calculate required memory chunk size.
- */
-size_t
-rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
-				 uint32_t pg_shift)
-{
-	size_t obj_per_page, pg_num, pg_sz;
-
-	if (total_elt_sz == 0)
-		return 0;
-
-	if (pg_shift == 0)
-		return total_elt_sz * elt_num;
-
-	pg_sz = (size_t)1 << pg_shift;
-	obj_per_page = pg_sz / total_elt_sz;
-	if (obj_per_page == 0)
-		return RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * elt_num;
-
-	pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
-	return pg_num << pg_shift;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 5d1602555..7c9cd9a2f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -487,28 +487,6 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
 		size_t *min_chunk_size, size_t *align);
 
-/**
- * @internal Helper function to calculate memory size required to store
- * specified number of objects in assumption that the memory buffer will
- * be aligned at page boundary.
- *
- * Note that if object size is bigger than page size, then it assumes
- * that pages are grouped in subsets of physically continuous pages big
- * enough to store at least one object.
- *
- * @param elt_num
- *   Number of elements.
- * @param total_elt_sz
- *   The size of each element, including header and trailer, as returned
- *   by rte_mempool_calc_obj_size().
- * @param pg_shift
- *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
- * @return
- *   Required memory size aligned at page boundary.
- */
-size_t rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
-		uint32_t pg_shift);
-
 /**
  * Function to be called for each populated object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index fd63ca137..4e2bfc82d 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -12,12 +12,31 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     size_t *min_chunk_size, size_t *align)
 {
 	size_t total_elt_sz;
+	size_t obj_per_page, pg_num, pg_sz;
 	size_t mem_size;
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	mem_size = rte_mempool_calc_mem_size_helper(obj_num, total_elt_sz,
-						    pg_shift);
+	if (total_elt_sz == 0) {
+		mem_size = 0;
+	} else if (pg_shift == 0) {
+		mem_size = total_elt_sz * obj_num;
+	} else {
+		pg_sz = (size_t)1 << pg_shift;
+		obj_per_page = pg_sz / total_elt_sz;
+		if (obj_per_page == 0) {
+			/*
+			 * Note that if object size is bigger than page size,
+			 * then it is assumed that pages are grouped in subsets
+			 * of physically continuous pages big enough to store
+			 * at least one object.
+			 */
+			mem_size =
+				RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * obj_num;
+		} else {
+			pg_num = (obj_num + obj_per_page - 1) / obj_per_page;
+			mem_size = pg_num << pg_shift;
+		}
+	}
 
 	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
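As a usage note, a driver that needs no special sizing can simply point its
mempool ops at this default callback. A hedged fragment follows; the names
hypothetical_ops and "hypothetical_driver" are made up, and the other
mandatory callbacks (alloc/free/enqueue/dequeue/get_count) are omitted for
brevity:

#include <rte_mempool.h>

/* Hypothetical driver ops: only the sizing callback is shown; a real
 * driver must also fill in alloc/free/enqueue/dequeue/get_count and
 * register the ops with MEMPOOL_REGISTER_OPS(). */
static const struct rte_mempool_ops hypothetical_ops = {
	.name = "hypothetical_driver",
	/* Reuse the default page-aware size calculation. */
	.calc_mem_size = rte_mempool_op_calc_mem_size_default,
};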