From patchwork Mon Jun 22 18:58:39 2015
From: Cyril Chemparathy <cchemparathy@ezchip.com>
To: dev@dpdk.org
Date: Mon, 22 Jun 2015 11:58:39 -0700
Message-Id: <1434999524-26528-8-git-send-email-cchemparathy@ezchip.com>
In-Reply-To: <1434999524-26528-1-git-send-email-cchemparathy@ezchip.com>
References: <1434999524-26528-1-git-send-email-cchemparathy@ezchip.com>
Subject: [dpdk-dev] [PATCH v2 07/12] memzone: allow multiple pagesizes to be requested

This patch extends the memzone allocator to remove the restriction that
prevented callers from specifying multiple page sizes in the flags
argument.

In doing so, we also sanitize the free segment matching logic to get rid
of architecture-specific disjunctions (2MB vs 1GB on x86, and 16MB vs
16GB on PPC), thereby allowing for a broader range of hugepages on
architectures that support it.

Change-Id: Ic3713f61da49629a570fe4de34a8aaf5e2e0a19b
Signed-off-by: Cyril Chemparathy <cchemparathy@ezchip.com>
---
 lib/librte_eal/common/eal_common_memzone.c | 58 ++++++++++++++----------------
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index 87057d4..84b773c 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -113,7 +113,8 @@ align_phys_boundary(const struct rte_memseg *ms, size_t len, size_t align,
 
 static const struct rte_memzone *
 memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
-		int socket_id, unsigned flags, unsigned align, unsigned bound)
+		int socket_id, uint64_t size_mask, unsigned align,
+		unsigned bound)
 {
 	struct rte_mem_config *mcfg;
 	unsigned i = 0;
@@ -201,18 +202,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 		if ((requested_len + addr_offset) > free_memseg[i].len)
 			continue;
 
-		/* check flags for hugepage sizes */
-		if ((flags & RTE_MEMZONE_2MB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_1G)
-			continue;
-		if ((flags & RTE_MEMZONE_1GB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_2M)
-			continue;
-		if ((flags & RTE_MEMZONE_16MB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_16G)
-			continue;
-		if ((flags & RTE_MEMZONE_16GB) &&
-				free_memseg[i].hugepage_sz == RTE_PGSIZE_16M)
+		if ((size_mask & free_memseg[i].hugepage_sz) == 0)
 			continue;
 
 		/* this segment is the best until now */
@@ -244,16 +234,6 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 
 	/* no segment found */
 	if (memseg_idx == -1) {
-		/*
-		 * If RTE_MEMZONE_SIZE_HINT_ONLY flag is specified,
-		 * try allocating again without the size parameter otherwise -fail.
-		 */
-		if ((flags & RTE_MEMZONE_SIZE_HINT_ONLY) &&
-		    ((flags & RTE_MEMZONE_1GB) || (flags & RTE_MEMZONE_2MB)
-		    || (flags & RTE_MEMZONE_16MB) || (flags & RTE_MEMZONE_16GB)))
-			return memzone_reserve_aligned_thread_unsafe(name,
-				len, socket_id, 0, align, bound);
-
 		rte_errno = ENOMEM;
 		return NULL;
 	}
@@ -302,13 +282,18 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len,
 {
 	struct rte_mem_config *mcfg;
 	const struct rte_memzone *mz = NULL;
-
-	/* both sizes cannot be explicitly called for */
-	if (((flags & RTE_MEMZONE_1GB) && (flags & RTE_MEMZONE_2MB))
-		|| ((flags & RTE_MEMZONE_16MB) && (flags & RTE_MEMZONE_16GB))) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
+	uint64_t size_mask = 0;
+
+	if (flags & RTE_MEMZONE_2MB)
+		size_mask |= RTE_PGSIZE_2M;
+	if (flags & RTE_MEMZONE_16MB)
+		size_mask |= RTE_PGSIZE_16M;
+	if (flags & RTE_MEMZONE_1GB)
+		size_mask |= RTE_PGSIZE_1G;
+	if (flags & RTE_MEMZONE_16GB)
+		size_mask |= RTE_PGSIZE_16G;
+	if (!size_mask)
+		size_mask = UINT64_MAX;
 
 	/* get pointer to global configuration */
 	mcfg = rte_eal_get_configuration()->mem_config;
@@ -316,7 +301,18 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len,
 
 	rte_rwlock_write_lock(&mcfg->mlock);
 
 	mz = memzone_reserve_aligned_thread_unsafe(
-			name, len, socket_id, flags, align, bound);
+			name, len, socket_id, size_mask, align, bound);
+
+	/*
+	 * If we failed to allocate the requested page size, and the
+	 * RTE_MEMZONE_SIZE_HINT_ONLY flag is specified, try allocating
+	 * again.
+	 */
+	if (!mz && rte_errno == ENOMEM && size_mask != UINT64_MAX &&
+	    flags & RTE_MEMZONE_SIZE_HINT_ONLY) {
+		mz = memzone_reserve_aligned_thread_unsafe(
+				name, len, socket_id, UINT64_MAX, align, bound);
+	}
 
 	rte_rwlock_write_unlock(&mcfg->mlock);
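
For context, a minimal caller-side sketch of what the relaxed flag handling
permits. This is not part of the patch: the helper reserve_any_hugepage() is
hypothetical, while rte_memzone_reserve(), SOCKET_ID_ANY, rte_strerror() and
the RTE_MEMZONE_* flags are the existing public API. Before this change,
combining RTE_MEMZONE_2MB with RTE_MEMZONE_1GB failed with rte_errno set to
EINVAL; with it, free segments are matched against the union of the
requested page sizes.

    #include <stdio.h>

    #include <rte_errno.h>
    #include <rte_memzone.h>

    static const struct rte_memzone *
    reserve_any_hugepage(const char *name, size_t len)
    {
    	const struct rte_memzone *mz;

    	/* Accept memory backed by either 2MB or 1GB pages; the
    	 * SIZE_HINT_ONLY flag additionally allows a fallback to any
    	 * other available page size if neither is free.
    	 */
    	mz = rte_memzone_reserve(name, len, SOCKET_ID_ANY,
    			RTE_MEMZONE_2MB | RTE_MEMZONE_1GB |
    			RTE_MEMZONE_SIZE_HINT_ONLY);
    	if (mz == NULL)
    		printf("memzone reserve failed: %s\n",
    				rte_strerror(rte_errno));
    	return mz;
    }

Using the page sizes themselves as mask bits works because every supported
hugepage size is a distinct power of two, so ORing RTE_PGSIZE_* values
yields a collision-free bitmask.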