From patchwork Fri Feb 22 16:14:02 2019
X-Patchwork-Submitter: Anatoly Burakov
X-Patchwork-Id: 50467
X-Patchwork-Delegate: thomas@monjalon.net
From: Anatoly Burakov
To: dev@dpdk.org
Cc: maxime.leroy@6wind.com
Date: Fri, 22 Feb 2019 16:14:02 +0000
Message-Id: <64201abe22bfa2726cd312b774861b9aba8625c6.1550851998.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH 2/3] memalloc: improve best-effort allocation

Previously, when using non-exact allocation, we requested N pages to be
allocated, but allowed the memory subsystem to allocate fewer than
requested. However, we still expected to find N contiguous free entries
in the memseg list, so whenever the list held fewer than N contiguous
free entries, there was no way to allocate as many pages as the list
could actually accommodate.

To address this, use the new "find biggest" fbarray APIs when
allocating a non-exact number of pages. This way, we first check how
many contiguous entries are actually available in the list, and then
try to allocate up to that number.
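For illustration, the core of the best-effort lookup can be written as a
standalone helper. This is a minimal sketch, not code from the patch: the
wrapper name find_best_effort_run() is hypothetical, and it assumes the
"find biggest" fbarray API introduced in patch 1/3 of this series.

#include <rte_common.h>		/* RTE_MIN */
#include <rte_fbarray.h>

/* Hypothetical helper: locate the biggest contiguous free run in an
 * fbarray and clamp the requested entry count to its length.
 * Returns the start index of the run, or -1 if nothing is free.
 */
static int
find_best_effort_run(struct rte_fbarray *arr, unsigned int *need)
{
	int cur_idx, cur_len;

	/* start index of the biggest contiguous run of free entries */
	cur_idx = rte_fbarray_find_biggest_free(arr, 0);
	if (cur_idx < 0)
		return -1;

	/* length of that run */
	cur_len = rte_fbarray_find_contig_free(arr, cur_idx);

	/* allow the request to shrink, but never to grow */
	*need = RTE_MIN(*need, (unsigned int)cur_len);
	return cur_idx;
}

The caller can then attempt to allocate *need pages starting at the
returned index, which is exactly how the first hunk below combines the
two lookups.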
Signed-off-by: Anatoly Burakov
---
 lib/librte_eal/linuxapp/eal/eal_memalloc.c | 33 ++++++++++++++++++----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index b6fb183db..14c3ea838 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -874,10 +874,32 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
 	need = wa->n_segs;
 
 	/* try finding space in memseg list */
-	cur_idx = rte_fbarray_find_next_n_free(&cur_msl->memseg_arr, 0, need);
-	if (cur_idx < 0)
-		return 0;
-	start_idx = cur_idx;
+	if (wa->exact) {
+		/* if we require exact number of pages in a list, find them */
+		cur_idx = rte_fbarray_find_next_n_free(&cur_msl->memseg_arr, 0,
+				need);
+		if (cur_idx < 0)
+			return 0;
+		start_idx = cur_idx;
+	} else {
+		int cur_len;
+
+		/* we don't require exact number of pages, so we're going to go
+		 * for best-effort allocation. that means finding the biggest
+		 * unused block, and going with that.
+		 */
+		cur_idx = rte_fbarray_find_biggest_free(&cur_msl->memseg_arr,
+				0);
+		if (cur_idx < 0)
+			return 0;
+		start_idx = cur_idx;
+		/* adjust the size to possibly be smaller than original
+		 * request, but do not allow it to be bigger.
+		 */
+		cur_len = rte_fbarray_find_contig_free(&cur_msl->memseg_arr,
+				cur_idx);
+		need = RTE_MIN(need, (unsigned int)cur_len);
+	}
 
 	/* do not allow any page allocations during the time we're allocating,
 	 * because file creation and locking operations are not atomic,
@@ -954,7 +976,8 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
 	cur_msl->version++;
 	if (dir_fd >= 0)
 		close(dir_fd);
-	return 1;
+	/* if we didn't allocate any segments, move on to the next list */
+	return i > 0;
 }
 
 struct free_walk_param {
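For context on the second hunk: a walk callback's return value drives the
memseg list iteration, where a negative value aborts the walk, zero moves
on to the next list, and a positive value stops the walk as "done". The
loop below is an illustrative sketch of that contract, not DPDK's actual
iterator; it shows why returning "i > 0" matters, since a list in which
zero segments were reserved no longer ends the walk as an empty success.

#include <rte_eal_memconfig.h>	/* struct rte_memseg_list */

/* Illustrative walk loop, not DPDK code. With this patch,
 * alloc_seg_walk() returns 0 when it could not reserve any segment in
 * a list, so a loop like this falls through to the next memseg list
 * instead of stopping with nothing allocated.
 */
static int
walk_memseg_lists(struct rte_memseg_list *lists, int n_lists,
		int (*cb)(const struct rte_memseg_list *, void *), void *arg)
{
	int i, ret;

	for (i = 0; i < n_lists; i++) {
		ret = cb(&lists[i], arg);
		if (ret < 0)
			return -1;	/* hard failure, abort */
		if (ret > 0)
			return 1;	/* callback made progress, stop */
		/* ret == 0: nothing usable in this list, keep walking */
	}
	return 0;	/* no list could satisfy the request */
}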