[dpdk-dev,3/3] mem: improve autodetection of hugepage counts on 32-bit

Message ID 231a97e7fa536d4f3c6bbe2a645a71ed64b5b9cd.1524235066.git.anatoly.burakov@intel.com (mailing list archive)
State Superseded, archived
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Anatoly Burakov April 20, 2018, 2:41 p.m. UTC
For non-legacy mode, we are preallocating space for hugepages, so
we know in advance which pages we will be able to allocate and
which we won't. However, the init procedure used hugepage counts
gathered from sysfs and paid no attention to how many pages were
actually available for reservation, so it failed on attempts to
reserve unavailable pages.

Fix this by limiting total page counts to the number of pages
actually preallocated.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
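
As a hypothetical illustration of the failure mode (the numbers are not
from the patch): on a 32-bit build, sysfs may report 512 free 2 MB
hugepages, but the address-space-limited preallocation may have created
memseg list slots for only 128 of them. Init would then attempt to
reserve all 512 pages and fail; with the counts clamped to the
preallocated slots, only the 128 usable pages are requested.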
  

Comments

Anatoly Burakov April 20, 2018, 3:12 p.m. UTC | #1
On 20-Apr-18 3:41 PM, Anatoly Burakov wrote:
> For non-legacy mode, we are preallocating space for hugepages, so
> we know in advance which pages we will be able to allocate and
> which we won't. However, the init procedure used hugepage counts
> gathered from sysfs and paid no attention to how many pages were
> actually available for reservation, so it failed on attempts to
> reserve unavailable pages.
> 
> Fix this by limiting total page counts to the number of pages
> actually preallocated.
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---

Oops, didn't update the patch after fixing the build.
  

Patch

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index fadc1de..8eb60cb 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1603,6 +1603,18 @@ eal_legacy_hugepage_init(void)
 	return -1;
 }
 
+static int __rte_unused
+hugepage_count_walk(const struct rte_memseg_list *msl, void *arg)
+{
+	struct hugepage_info *hpi = arg;
+
+	if (msl->page_sz != hpi->hugepage_sz)
+		return 0;
+
+	hpi->num_pages[msl->socket_id] += msl->memseg_arr.len;
+	return 0;
+}
+
 static int
 eal_hugepage_init(void)
 {
@@ -1617,10 +1629,29 @@ eal_hugepage_init(void)
 	for (hp_sz_idx = 0;
 			hp_sz_idx < (int) internal_config.num_hugepage_sizes;
 			hp_sz_idx++) {
+#ifndef RTE_ARCH_64
+		struct hugepage_info dummy;
+		unsigned int i;
+#endif
 		/* also initialize used_hp hugepage sizes in used_hp */
 		struct hugepage_info *hpi;
 		hpi = &internal_config.hugepage_info[hp_sz_idx];
 		used_hp[hp_sz_idx].hugepage_sz = hpi->hugepage_sz;
+
+#ifndef RTE_ARCH_64
+		/* for 32-bit, limit number of pages on socket to whatever we've
+		 * preallocated, as we cannot allocate more.
+		 */
+		memset(&dummy, 0, sizeof(dummy));
+		dummy.hugepage_sz = hpi->hugepage_sz;
+		if (rte_memseg_list_walk(hugepage_count_walk, &dummy) < 0)
+			return -1;
+
+		for (i = 0; i < RTE_DIM(dummy.num_pages); i++) {
+			hpi->num_pages[i] = RTE_MIN(hpi->num_pages[i],
+					dummy.num_pages[i]);
+		}
+#endif
 	}
 
 	/* make a copy of socket_mem, needed for balanced allocation. */
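
For readers who want to see the clamping logic in isolation, below is a
minimal standalone C sketch (not DPDK code): the structs are simplified
stand-ins for the rte_memseg_list and hugepage_info internals, and the
page counts are hypothetical.

/* Standalone sketch (not DPDK code): models the 32-bit page count clamp.
 * The structs are simplified stand-ins for the EAL internals and the
 * numbers are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_SOCKETS 4
#define DIM(a) (sizeof(a) / sizeof((a)[0]))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

struct memseg_list {		/* stand-in for struct rte_memseg_list */
	uint64_t page_sz;
	int socket_id;
	unsigned int len;	/* number of preallocated segment slots */
};

struct hugepage_info {		/* stand-in for struct hugepage_info */
	uint64_t hugepage_sz;
	unsigned int num_pages[MAX_SOCKETS];
};

/* analogue of hugepage_count_walk(): count preallocated pages of a
 * given size, per socket
 */
static void
count_preallocated(const struct memseg_list *msls, size_t n,
		struct hugepage_info *out)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (msls[i].page_sz == out->hugepage_sz)
			out->num_pages[msls[i].socket_id] += msls[i].len;
}

int
main(void)
{
	/* pretend sysfs reported 512 free 2 MB pages on socket 0 */
	struct hugepage_info hpi = {
		.hugepage_sz = 2 << 20,
		.num_pages = { 512, 0, 0, 0 },
	};
	/* but only 128 slots of that size were preallocated there */
	struct memseg_list msls[] = {
		{ .page_sz = 2 << 20, .socket_id = 0, .len = 128 },
	};
	struct hugepage_info dummy;
	unsigned int i;

	memset(&dummy, 0, sizeof(dummy));
	dummy.hugepage_sz = hpi.hugepage_sz;
	count_preallocated(msls, DIM(msls), &dummy);

	/* clamp the sysfs counts to what was actually preallocated */
	for (i = 0; i < DIM(dummy.num_pages); i++)
		hpi.num_pages[i] = MIN(hpi.num_pages[i], dummy.num_pages[i]);

	printf("socket 0: %u pages usable\n", hpi.num_pages[0]);	/* 128 */
	return 0;
}

Compiled and run, this prints "socket 0: 128 pages usable", mirroring
what the patch does per hugepage size inside eal_hugepage_init().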