[dpdk-dev,v2,2/2] mem: unmap unneeded space

Message ID 73e35ca9ce462ab097c6ca8e7cbfdd5b8bc06c1d.1525086045.git.anatoly.burakov@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon

Checks

Context               Check     Description
ci/checkpatch         success   coding style OK
ci/Intel-compilation  success   Compilation OK

Commit Message

Anatoly Burakov April 30, 2018, 11:21 a.m. UTC
  When we ask to reserve virtual areas, we usually include
alignment in the mapping size, and that memory ends up
being wasted. Wasting a gigabyte of VA space while trying to
reserve one gigabyte is pretty expensive on 32-bit, so after
we're done mapping, unmap unneeded space.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---

Notes:
    v2:
    - Split fix for size_t overflow into separate patch
    - Improve readability of unmapping code
    - Added comment explaining why unmapping is done

 lib/librte_eal/common/eal_common_memory.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)
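
The underlying technique is the usual over-map-then-trim pattern: map more than
is needed so an aligned block is guaranteed to fit, then return the slack before
and after it. The sketch below is a minimal, standalone illustration of that
pattern using plain mmap()/munmap(); it is not the EAL code, reserve_aligned()
is an illustrative name only, and the alignment is assumed to be a power of two.

#define _DEFAULT_SOURCE /* for MAP_ANONYMOUS */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

/* Reserve 'size' bytes of virtual address space aligned to 'align'
 * (power of two), keeping only the aligned region mapped.
 */
static void *
reserve_aligned(size_t size, size_t align)
{
	size_t map_sz = size + align;
	uint8_t *base, *aligned, *map_end, *aligned_end;

	/* over-map by the alignment; PROT_NONE reserves VA without
	 * committing memory
	 */
	base = mmap(NULL, map_sz, PROT_NONE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return NULL;

	/* round the mapped address up to the requested alignment */
	aligned = (uint8_t *)(((uintptr_t)base + align - 1) & ~(align - 1));
	map_end = base + map_sz;
	aligned_end = aligned + size;

	/* return the unused head and tail to the kernel */
	if (aligned > base)
		munmap(base, aligned - base);
	if (map_end > aligned_end)
		munmap(aligned_end, map_end - aligned_end);

	return aligned;
}

int
main(void)
{
	/* the commit message scenario: 1 GB reserved with 1 GB alignment;
	 * after trimming, only ~1 GB of VA space stays mapped
	 */
	void *va = reserve_aligned(1UL << 30, 1UL << 30);

	printf("reserved at %p\n", va);
	return va == NULL;
}

The patch below applies the same trimming in eal_get_virtual_area(), using
RTE_PTR_ADD()/RTE_PTR_DIFF() for the pointer arithmetic.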
  

Comments

Bruce Richardson April 30, 2018, 12:50 p.m. UTC | #1
On Mon, Apr 30, 2018 at 12:21:43PM +0100, Anatoly Burakov wrote:
> When we ask to reserve virtual areas, we usually include
> alignment in the mapping size, and that memory ends up
> being wasted. Wasting a gigabyte of VA space while trying to
> reserve one gigabyte is pretty expensive on 32-bit, so after
> we're done mapping, unmap unneeded space.
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
> 
> Notes:
>     v2:
>     - Split fix for size_t overflow into separate patch
>     - Improve readability of unmapping code
>     - Added comment explaining why unmapping is done
> 
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
  
Thomas Monjalon May 2, 2018, 9 p.m. UTC | #2
30/04/2018 14:50, Bruce Richardson:
> On Mon, Apr 30, 2018 at 12:21:43PM +0100, Anatoly Burakov wrote:
> > When we ask to reserve virtual areas, we usually include
> > alignment in the mapping size, and that memory ends up
> > being wasted. Wasting a gigabyte of VA space while trying to
> > reserve one gigabyte is pretty expensive on 32-bit, so after
> > we're done mapping, unmap unneeded space.
> > 
> > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > ---
> > 
> > Notes:
> >     v2:
> >     - Split fix for size_t overflow into separate patch
> >     - Improve readability of unmapping code
> >     - Added comment explaining why unmapping is done
> > 
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>

I am not confident pushing this change post-rc1.
Please can we have more validation tests with this patch?
  
Thomas Monjalon May 8, 2018, 8:20 p.m. UTC | #3
02/05/2018 23:00, Thomas Monjalon:
> 30/04/2018 14:50, Bruce Richardson:
> > On Mon, Apr 30, 2018 at 12:21:43PM +0100, Anatoly Burakov wrote:
> > > When we ask to reserve virtual areas, we usually include
> > > alignment in the mapping size, and that memory ends up
> > > being wasted. Wasting a gigabyte of VA space while trying to
> > > reserve one gigabyte is pretty expensive on 32-bit, so after
> > > we're done mapping, unmap unneeded space.
> > > 
> > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > ---
> > > 
> > > Notes:
> > >     v2:
> > >     - Split fix for size_t overflow into separate patch
> > >     - Improve readability of unmapping code
> > >     - Added comment explaining why unmapping is done
> > > 
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> 
> I am not confident pushing this change post-rc1.
> Please can we have more validation tests with this patch?

Any news about validation tests?
  
Thomas Monjalon May 13, 2018, 11:33 p.m. UTC | #4
08/05/2018 22:20, Thomas Monjalon:
> 02/05/2018 23:00, Thomas Monjalon:
> > 30/04/2018 14:50, Bruce Richardson:
> > > On Mon, Apr 30, 2018 at 12:21:43PM +0100, Anatoly Burakov wrote:
> > > > When we ask to reserve virtual areas, we usually include
> > > > alignment in the mapping size, and that memory ends up
> > > > being wasted. Wasting a gigabyte of VA space while trying to
> > > > reserve one gigabyte is pretty expensive on 32-bit, so after
> > > > we're done mapping, unmap unneeded space.
> > > > 
> > > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > > ---
> > > > 
> > > > Notes:
> > > >     v2:
> > > >     - Split fix for size_t overflow into separate patch
> > > >     - Improve readability of unmapping code
> > > >     - Added comment explaining why unmapping is done
> > > > 
> > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > 
> > I am not confident pushing this change post-rc1.
> > Please can we have more validation tests with this patch?
> 
> Any news about validation tests?

No news, but I've decided to push it anyway.

Series applied
  

Patch

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 0ac7b33..60aed4a 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -121,8 +121,32 @@  eal_get_virtual_area(void *requested_addr, size_t *size,
 	RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n",
 		aligned_addr, *size);
 
-	if (unmap)
+	if (unmap) {
 		munmap(mapped_addr, map_sz);
+	} else if (!no_align) {
+		void *map_end, *aligned_end;
+		size_t before_len, after_len;
+
+		/* when we reserve space with alignment, we add alignment to
+		 * mapping size. On 32-bit, if 1GB alignment was requested, this
+		 * would waste 1GB of address space, which is a luxury we cannot
+		 * afford. so, if alignment was performed, check if any unneeded
+		 * address space can be unmapped back.
+		 */
+
+		map_end = RTE_PTR_ADD(mapped_addr, (size_t)map_sz);
+		aligned_end = RTE_PTR_ADD(aligned_addr, *size);
+
+		/* unmap space before aligned mmap address */
+		before_len = RTE_PTR_DIFF(aligned_addr, mapped_addr);
+		if (before_len > 0)
+			munmap(mapped_addr, before_len);
+
+		/* unmap space after aligned end mmap address */
+		after_len = RTE_PTR_DIFF(map_end, aligned_end);
+		if (after_len > 0)
+			munmap(aligned_end, after_len);
+	}
 
 	baseaddr_offset += *size;
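
To put numbers on the commit message's 32-bit example: with *size = 1 GB and a
1 GB alignment, map_sz is roughly 2 GB (size plus alignment). aligned_addr lands
somewhere within the first gigabyte of that mapping, so before_len and after_len
always add up to roughly the alignment, i.e. about 1 GB, and the two munmap()
calls return that gigabyte to the process. In a 32-bit address space of 2-4 GB,
that is the difference between the reservation fitting or not.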