eal: fix bug where rte_malloc() fails to allocate memory

Message ID 20220525051837.247255-1-fidaullah.noonari@emumba.com (mailing list archive)
State Accepted, archived
Delegated to: David Marchand
Series eal: fix bug where rte_malloc() fails to allocate memory

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/github-robot: build success github build: passed
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-abi-testing success Testing PASS

Commit Message

Fidaullah Noonari May 25, 2022, 5:18 a.m. UTC
If rte_malloc() is called to allocate memory whose size lies between
a multiple of the hugepage size minus MALLOC_ELEM_HEADER_LEN and that
multiple of the hugepage size, it fails to allocate the memory.
This fix replaces MALLOC_ELEM_TRAILER_LEN with MALLOC_ELEM_OVERHEAD
in try_expand_heap() so that MALLOC_ELEM_HEADER_LEN is included when
calculating n_segs.

Bugzilla ID: 800

Signed-off-by: Fidaullah Noonari <fidaullah.noonari@emumba.com>
---
 lib/eal/common/malloc_heap.c | 2 +-
 lib/eal/common/malloc_mp.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
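
For illustration, a minimal standalone sketch of the arithmetic behind
the failure and the fix. The 2 MiB page size, 128-byte header and
zero-length trailer are placeholder values (the real constants come from
malloc_elem.h and depend on the build), and align_ceil() stands in for
RTE_ALIGN_CEIL():

#include <stdio.h>
#include <stddef.h>

#define PG_SZ            (2u * 1024 * 1024)  /* assumed 2 MiB hugepage */
#define ELEM_HEADER_LEN  128u  /* stand-in for MALLOC_ELEM_HEADER_LEN */
#define ELEM_TRAILER_LEN 0u    /* stand-in for MALLOC_ELEM_TRAILER_LEN */
#define ELEM_OVERHEAD    (ELEM_HEADER_LEN + ELEM_TRAILER_LEN)

/* Same rounding as RTE_ALIGN_CEIL for power-of-two alignments. */
static size_t align_ceil(size_t v, size_t a) { return (v + a - 1) & ~(a - 1); }

int main(void)
{
	size_t align = 1;               /* rte_malloc(align = 0) becomes 1 */
	size_t elt_size = PG_SZ - 32;   /* size in the failing range */

	/* Old formula: the header is not counted, so one page is grown... */
	size_t old_sz = align_ceil(align + elt_size + ELEM_TRAILER_LEN, PG_SZ);
	/* ...but placing the element needs header + data + trailer. */
	size_t needed = ELEM_HEADER_LEN + elt_size + ELEM_TRAILER_LEN;
	/* Fixed formula: the header is counted, so two pages are grown. */
	size_t new_sz = align_ceil(align + elt_size + ELEM_OVERHEAD, PG_SZ);

	printf("old alloc_sz = %zu, element needs %zu -> allocation fails\n",
			old_sz, needed);
	printf("new alloc_sz = %zu -> element fits\n", new_sz);
	return 0;
}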
  

Comments

David Marchand June 16, 2022, 7:53 a.m. UTC | #1
On Wed, May 25, 2022 at 7:18 AM Fidaullah Noonari
<fidaullah.noonari@emumba.com> wrote:
>
> If rte_malloc() is called to allocate memory whose size lies between
> a multiple of the hugepage size minus MALLOC_ELEM_HEADER_LEN and that
> multiple of the hugepage size, it fails to allocate the memory.
> This fix replaces MALLOC_ELEM_TRAILER_LEN with MALLOC_ELEM_OVERHEAD
> in try_expand_heap() so that MALLOC_ELEM_HEADER_LEN is included when
> calculating n_segs.
>
> Bugzilla ID: 800
>
> Signed-off-by: Fidaullah Noonari <fidaullah.noonari@emumba.com>

Anatoly, Dmitry, can you have a look?
Thanks.
  
Dmitry Kozlyuk June 18, 2022, 11:29 a.m. UTC | #2
Hi Fidaullah,

Thanks for the fix,
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>


Anatoly, I noticed a couple of other things while testing this.

1. Consider:

elt_size = pg_sz - MALLOC_ELEM_OVERHEAD
rte_malloc(align=0) which is converted to align = 1.

Obviously, such an element fits into one page, however:

alloc_sz = RTE_ALIGN_CEIL(1 + pg_sz +
                (MALLOC_ELEM_OVERHEAD - MALLOC_ELEM_OVERHEAD),
                pg_sz) == 2 * pg_sz.

This can unnecessarily hit an allocation limit from the system or EAL.
I suggest, in both places:

alloc_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(elt_size, align) +
                MALLOC_ELEM_OVERHEAD, pg_sz);

This would be symmetric with malloc_elem_can_hold().
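
For concreteness, a small standalone sketch of the difference, with
placeholder constants (2 MiB page, 128-byte MALLOC_ELEM_OVERHEAD; the
real values depend on the build) and align_ceil() standing in for
RTE_ALIGN_CEIL():

#include <stdio.h>
#include <stddef.h>

#define PG_SZ     (2u * 1024 * 1024)   /* assumed 2 MiB hugepage */
#define OVERHEAD  128u                 /* stand-in for MALLOC_ELEM_OVERHEAD */

/* Same rounding as RTE_ALIGN_CEIL for power-of-two alignments. */
static size_t align_ceil(size_t v, size_t a) { return (v + a - 1) & ~(a - 1); }

int main(void)
{
	size_t align = 1;                    /* rte_malloc(align = 0) becomes 1 */
	size_t elt_size = PG_SZ - OVERHEAD;  /* header + data + trailer == one page */

	/* Current formula: 1 + pg_sz rounds up to two pages. */
	size_t current = align_ceil(align + elt_size + OVERHEAD, PG_SZ);
	/* Suggested formula: aligning elt_size first keeps it at one page. */
	size_t suggested = align_ceil(align_ceil(elt_size, align) + OVERHEAD, PG_SZ);

	printf("current alloc_sz   = %zu (two pages)\n", current);
	printf("suggested alloc_sz = %zu (one page)\n", suggested);
	return 0;
}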

2. Alignment calculation depends on whether we allocated new pages or not:

malloc_heap_alloc_on_heap_id(align = 0) ->
heap_alloc(align = 1) ->
find_suitable_element(align = RTE_CACHE_LINE_ROUNDUP(align))

malloc_heap_alloc_on_heap_id(align = 0) ->
alloc_more_mem_on_socket(align = 1) ->
try_expand_heap() -> ... ->
alloc_pages_on_heap(align = 1) ->
find_suitable_element(align = 1)

Why do we call find_suitable_element() directly and not just return
and repeat the heap_alloc() attempt?
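
As a tiny illustration of the divergence above, assuming a 64-byte cache
line (hypothetical; RTE_CACHE_LINE_SIZE is per-target), the align value
that reaches find_suitable_element() differs between the two paths:

#include <stdio.h>
#include <stddef.h>

#define CACHE_LINE 64u   /* assumed cache line size */

/* Same rounding as RTE_CACHE_LINE_ROUNDUP for the assumed line size. */
static size_t line_roundup(size_t v)
{
	return (v + CACHE_LINE - 1) / CACHE_LINE * CACHE_LINE;
}

int main(void)
{
	size_t align = 1;   /* rte_malloc(align = 0) becomes 1 */

	/* heap_alloc() path: align is rounded up to a cache line... */
	printf("via heap_alloc():          align = %zu\n", line_roundup(align));
	/* ...while the heap-expansion path passes align = 1 through unchanged. */
	printf("via alloc_pages_on_heap(): align = %zu\n", align);
	return 0;
}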
  
David Marchand June 22, 2022, 5:03 p.m. UTC | #3
On Sat, Jun 18, 2022 at 1:29 PM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
>
> Hi Fidaullah,
>
> Thanks for the fix,
> Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>

This seems backport material.
Dmitry, Anatoly, do you agree? If so I'll mark it when applying.

As for a Fixes: line, the closest commit touching this part is
07dcbfe0101f ("malloc: support multiprocess memory hotplug") but I
wonder if this bug predates this commit.
  
Dmitry Kozlyuk June 22, 2022, 9:18 p.m. UTC | #4
2022-06-22 19:03 (UTC+0200), David Marchand:
> On Sat, Jun 18, 2022 at 1:29 PM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
> >
> > Hi Fidaullah,
> >
> > Thanks for the fix,
> > Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>  
> 
> This seems backport material.
> Dmitry, Anatoly, do you agree? If so I'll mark it when applying.
> 
> As for a Fixes: line, the closest commit touching this part is
> 07dcbfe0101f ("malloc: support multiprocess memory hotplug") but I
> wonder if this bug predates this commit.

I agree, and the offending commit seems to be the right one.
Before that commit the calculation accounted for the header;
with 07dcbfe0101f it was lost:

	align = RTE_MAX(align, MALLOC_ELEM_HEADER_LEN);
	map_len = RTE_ALIGN_CEIL(align + elt_size + MALLOC_ELEM_TRAILER_LEN,
			pg_sz);
  
David Marchand June 23, 2022, 7:50 a.m. UTC | #5
On Wed, May 25, 2022 at 7:18 AM Fidaullah Noonari
<fidaullah.noonari@emumba.com> wrote:
>
> If rte_malloc() is called to allocate memory whose size lies between
> a multiple of the hugepage size minus MALLOC_ELEM_HEADER_LEN and that
> multiple of the hugepage size, it fails to allocate the memory.
> This fix replaces MALLOC_ELEM_TRAILER_LEN with MALLOC_ELEM_OVERHEAD
> in try_expand_heap() so that MALLOC_ELEM_HEADER_LEN is included when
> calculating n_segs.
>
> Bugzilla ID: 800
Fixes: 07dcbfe0101f ("malloc: support multiprocess memory hotplug")
Cc: stable@dpdk.org

>
> Signed-off-by: Fidaullah Noonari <fidaullah.noonari@emumba.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>

Applied, thanks.
  
Fidaullah Noonari July 5, 2022, 5:04 a.m. UTC | #6
Hi Dmitry,

> alloc_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(elt_size, align) +
>                 MALLOC_ELEM_OVERHEAD, pg_sz);
>
I am submitting a patch regarding this.

> 2. Alignment calculation depends on whether we allocated new pages or not:
>
> malloc_heap_alloc_on_heap_id(align = 0) ->
> heap_alloc(align = 1) ->
> find_suitable_element(align = RTE_CACHE_LINE_ROUNDUP(align))
>
> malloc_heap_alloc_on_heap_id(align = 0) ->
> alloc_more_mem_on_socket(align = 1) ->
> try_expand_heap() -> ... ->
> alloc_pages_on_heap(align = 1) ->
> find_suitable_element(align = 1)

I saw that alloc_pages_on_heap() has a rollback if find_suitable_element()
fails to find an element. If we remove that find_suitable_element() call,
don't we still need the rollback? Otherwise, wouldn't it mean an
unnecessary allocation, or is this handled somewhere that I didn't
understand?

On Sat, Jun 18, 2022 at 4:29 PM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
>
> Hi Fidaullah,
>
> Thanks for the fix,
> Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
>
>
> Anatoly, I noticed a couple of other things while testing this.
>
> 1. Consider:
>
> elt_size = pg_sz - MALLOC_ELEM_OVERHEAD
> rte_malloc(align=0) which is converted to align = 1.
>
> Obviously, such an element fits into one page, however:
>
> alloc_sz = RTE_ALIGN_CEIL(1 + pg_sz +
>                 (MALLOC_ELEM_OVERHEAD - MALLOC_ELEM_OVERHEAD),
>                 pg_sz) == 2 * pg_sz.
>
> This can unnecessarily hit an allocation limit from the system or EAL.
> I suggest, in both places:
>
> alloc_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(elt_size, align) +
>                 MALLOC_ELEM_OVERHEAD, pg_sz);
>
> This would be symmetric with malloc_elem_can_hold().
>
> 2. Alignment calculation depends on whether we allocated new pages or not:
>
> malloc_heap_alloc_on_heap_id(align = 0) ->
> heap_alloc(align = 1) ->
> find_suitable_element(align = RTE_CACHE_LINE_ROUNDUP(align))
>
> malloc_heap_alloc_on_heap_id(align = 0) ->
> alloc_more_mem_on_socket(align = 1) ->
> try_expand_heap() -> ... ->
> alloc_pages_on_heap(align = 1) ->
> find_suitable_element(align = 1)
>
> Why do we call find_suitable_element() directly and not just return
> and repeat the heap_alloc() attempt?
  

Patch

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index a3d26fcbea..27a52266ad 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -403,7 +403,7 @@  try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz,
 	bool callback_triggered = false;
 
 	alloc_sz = RTE_ALIGN_CEIL(align + elt_size +
-			MALLOC_ELEM_TRAILER_LEN, pg_sz);
+			MALLOC_ELEM_OVERHEAD, pg_sz);
 	n_segs = alloc_sz / pg_sz;
 
 	/* we can't know in advance how many pages we'll need, so we malloc */
diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c
index 207b90847e..2b8eb51067 100644
--- a/lib/eal/common/malloc_mp.c
+++ b/lib/eal/common/malloc_mp.c
@@ -250,7 +250,7 @@  handle_alloc_request(const struct malloc_mp_req *m,
 	}
 
 	alloc_sz = RTE_ALIGN_CEIL(ar->align + ar->elt_size +
-			MALLOC_ELEM_TRAILER_LEN, ar->page_sz);
+			MALLOC_ELEM_OVERHEAD, ar->page_sz);
 	n_segs = alloc_sz / ar->page_sz;
 
 	/* we can't know in advance how many pages we'll need, so we malloc */