malloc: respect SIZE_HINT_ONLY when looking for the biggest free elem

Message ID 20181007193147.123868-1-dariusz.stojaczyk@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Series: malloc: respect SIZE_HINT_ONLY when looking for the biggest free elem

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Stojaczyk, Dariusz Oct. 7, 2018, 7:31 p.m. UTC
  RTE_MEMZONE_SIZE_HINT_ONLY wasn't checked in any way,
causing size hints to be treated as hard requirements.
This resulted in some allocations failing prematurely.

Fixes: 68b6092bd3c7 ("malloc: allow reserving biggest element")
Cc: anatoly.burakov@intel.com
Cc: stable@dpdk.org

Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
---
 lib/librte_eal/common/malloc_heap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
  

Comments

Anatoly Burakov Oct. 8, 2018, 9:02 a.m. UTC | #1
On 07-Oct-18 8:31 PM, Darek Stojaczyk wrote:
> RTE_MEMZONE_SIZE_HINT_ONLY wasn't checked in any way,
> causing size hints to be treated as hard requirements.
> This resulted in some allocations failing prematurely.
> 
> Fixes: 68b6092bd3c7 ("malloc: allow reserving biggest element")
> Cc: anatoly.burakov@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
> ---
>   lib/librte_eal/common/malloc_heap.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
> index ac7bbb3ba..d2a8bd8dc 100644
> --- a/lib/librte_eal/common/malloc_heap.c
> +++ b/lib/librte_eal/common/malloc_heap.c
> @@ -165,7 +165,9 @@ find_biggest_element(struct malloc_heap *heap, size_t *size,
>   		for (elem = LIST_FIRST(&heap->free_head[idx]);
>   				!!elem; elem = LIST_NEXT(elem, free_list)) {
>   			size_t cur_size;
> -			if (!check_hugepage_sz(flags, elem->msl->page_sz))
> +			if ((flags & RTE_MEMZONE_SIZE_HINT_ONLY) == 0 &&
> +					!check_hugepage_sz(flags,
> +						elem->msl->page_sz))
>   				continue;

Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>

Although to be frank, the whole concept of "reserving the biggest 
available memzone" is currently broken because of dynamic memory 
allocation. There is currently no way to allocate "as many hugepages 
as you can", so we're only looking at memory that is already 
allocated, which in the general case amounts to less than one page 
(unless you use legacy mode or the memory preallocation switches).
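
For readers hitting the limitation described above: the "legacy mode or memory preallocation switches" are EAL command-line options. A sketch, with the application name as a placeholder:

```shell
# Preallocate/pre-fault memory at init so the heap actually contains
# large free elements for find_biggest_element() to report.
./your-dpdk-app --legacy-mem ...            # legacy mode: all memory mapped up front
./your-dpdk-app --socket-mem 1024,1024 ...  # reserve 1024 MB per NUMA socket at startup
```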
  
Thomas Monjalon Oct. 28, 2018, 10:50 a.m. UTC | #2
08/10/2018 11:02, Burakov, Anatoly:
> On 07-Oct-18 8:31 PM, Darek Stojaczyk wrote:
> > RTE_MEMZONE_SIZE_HINT_ONLY wasn't checked in any way,
> > causing size hints to be treated as hard requirements.
> > This resulted in some allocations failing prematurely.
> > [...]
> 
> Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
> 
> Although to be frank, the whole concept of "reserving the biggest 
> available memzone" is currently broken because of dynamic memory 
> allocation. There is currently no way to allocate "as many hugepages 
> as you can", so we're only looking at memory that is already 
> allocated, which in the general case amounts to less than one page 
> (unless you use legacy mode or the memory preallocation switches).

Applied anyway, thanks
  

Patch

diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index ac7bbb3ba..d2a8bd8dc 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -165,7 +165,9 @@ find_biggest_element(struct malloc_heap *heap, size_t *size,
 		for (elem = LIST_FIRST(&heap->free_head[idx]);
 				!!elem; elem = LIST_NEXT(elem, free_list)) {
 			size_t cur_size;
-			if (!check_hugepage_sz(flags, elem->msl->page_sz))
+			if ((flags & RTE_MEMZONE_SIZE_HINT_ONLY) == 0 &&
+					!check_hugepage_sz(flags,
+						elem->msl->page_sz))
 				continue;
 			if (contig) {
 				cur_size =
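
For context, a caller-side sketch (not part of the patch) of the path this fix repairs: in the rte_memzone API, a length of 0 requests the biggest available element, and OR-ing RTE_MEMZONE_SIZE_HINT_ONLY into a page-size flag downgrades it from a hard requirement to a preference. The zone name below is made up for illustration.

```c
#include <rte_memzone.h>

/* Ask for the biggest free element (len == 0), preferring 1 GB pages
 * but accepting any page size. Before this fix, a heap with no free
 * 1 GB-page memory would have made this call fail despite the hint. */
const struct rte_memzone *mz = rte_memzone_reserve("mz_hint", 0,
		SOCKET_ID_ANY, RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY);
if (mz == NULL) {
	/* Still possible, e.g. the heap is fully allocated. */
	return -1;
}
```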