[RFC,2/4] mempool: unalign size when calculating required mem amount

Message ID 20190719133845.32432-3-olivier.matz@6wind.com
State Superseded, archived
Delegated to: Thomas Monjalon
Series
  • mempool: avoid objects allocations across pages

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Olivier Matz July 19, 2019, 1:38 p.m. UTC
The size returned by rte_mempool_op_calc_mem_size_default() is aligned
to the specified page size. This means that with big pages, the returned
amount is more than what we really need to populate the mempool.

This problem is mitigated by the allocation method of
rte_mempool_populate_default(): in some conditions (when
try_iova_contig_mempool=true), it first tries to allocate the memory
for all objects in one IOVA-contiguous area, without the alignment
constraint. If that fails, it falls back to the big aligned allocation,
which can itself fall back to several smaller allocations.

This commit changes rte_mempool_op_calc_mem_size_default() to return the
unaligned amount of memory (the alignment constraint is still returned
via the *align argument), and removes the optimistic contiguous
allocation done when try_iova_contig_mempool=true.

This will make the amount of allocated memory more predictable: it will
be more than the optimistic contiguous allocation, but less than the big
aligned allocation.
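
As an illustration, below is a minimal, self-contained sketch of the
intended calculation. It is not the actual DPDK implementation (the
real one lives in lib/librte_mempool/rte_mempool_ops_default.c, which,
as noted in the review below, is not modified by this revision of the
patch); the function name and the *align policy are illustrative only.
total_elt_sz stands for the full footprint of one object (header +
element + trailer):

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Hypothetical sketch: compute the memory needed for obj_num objects
 * without rounding the total up to a page boundary. The page-size
 * constraint is reported through *align instead.
 */
static ssize_t
calc_mem_size_sketch(uint32_t obj_num, size_t total_elt_sz,
		uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	size_t pg_sz, obj_per_page, mem_size;

	if (pg_shift == 0) {
		/* no page constraint: one flat contiguous chunk */
		mem_size = total_elt_sz * obj_num;
	} else {
		pg_sz = (size_t)1 << pg_shift;
		obj_per_page = pg_sz / total_elt_sz;
		if (obj_per_page == 0) {
			/* object larger than a page: each object needs
			 * a whole number of pages */
			mem_size = ((total_elt_sz + pg_sz - 1) / pg_sz) *
				pg_sz * obj_num;
		} else {
			/* full pages for most objects, then only the
			 * exact size of the tail: this is where the
			 * rounding up to pg_sz is dropped */
			mem_size = (obj_num / obj_per_page) * pg_sz;
			mem_size += (obj_num % obj_per_page) * total_elt_sz;
		}
	}

	*min_chunk_size = total_elt_sz;
	*align = (pg_shift == 0) ? sizeof(void *) : (size_t)1 << pg_shift;
	return (ssize_t)mem_size;
}

For example, with 1G pages (pg_shift = 30) and 100000 objects of 2048
bytes each, all objects fit in a single page, so this returns
100000 * 2048 bytes (~195 MB) instead of a full, page-aligned 1 GB.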

This opens the door for the next commits, which will try to prevent objects
from being located across pages.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_mempool/rte_mempool.c     | 44 ++++--------------------------------
 lib/librte_mempool/rte_mempool.h     |  2 +-
 lib/librte_mempool/rte_mempool_ops.c |  4 +++-
 3 files changed, 9 insertions(+), 41 deletions(-)

Comments

Andrew Rybchenko Aug. 7, 2019, 3:21 p.m. UTC | #1
On 7/19/19 4:38 PM, Olivier Matz wrote:
> The size returned by rte_mempool_op_calc_mem_size_default() is aligned
> to the specified page size. This means that with big pages, the returned
> amount is more than what we really need to populate the mempool.
>
> This problem is mitigated by the allocation method of
> rte_mempool_populate_default(): in some conditions (when
> try_iova_contig_mempool=true), it first tries to allocate the memory
> for all objects in one IOVA-contiguous area, without the alignment
> constraint. If that fails, it falls back to the big aligned allocation,
> which can itself fall back to several smaller allocations.
>
> This commit changes rte_mempool_op_calc_mem_size_default() to return the
> unaligned amount of memory (the alignment constraint is still returned
> via the *align argument),

It does not, as far as I can see. There are no changes in
lib/librte_mempool/rte_mempool_ops_default.c.

> and removes the optimistic contiguous
> allocation done when try_iova_contig_mempool=true.

Unfortunately, I don't understand why these two changes are combined
into one patch. The first looks good to me, but the second is
questionable: it can impact performance.

> This will make the amount of allocated memory more predictable: it will
> be more than the optimistic contiguous allocation, but less than the big
> aligned allocation.
>
> This opens the door for the next commits, which will try to prevent objects
> from being located across pages.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>   lib/librte_mempool/rte_mempool.c     | 44 ++++--------------------------------
>   lib/librte_mempool/rte_mempool.h     |  2 +-
>   lib/librte_mempool/rte_mempool_ops.c |  4 +++-
>   3 files changed, 9 insertions(+), 41 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 0f29e8712..335032dc8 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -430,7 +430,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   	unsigned mz_id, n;
>   	int ret;
>   	bool need_iova_contig_obj;
> -	bool try_iova_contig_mempool;
>   	bool alloc_in_ext_mem;
>   
>   	ret = mempool_ops_alloc_once(mp);
> @@ -477,18 +476,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   	 * wasting some space this way, but it's much nicer than looping around
>   	 * trying to reserve each and every page size.
>   	 *
> -	 * However, since size calculation will produce page-aligned sizes, it
> -	 * makes sense to first try and see if we can reserve the entire memzone
> -	 * in one contiguous chunk as well (otherwise we might end up wasting a
> -	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
> -	 * memory, then we'll go and reserve space page-by-page.
> -	 *
>   	 * We also have to take into account the fact that memory that we're
>   	 * going to allocate from can belong to an externally allocated memory
>   	 * area, in which case the assumption of IOVA as VA mode being
> -	 * synonymous with IOVA contiguousness will not hold. We should also try
> -	 * to go for contiguous memory even if we're in no-huge mode, because
> -	 * external memory may in fact be IOVA-contiguous.
> +	 * synonymous with IOVA contiguousness will not hold.
>   	 */
>   
>   	/* check if we can retrieve a valid socket ID */
> @@ -497,7 +488,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   		return -EINVAL;
>   	alloc_in_ext_mem = (ret == 1);
>   	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> -	try_iova_contig_mempool = false;
>   
>   	if (!need_iova_contig_obj) {
>   		pg_sz = 0;
> @@ -506,7 +496,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   		pg_sz = 0;
>   		pg_shift = 0;
>   	} else if (rte_eal_has_hugepages() || alloc_in_ext_mem) {
> -		try_iova_contig_mempool = true;
>   		pg_sz = get_min_page_size(mp->socket_id);
>   		pg_shift = rte_bsf32(pg_sz);
>   	} else {
> @@ -518,12 +507,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   		size_t min_chunk_size;
>   		unsigned int flags;
>   
> -		if (try_iova_contig_mempool || pg_sz == 0)
> -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> -					0, &min_chunk_size, &align);
> -		else
> -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> -					pg_shift, &min_chunk_size, &align);
> +		mem_size = rte_mempool_ops_calc_mem_size(
> +			mp, n, pg_shift, &min_chunk_size, &align);
>   
>   		if (mem_size < 0) {
>   			ret = mem_size;
> @@ -542,31 +527,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   		/* if we're trying to reserve contiguous memory, add appropriate
>   		 * memzone flag.
>   		 */
> -		if (try_iova_contig_mempool)
> +		if (min_chunk_size == (size_t)mem_size)
>   			flags |= RTE_MEMZONE_IOVA_CONTIG;
>   
>   		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>   				mp->socket_id, flags, align);
>   
> -		/* if we were trying to allocate contiguous memory, failed and
> -		 * minimum required contiguous chunk fits minimum page, adjust
> -		 * memzone size to the page size, and try again.
> -		 */
> -		if (mz == NULL && try_iova_contig_mempool &&
> -				min_chunk_size <= pg_sz) {
> -			try_iova_contig_mempool = false;
> -			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
> -
> -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> -					pg_shift, &min_chunk_size, &align);
> -			if (mem_size < 0) {
> -				ret = mem_size;
> -				goto fail;
> -			}
> -
> -			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> -				mp->socket_id, flags, align);
> -		}
>   		/* don't try reserving with 0 size if we were asked to reserve
>   		 * IOVA-contiguous memory.
>   		 */
> @@ -594,7 +560,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>   		else
>   			iova = RTE_BAD_IOVA;
>   
> -		if (try_iova_contig_mempool || pg_sz == 0)
> +		if (pg_sz == 0)
>   			ret = rte_mempool_populate_iova(mp, mz->addr,
>   				iova, mz->len,
>   				rte_mempool_memchunk_mz_free,
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 8053f7a04..7bc10e699 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -458,7 +458,7 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
>    * @param[out] align
>    *   Location for required memory chunk alignment.
>    * @return
> - *   Required memory size aligned at page boundary.
> + *   Required memory size.
>    */
>   typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
>   		uint32_t obj_num,  uint32_t pg_shift,
> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> index e02eb702c..22c5251eb 100644
> --- a/lib/librte_mempool/rte_mempool_ops.c
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> @@ -100,7 +100,9 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
>   	return ops->get_count(mp);
>   }
>   
> -/* wrapper to notify new memory area to external mempool */
> +/* wrapper to calculate the memory size required to store given number
> + * of objects
> + */
>   ssize_t
>   rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
>   				uint32_t obj_num, uint32_t pg_shift,
Olivier Matz Oct. 28, 2019, 2:06 p.m. UTC | #2
On Wed, Aug 07, 2019 at 06:21:20PM +0300, Andrew Rybchenko wrote:
> On 7/19/19 4:38 PM, Olivier Matz wrote:
> > The size returned by rte_mempool_op_calc_mem_size_default() is aligned
> > to the specified page size. This means that with big pages, the returned
> > amount is more than what we really need to populate the mempool.
> > 
> > This problem is mitigated by the allocation method of
> > rte_mempool_populate_default(): in some conditions (when
> > try_iova_contig_mempool=true), it first tries to allocate the memory
> > for all objects in one IOVA-contiguous area, without the alignment
> > constraint. If that fails, it falls back to the big aligned allocation,
> > which can itself fall back to several smaller allocations.
> > 
> > This commit changes rte_mempool_op_calc_mem_size_default() to return the
> > unaligned amount of memory (the alignment constraint is still returned
> > via the *align argument),
> 
> It does not, as far as I can see. There are no changes in
> lib/librte_mempool/rte_mempool_ops_default.c.

True, sorry. This is fixed in the new version.

> > and removes the optimistic contiguous
> > allocation done when try_iova_contig_mempool=true.
> 
> Unfortunately, I don't understand why these two changes are combined
> into one patch. The first looks good to me, but the second is
> questionable: it can impact performance.

I split this change into two patches; that's indeed clearer.

However, I didn't get what kind of performance impact you expect.

> > This will make the amount of allocated memory more predictable: it will
> > be more than the optimistic contiguous allocation, but less than the big
> > aligned allocation.
> > 
> > This opens the door for the next commits, which will try to prevent objects
> > from being located across pages.
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> >   lib/librte_mempool/rte_mempool.c     | 44 ++++--------------------------------
> >   lib/librte_mempool/rte_mempool.h     |  2 +-
> >   lib/librte_mempool/rte_mempool_ops.c |  4 +++-
> >   3 files changed, 9 insertions(+), 41 deletions(-)
> > 
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index 0f29e8712..335032dc8 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -430,7 +430,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >   	unsigned mz_id, n;
> >   	int ret;
> >   	bool need_iova_contig_obj;
> > -	bool try_iova_contig_mempool;
> >   	bool alloc_in_ext_mem;
> >   	ret = mempool_ops_alloc_once(mp);
> > @@ -477,18 +476,10 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >   	 * wasting some space this way, but it's much nicer than looping around
> >   	 * trying to reserve each and every page size.
> >   	 *
> > -	 * However, since size calculation will produce page-aligned sizes, it
> > -	 * makes sense to first try and see if we can reserve the entire memzone
> > -	 * in one contiguous chunk as well (otherwise we might end up wasting a
> > -	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
> > -	 * memory, then we'll go and reserve space page-by-page.
> > -	 *
> >   	 * We also have to take into account the fact that memory that we're
> >   	 * going to allocate from can belong to an externally allocated memory
> >   	 * area, in which case the assumption of IOVA as VA mode being
> > -	 * synonymous with IOVA contiguousness will not hold. We should also try
> > -	 * to go for contiguous memory even if we're in no-huge mode, because
> > -	 * external memory may in fact be IOVA-contiguous.
> > +	 * synonymous with IOVA contiguousness will not hold.
> >   	 */
> >   	/* check if we can retrieve a valid socket ID */
> > @@ -497,7 +488,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >   		return -EINVAL;
> >   	alloc_in_ext_mem = (ret == 1);
> >   	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> > -	try_iova_contig_mempool = false;
> >   	if (!need_iova_contig_obj) {
> >   		pg_sz = 0;
> > @@ -506,7 +496,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >   		pg_sz = 0;
> >   		pg_shift = 0;
> >   	} else if (rte_eal_has_hugepages() || alloc_in_ext_mem) {
> > -		try_iova_contig_mempool = true;
> >   		pg_sz = get_min_page_size(mp->socket_id);
> >   		pg_shift = rte_bsf32(pg_sz);
> >   	} else {
> > @@ -518,12 +507,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >   		size_t min_chunk_size;
> >   		unsigned int flags;
> > -		if (try_iova_contig_mempool || pg_sz == 0)
> > -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> > -					0, &min_chunk_size, &align);
> > -		else
> > -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> > -					pg_shift, &min_chunk_size, &align);
> > +		mem_size = rte_mempool_ops_calc_mem_size(
> > +			mp, n, pg_shift, &min_chunk_size, &align);
> >   		if (mem_size < 0) {
> >   			ret = mem_size;
> > @@ -542,31 +527,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >   		/* if we're trying to reserve contiguous memory, add appropriate
> >   		 * memzone flag.
> >   		 */
> > -		if (try_iova_contig_mempool)
> > +		if (min_chunk_size == (size_t)mem_size)
> >   			flags |= RTE_MEMZONE_IOVA_CONTIG;
> >   		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> >   				mp->socket_id, flags, align);
> > -		/* if we were trying to allocate contiguous memory, failed and
> > -		 * minimum required contiguous chunk fits minimum page, adjust
> > -		 * memzone size to the page size, and try again.
> > -		 */
> > -		if (mz == NULL && try_iova_contig_mempool &&
> > -				min_chunk_size <= pg_sz) {
> > -			try_iova_contig_mempool = false;
> > -			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
> > -
> > -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> > -					pg_shift, &min_chunk_size, &align);
> > -			if (mem_size < 0) {
> > -				ret = mem_size;
> > -				goto fail;
> > -			}
> > -
> > -			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > -				mp->socket_id, flags, align);
> > -		}
> >   		/* don't try reserving with 0 size if we were asked to reserve
> >   		 * IOVA-contiguous memory.
> >   		 */
> > @@ -594,7 +560,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >   		else
> >   			iova = RTE_BAD_IOVA;
> > -		if (try_iova_contig_mempool || pg_sz == 0)
> > +		if (pg_sz == 0)
> >   			ret = rte_mempool_populate_iova(mp, mz->addr,
> >   				iova, mz->len,
> >   				rte_mempool_memchunk_mz_free,
> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > index 8053f7a04..7bc10e699 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -458,7 +458,7 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
> >    * @param[out] align
> >    *   Location for required memory chunk alignment.
> >    * @return
> > - *   Required memory size aligned at page boundary.
> > + *   Required memory size.
> >    */
> >   typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
> >   		uint32_t obj_num,  uint32_t pg_shift,
> > diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> > index e02eb702c..22c5251eb 100644
> > --- a/lib/librte_mempool/rte_mempool_ops.c
> > +++ b/lib/librte_mempool/rte_mempool_ops.c
> > @@ -100,7 +100,9 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
> >   	return ops->get_count(mp);
> >   }
> > -/* wrapper to notify new memory area to external mempool */
> > +/* wrapper to calculate the memory size required to store given number
> > + * of objects
> > + */
> >   ssize_t
> >   rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
> >   				uint32_t obj_num, uint32_t pg_shift,
>

Patch

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0f29e8712..335032dc8 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -430,7 +430,6 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned mz_id, n;
 	int ret;
 	bool need_iova_contig_obj;
-	bool try_iova_contig_mempool;
 	bool alloc_in_ext_mem;
 
 	ret = mempool_ops_alloc_once(mp);
@@ -477,18 +476,10 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 	 * wasting some space this way, but it's much nicer than looping around
 	 * trying to reserve each and every page size.
 	 *
-	 * However, since size calculation will produce page-aligned sizes, it
-	 * makes sense to first try and see if we can reserve the entire memzone
-	 * in one contiguous chunk as well (otherwise we might end up wasting a
-	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
-	 * memory, then we'll go and reserve space page-by-page.
-	 *
 	 * We also have to take into account the fact that memory that we're
 	 * going to allocate from can belong to an externally allocated memory
 	 * area, in which case the assumption of IOVA as VA mode being
-	 * synonymous with IOVA contiguousness will not hold. We should also try
-	 * to go for contiguous memory even if we're in no-huge mode, because
-	 * external memory may in fact be IOVA-contiguous.
+	 * synonymous with IOVA contiguousness will not hold.
 	 */
 
 	/* check if we can retrieve a valid socket ID */
@@ -497,7 +488,6 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 		return -EINVAL;
 	alloc_in_ext_mem = (ret == 1);
 	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
-	try_iova_contig_mempool = false;
 
 	if (!need_iova_contig_obj) {
 		pg_sz = 0;
@@ -506,7 +496,6 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 		pg_sz = 0;
 		pg_shift = 0;
 	} else if (rte_eal_has_hugepages() || alloc_in_ext_mem) {
-		try_iova_contig_mempool = true;
 		pg_sz = get_min_page_size(mp->socket_id);
 		pg_shift = rte_bsf32(pg_sz);
 	} else {
@@ -518,12 +507,8 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 		size_t min_chunk_size;
 		unsigned int flags;
 
-		if (try_iova_contig_mempool || pg_sz == 0)
-			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
-					0, &min_chunk_size, &align);
-		else
-			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
-					pg_shift, &min_chunk_size, &align);
+		mem_size = rte_mempool_ops_calc_mem_size(
+			mp, n, pg_shift, &min_chunk_size, &align);
 
 		if (mem_size < 0) {
 			ret = mem_size;
@@ -542,31 +527,12 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 		/* if we're trying to reserve contiguous memory, add appropriate
 		 * memzone flag.
 		 */
-		if (try_iova_contig_mempool)
+		if (min_chunk_size == (size_t)mem_size)
 			flags |= RTE_MEMZONE_IOVA_CONTIG;
 
 		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
 				mp->socket_id, flags, align);
 
-		/* if we were trying to allocate contiguous memory, failed and
-		 * minimum required contiguous chunk fits minimum page, adjust
-		 * memzone size to the page size, and try again.
-		 */
-		if (mz == NULL && try_iova_contig_mempool &&
-				min_chunk_size <= pg_sz) {
-			try_iova_contig_mempool = false;
-			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
-
-			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
-					pg_shift, &min_chunk_size, &align);
-			if (mem_size < 0) {
-				ret = mem_size;
-				goto fail;
-			}
-
-			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
-				mp->socket_id, flags, align);
-		}
 		/* don't try reserving with 0 size if we were asked to reserve
 		 * IOVA-contiguous memory.
 		 */
@@ -594,7 +560,7 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 		else
 			iova = RTE_BAD_IOVA;
 
-		if (try_iova_contig_mempool || pg_sz == 0)
+		if (pg_sz == 0)
 			ret = rte_mempool_populate_iova(mp, mz->addr,
 				iova, mz->len,
 				rte_mempool_memchunk_mz_free,
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a04..7bc10e699 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -458,7 +458,7 @@  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  * @param[out] align
  *   Location for required memory chunk alignment.
  * @return
- *   Required memory size aligned at page boundary.
+ *   Required memory size.
  */
 typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
 		uint32_t obj_num,  uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index e02eb702c..22c5251eb 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -100,7 +100,9 @@  rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to notify new memory area to external mempool */
+/* wrapper to calculate the memory size required to store given number
+ * of objects
+ */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,