[v9,1/5] mempool: populate mempool with the page sized chunks of memory

Message ID 20190729121313.30639-2-vattunuru@marvell.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series: kni: add IOVA=VA support

Checks

Context                  Check    Description
ci/checkpatch            success  coding style OK
ci/Performance-Testing   fail     build patch failure
ci/Intel-compilation     fail     Compilation issues

Commit Message

Vamsi Krishna Attunuru July 29, 2019, 12:13 p.m. UTC
  From: Vamsi Attunuru <vattunuru@marvell.com>

This patch adds a routine that populates a mempool from page-aligned,
page-sized chunks of memory, ensuring that objects never span a page
boundary. This is useful for applications that require physically
contiguous mbuf memory while running in IOVA=VA mode.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
 lib/librte_mempool/rte_mempool.c           | 64 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool.h           | 17 ++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 3 files changed, 82 insertions(+)
  

Comments

Andrew Rybchenko July 29, 2019, 12:41 p.m. UTC | #1
On 7/29/19 3:13 PM, vattunuru@marvell.com wrote:
> From: Vamsi Attunuru <vattunuru@marvell.com>
>
> Patch adds a routine to populate mempool from page aligned and
> page sized chunks of memory to ensure memory objs do not fall
> across the page boundaries. It's useful for applications that
> require physically contiguous mbuf memory while running in
> IOVA=VA mode.
>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>

Once the two issues below are fixed:
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>

As I understand it, this is likely a temporary solution until the problem
is fixed in a generic way.

> ---
>   lib/librte_mempool/rte_mempool.c           | 64 ++++++++++++++++++++++++++++++
>   lib/librte_mempool/rte_mempool.h           | 17 ++++++++
>   lib/librte_mempool/rte_mempool_version.map |  1 +
>   3 files changed, 82 insertions(+)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 7260ce0..00619bd 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -414,6 +414,70 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>   	return ret;
>   }
>   
> +/* Function to populate mempool from page sized mem chunks, allocate page size
> + * of memory in memzone and populate them. Return the number of objects added,
> + * or a negative value on error.
> + */
> +int
> +rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
> +{
> +	char mz_name[RTE_MEMZONE_NAMESIZE];
> +	size_t align, pg_sz, pg_shift;
> +	const struct rte_memzone *mz;
> +	unsigned int mz_id, n;
> +	size_t min_chunk_size;
> +	int ret;
> +
> +	ret = mempool_ops_alloc_once(mp);
> +	if (ret != 0)
> +		return ret;
> +
> +	if (mp->nb_mem_chunks != 0)
> +		return -EEXIST;
> +
> +	pg_sz = get_min_page_size(mp->socket_id);
> +	pg_shift = rte_bsf32(pg_sz);
> +
> +	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> +
> +		ret = rte_mempool_ops_calc_mem_size(mp, n,
> +				pg_shift, &min_chunk_size, &align);
> +
> +		if (ret < 0 || min_chunk_size > pg_sz)

If min_chunk_size is greater than pg_sz, ret is 0 and the function
returns success.

> +			goto fail;
> +
> +		ret = snprintf(mz_name, sizeof(mz_name),
> +			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> +		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
> +			ret = -ENAMETOOLONG;
> +			goto fail;
> +		}
> +
> +		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
> +				mp->socket_id, 0, align);
> +
> +		if (mz == NULL) {
> +			ret = -rte_errno;
> +			goto fail;
> +		}
> +
> +		ret = rte_mempool_populate_iova(mp, mz->addr,
> +				mz->iova, mz->len,
> +				rte_mempool_memchunk_mz_free,
> +				(void *)(uintptr_t)mz);
> +		if (ret < 0) {
> +			rte_memzone_free(mz);
> +			goto fail;
> +		}
> +	}
> +
> +	return mp->size;
> +
> +fail:
> +	rte_mempool_free_memchunks(mp);
> +	return ret;
> +}
> +
>   /* Default function to populate the mempool: allocate memory in memzones,
>    * and populate them. Return the number of objects added, or a negative
>    * value on error.
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 8053f7a..3046e4f 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1062,6 +1062,23 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>   	void *opaque);
>   
>   /**

  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.

is missing

> + * Add memory from page sized memzones for objects in the pool at init
> + *
> + * This is the function used to populate the mempool with page aligned and
> + * page sized memzone memory to avoid spreading object memory across two pages
> + * and to ensure all mempool objects reside on the page memory.
> + *
> + * @param mp
> + *   A pointer to the mempool structure.
> + * @return
> + *   The number of objects added on success.
> + *   On error, the chunk is not added in the memory list of the
> + *   mempool and a negative errno is returned.
> + */
> +__rte_experimental
> +int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
> +
> +/**
>    * Add memory for objects in the pool at init
>    *
>    * This is the default function used by rte_mempool_create() to populate
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index 17cbca4..9a6fe65 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -57,4 +57,5 @@ EXPERIMENTAL {
>   	global:
>   
>   	rte_mempool_ops_get_info;
> +	rte_mempool_populate_from_pg_sz_chunks;
>   };
  
Vamsi Krishna Attunuru July 29, 2019, 1:33 p.m. UTC | #2
> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Monday, July 29, 2019 6:12 PM
> To: Vamsi Krishna Attunuru <vattunuru@marvell.com>; dev@dpdk.org
> Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> olivier.matz@6wind.com; ferruh.yigit@intel.com;
> anatoly.burakov@intel.com; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v9 1/5] mempool: populate mempool
> with the page sized chunks of memory
> 
> On 7/29/19 3:13 PM, vattunuru@marvell.com wrote:
> > [...]
> > +		if (ret < 0 || min_chunk_size > pg_sz)
> 
> If min_chunk_size is greater than pg_sz, ret is 0 and the function returns success.

Ack, will fix it in next version.

> 
> > [...]
> >   /**
> 
>   * @warning
>   * @b EXPERIMENTAL: this API may change without prior notice.
> 
> is missing

Ack

> 
> > [...]
  

Patch

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7260ce0..00619bd 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,70 @@  rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Function to populate mempool from page sized mem chunks, allocate page size
+ * of memory in memzone and populate them. Return the number of objects added,
+ * or a negative value on error.
+ */
+int
+rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	size_t align, pg_sz, pg_shift;
+	const struct rte_memzone *mz;
+	unsigned int mz_id, n;
+	size_t min_chunk_size;
+	int ret;
+
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
+	if (mp->nb_mem_chunks != 0)
+		return -EEXIST;
+
+	pg_sz = get_min_page_size(mp->socket_id);
+	pg_shift = rte_bsf32(pg_sz);
+
+	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+
+		ret = rte_mempool_ops_calc_mem_size(mp, n,
+				pg_shift, &min_chunk_size, &align);
+
+		if (ret < 0 || min_chunk_size > pg_sz)
+			goto fail;
+
+		ret = snprintf(mz_name, sizeof(mz_name),
+			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
+		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+			ret = -ENAMETOOLONG;
+			goto fail;
+		}
+
+		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
+				mp->socket_id, 0, align);
+
+		if (mz == NULL) {
+			ret = -rte_errno;
+			goto fail;
+		}
+
+		ret = rte_mempool_populate_iova(mp, mz->addr,
+				mz->iova, mz->len,
+				rte_mempool_memchunk_mz_free,
+				(void *)(uintptr_t)mz);
+		if (ret < 0) {
+			rte_memzone_free(mz);
+			goto fail;
+		}
+	}
+
+	return mp->size;
+
+fail:
+	rte_mempool_free_memchunks(mp);
+	return ret;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..3046e4f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1062,6 +1062,23 @@  rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	void *opaque);
 
 /**
+ * Add memory from page sized memzones for objects in the pool at init
+ *
+ * This is the function used to populate the mempool with page aligned and
+ * page sized memzone memory to avoid spreading object memory across two pages
+ * and to ensure all mempool objects reside on the page memory.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+__rte_experimental
+int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
+
+/**
  * Add memory for objects in the pool at init
  *
  * This is the default function used by rte_mempool_create() to populate
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4..9a6fe65 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,4 +57,5 @@  EXPERIMENTAL {
 	global:
 
 	rte_mempool_ops_get_info;
+	rte_mempool_populate_from_pg_sz_chunks;
 };