[v2,2/4] mempool: use generic memory management

Message ID 20200601103139.8612-3-fady@mellanox.com
State Superseded
Delegated to: Thomas Monjalon
Series
  • build mempool on Windows

Checks

Context Check Description
ci/Intel-compilation fail apply issues
ci/checkpatch success coding style OK

Commit Message

Fady Bader June 1, 2020, 10:31 a.m. UTC
The mempool library used Unix memory management calls (mmap, munmap,
mlock, and getpagesize), which are not supported on Windows.
Use the generic EAL memory management wrappers instead.

Signed-off-by: Fady Bader <fady@mellanox.com>
---
 lib/librte_mempool/rte_mempool.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

Comments

Dmitry Kozlyuk June 1, 2020, 7:59 p.m. UTC | #1
On Mon,  1 Jun 2020 13:31:37 +0300
Fady Bader <fady@mellanox.com> wrote:

[snip]
>  /* populate the mempool with an anonymous mapping */
> @@ -740,20 +739,20 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
>  	}
>  
>  	/* get chunk of virtually continuous memory */
> -	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
> -		MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> -	if (addr == MAP_FAILED) {
> +	addr = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
> +		RTE_MAP_SHARED | RTE_MAP_ANONYMOUS, -1, 0);
> +	if (addr == NULL) {
>  		rte_errno = errno;

rte_mem_map() sets rte_errno; on Windows, using errno here is invalid.

>  		return 0;
>  	}
>  	/* can't use MMAP_LOCKED, it does not exist on BSD */
> -	if (mlock(addr, size) < 0) {
> +	if (rte_mem_lock(addr, size) < 0) {
>  		rte_errno = errno;

Same as above.

[snip]

Two more things:

1. What do you think about changing rte_ to rte_eal_ prefix for memory
management wrappers in MM series as Andrew Rybchenko suggested for v1? Since
the functions are DPDK-internal, this sounds reasonable to me.

2. Please use --in-reply-to for v2 and on.
Ranjit Menon June 1, 2020, 8:47 p.m. UTC | #2
On 6/1/2020 12:59 PM, Dmitry Kozlyuk wrote:
> On Mon,  1 Jun 2020 13:31:37 +0300
> Fady Bader <fady@mellanox.com> wrote:
>
> [snip]
>>   /* populate the mempool with an anonymous mapping */
>> @@ -740,20 +739,20 @@ rte_mempool_populate_anon(struct rte_mempool *mp)
>>   	}
>>   
>>   	/* get chunk of virtually continuous memory */
>> -	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
>> -		MAP_SHARED | MAP_ANONYMOUS, -1, 0);
>> -	if (addr == MAP_FAILED) {
>> +	addr = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
>> +		RTE_MAP_SHARED | RTE_MAP_ANONYMOUS, -1, 0);
>> +	if (addr == NULL) {
>>   		rte_errno = errno;
> rte_mem_map() sets rte_errno; on Windows, using errno here is invalid.
>
>>   		return 0;
>>   	}
>>   	/* can't use MMAP_LOCKED, it does not exist on BSD */
>> -	if (mlock(addr, size) < 0) {
>> +	if (rte_mem_lock(addr, size) < 0) {
>>   		rte_errno = errno;
> Same as above.
>
> [snip]
>
> Two more things:
>
> 1. What do you think about changing rte_ to rte_eal_ prefix for memory
> management wrappers in MM series as Andrew Rybchenko suggested for v1? Since
> the functions are DPDK-internal, this sounds reasonable to me.

I fully support this.

ranjit m.
Thomas Monjalon June 1, 2020, 9:14 p.m. UTC | #3
01/06/2020 21:59, Dmitry Kozlyuk:
> 1. What do you think about changing rte_ to rte_eal_ prefix for memory
> management wrappers in MM series as Andrew Rybchenko suggested for v1? Since
> the functions are DPDK-internal, this sounds reasonable to me.

For lib-internal functions, the prefix should not start with rte_.
For exported functions (even if internal), the prefix should be rte_[component]_.
For memory-related functions, rte_mem_ is better than rte_eal_.
Andrew Rybchenko June 2, 2020, 7:40 a.m. UTC | #4
On 6/2/20 12:14 AM, Thomas Monjalon wrote:
> 01/06/2020 21:59, Dmitry Kozlyuk:
>> 1. What do you think about changing rte_ to rte_eal_ prefix for memory
>> management wrappers in MM series as Andrew Rybchenko suggested for v1? Since
>> the functions are DPDK-internal, this sounds reasonable to me.
> 
> For lib-internal functions, the prefix should not start with rte_.
> For exported functions (even if internal), the prefix should be rte_[component]_.
> For memory-related functions, rte_mem_ is better than rte_eal_.

Yes, I agree.

Patch

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0bde995b5..c59f37a4e 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -12,7 +12,6 @@ 
 #include <inttypes.h>
 #include <errno.h>
 #include <sys/queue.h>
-#include <sys/mman.h>
 
 #include <rte_common.h>
 #include <rte_log.h>
@@ -148,7 +147,7 @@  get_min_page_size(int socket_id)
 
 	rte_memseg_list_walk(find_min_pagesz, &wa);
 
-	return wa.min == SIZE_MAX ? (size_t) getpagesize() : wa.min;
+	return wa.min == SIZE_MAX ? (size_t) rte_get_page_size() : wa.min;
 }
 
 
@@ -526,7 +525,7 @@  rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
 		*pg_sz = get_min_page_size(mp->socket_id);
 	else
-		*pg_sz = getpagesize();
+		*pg_sz = rte_get_page_size();
 
 	rte_mempool_trace_get_page_size(mp, *pg_sz);
 	return 0;
@@ -686,7 +685,7 @@  get_anon_size(const struct rte_mempool *mp)
 	size_t min_chunk_size;
 	size_t align;
 
-	pg_sz = getpagesize();
+	pg_sz = rte_get_page_size();
 	pg_shift = rte_bsf32(pg_sz);
 	size = rte_mempool_ops_calc_mem_size(mp, mp->size, pg_shift,
 					     &min_chunk_size, &align);
@@ -710,7 +709,7 @@  rte_mempool_memchunk_anon_free(struct rte_mempool_memhdr *memhdr,
 	if (size < 0)
 		return;
 
-	munmap(opaque, size);
+	rte_mem_unmap(opaque, size);
 }
 
 /* populate the mempool with an anonymous mapping */
@@ -740,20 +739,20 @@  rte_mempool_populate_anon(struct rte_mempool *mp)
 	}
 
 	/* get chunk of virtually continuous memory */
-	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
-		MAP_SHARED | MAP_ANONYMOUS, -1, 0);
-	if (addr == MAP_FAILED) {
+	addr = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
+		RTE_MAP_SHARED | RTE_MAP_ANONYMOUS, -1, 0);
+	if (addr == NULL) {
 		rte_errno = errno;
 		return 0;
 	}
 	/* can't use MMAP_LOCKED, it does not exist on BSD */
-	if (mlock(addr, size) < 0) {
+	if (rte_mem_lock(addr, size) < 0) {
 		rte_errno = errno;
-		munmap(addr, size);
+		rte_mem_unmap(addr, size);
 		return 0;
 	}
 
-	ret = rte_mempool_populate_virt(mp, addr, size, getpagesize(),
+	ret = rte_mempool_populate_virt(mp, addr, size, rte_get_page_size(),
 		rte_mempool_memchunk_anon_free, addr);
 	if (ret == 0) /* should not happen */
 		ret = -ENOBUFS;