[dpdk-dev,v8,1/3] mempool: support external mempool operations
Commit Message
Until now, the objects stored in a mempool were internally stored in a
ring. This patch introduces the possibility to register external handlers
replacing the ring.
The default behavior remains unchanged, but calling the new function
rte_mempool_set_ops_byname() right after rte_mempool_create_empty() allows
the user to change the handler that will be used when populating
the mempool.
This patch also adds a set of default ops (function callbacks) based
on rte_ring.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
app/test/test_mempool_perf.c | 1 -
lib/librte_mempool/Makefile | 2 +
lib/librte_mempool/rte_mempool.c | 73 ++++-----
lib/librte_mempool/rte_mempool.h | 247 ++++++++++++++++++++++++++++---
lib/librte_mempool/rte_mempool_default.c | 157 ++++++++++++++++++++
lib/librte_mempool/rte_mempool_ops.c | 149 +++++++++++++++++++
6 files changed, 562 insertions(+), 67 deletions(-)
create mode 100644 lib/librte_mempool/rte_mempool_default.c
create mode 100644 lib/librte_mempool/rte_mempool_ops.c
Comments
Hi,
(Apologies for the overly-eager email sent on this thread earlier. I will be more careful in future.)
This is more of a question/clarification than a comment. (And I have taken only some snippets from original mail to keep it cleaner)
<snip>
> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
<snip>
From the above, what I understand is that multiple packet pool handlers can be created.
I have a use-case where the application has multiple pools, but only the packet pool is hardware-backed. Using the hardware for general buffer requirements would prove costly.
From what I understand from the patch, selection of the pool handler is based on the flags below.
<snip>
> + /*
> + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> + * set the correct index into the table of ops structs.
> + */
> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> + else if (flags & MEMPOOL_F_SP_PUT)
> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> + else if (flags & MEMPOOL_F_SC_GET)
> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> + else
> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> +
Is there any way I can achieve the above use case of multiple pools which can be selected by an application - something like a run-time toggle/flag?
-
Shreyansh
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 03, 2016 8:28 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com; David Hunt <david.hunt@intel.com>
> Subject: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
[...]
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> + struct rte_mempool_ops *ops;
> + int16_t ops_index;
> +
> + rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> + if (rte_mempool_ops_table.num_ops >=
> + RTE_MEMPOOL_MAX_OPS_IDX) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Maximum number of mempool ops structs exceeded\n");
> + return -ENOSPC;
> + }
> +
> + if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Missing callback while registering mempool ops\n");
> + return -EINVAL;
> + }
> +
> + if (strlen(h->name) >= sizeof(ops->name) - 1) {
> + RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> + __func__, h->name);
> + rte_errno = EEXIST;
> + return NULL;
rte_mempool_ops_register() has return type 'int'. The statement above should probably be 'return -EEXIST;' (a negative errno, like the other error paths) rather than 'return NULL;'.
> + }
> +
> + ops_index = rte_mempool_ops_table.num_ops++;
> + ops = &rte_mempool_ops_table.ops[ops_index];
> + snprintf(ops->name, sizeof(ops->name), "%s", h->name);
> + ops->alloc = h->alloc;
> + ops->put = h->put;
> + ops->get = h->get;
> + ops->get_count = h->get_count;
> +
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +
> + return ops_index;
> +}
> +
[...]
-
Shreyansh
Hi Shreyansh,
On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> Hi,
>
> (Apologies for overly-eager email sent on this thread earlier. Will be more careful in future).
>
> This is more of a question/clarification than a comment. (And I have taken only some snippets from original mail to keep it cleaner)
>
> <snip>
>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> <snip>
>
> From the above what I understand is that multiple packet pool handlers can be created.
>
> I have a use-case where application has multiple pools but only the packet pool is hardware backed. Using the hardware for general buffer requirements would prove costly.
> From what I understand from the patch, selection of the pool is based on the flags below.
The flags are only used to select one of the default handlers, for backward compatibility through the rte_mempool_create call. If you wish to use a mempool handler that is not one of the defaults (i.e. a new hardware handler), you would use rte_mempool_create_empty followed by the rte_mempool_set_ops_byname call.
So, for the external handlers, you create an empty mempool, then set the operations (ops) for that particular mempool.
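To make that flow concrete, a minimal sketch (the handler name "my_hw_handler" is hypothetical and would have to be registered elsewhere; the two-argument set_ops_byname form matches this patch, and error handling is reduced to the essentials):

#include <rte_mempool.h>

/* Sketch: create an empty mempool, attach an external handler by name,
 * then populate it using that handler. */
static struct rte_mempool *
create_pool_with_ext_handler(unsigned n, unsigned elt_size, int socket_id)
{
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty("ext_pool", n, elt_size,
                                      0 /* cache */, 0 /* priv size */,
                                      socket_id, 0 /* flags */);
        if (mp == NULL)
                return NULL;

        /* Must be done before the mempool is populated. */
        if (rte_mempool_set_ops_byname(mp, "my_hw_handler") < 0) {
                rte_mempool_free(mp);
                return NULL;
        }

        /* Objects are allocated and added through the selected handler. */
        if (rte_mempool_populate_default(mp) < 0) {
                rte_mempool_free(mp);
                return NULL;
        }

        return mp;
}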
> <snip>
>> + /*
>> + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>> + * set the correct index into the table of ops structs.
>> + */
>> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>> + else if (flags & MEMPOOL_F_SP_PUT)
>> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>> + else if (flags & MEMPOOL_F_SC_GET)
>> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>> + else
>> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>> +
> Is there any way I can achieve the above use case of multiple pools which can be selected by an application - something like a run-time toggle/flag?
>
> -
> Shreyansh
Yes, you can create multiple pools, some using the default handlers and some using external handlers. There is an example of this in the autotests (app/test/test_mempool.c). This test creates multiple mempools, one of which uses a custom malloc-based mempool handler, and it puts and gets mbufs to/from each mempool, all in the same application.
Regards,
Dave.
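A rough sketch of what such an external handler can look like, loosely modelled on the malloc-based handler mentioned above. The callback prototypes here are assumptions for illustration only; the authoritative ones are the typedefs this patch adds to rte_mempool.h. No locking is done, so the sketch is only valid for single-producer/single-consumer use:

#include <errno.h>
#include <rte_malloc.h>
#include <rte_mempool.h>

/* Free objects are kept on a simple array-based stack allocated with
 * rte_malloc. */
struct stack_pool {
        unsigned len;
        void *objs[];           /* room for up to mp->size pointers */
};

static void *
stack_alloc(struct rte_mempool *mp)
{
        /* The returned pointer is stored by the library in mp->pool_data. */
        return rte_zmalloc("stack_pool",
                           sizeof(struct stack_pool) +
                           mp->size * sizeof(void *), 0);
}

static int
stack_put(void *p, void * const *obj_table, unsigned n)
{
        struct stack_pool *sp = p;
        unsigned i;

        /* The pool never holds more than mp->size objects, so no bound
         * check is needed here. */
        for (i = 0; i < n; i++)
                sp->objs[sp->len++] = obj_table[i];
        return 0;
}

static int
stack_get(void *p, void **obj_table, unsigned n)
{
        struct stack_pool *sp = p;
        unsigned i;

        if (sp->len < n)
                return -ENOBUFS;
        for (i = 0; i < n; i++)
                obj_table[i] = sp->objs[--sp->len];
        return 0;
}

static unsigned
stack_get_count(void *p)
{
        return ((struct stack_pool *)p)->len;
}

static const struct rte_mempool_ops ops_stack = {
        .name = "custom_stack",
        .alloc = stack_alloc,
        .put = stack_put,
        .get = stack_get,
        .get_count = stack_get_count,
};

MEMPOOL_REGISTER_OPS(ops_stack);

An application would then attach it to an empty mempool with rte_mempool_set_ops_byname(mp, "custom_stack") before populating the pool.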
Hi David,
Please find some comments below.
On 06/03/2016 04:58 PM, David Hunt wrote:
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> +/**
> + * Prototype for implementation specific data provisioning function.
> + *
> + * The function should provide the implementation specific memory for
> + * for use by the other mempool ops functions in a given mempool ops struct.
> + * E.g. the default ops provides an instance of the rte_ring for this purpose.
> + * it will mostlikely point to a different type of data structure, and
> + * will be transparent to the application programmer.
> + */
> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
In http://dpdk.org/ml/archives/dev/2016-June/040233.html, I suggested changing the prototype to return an int (-errno) and directly set mp->pool_data (void *) or mp->pool_if (uint64_t). No cast would be required in this latter case.
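As a sketch of what that suggested change could look like for the default ring-based handler (an assumption about the eventual shape, not the code in this patch):

#include <rte_common.h>
#include <rte_errno.h>
#include <rte_mempool.h>
#include <rte_ring.h>

/* Suggested shape: the alloc callback returns 0 or -errno and sets
 * mp->pool_data itself, so callers no longer need a cast. */
static int
common_ring_alloc(struct rte_mempool *mp)
{
        struct rte_ring *r;

        /* The SP/SC ring flags are omitted here for brevity. */
        r = rte_ring_create(mp->name, rte_align32pow2(mp->size + 1),
                            mp->socket_id, 0);
        if (r == NULL)
                return -rte_errno;

        mp->pool_data = r;
        return 0;
}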
By the way, there is a typo in the comment:
"mostlikely" -> "most likely"
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_default.c
> +static void
> +common_ring_free(struct rte_mempool *mp)
> +{
> + rte_ring_free((struct rte_ring *)mp->pool_data);
> +}
I don't think the cast is needed here.
(same in other functions)
> --- /dev/null
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> +/* add a new ops struct in rte_mempool_ops_table, return its index */
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> + struct rte_mempool_ops *ops;
> + int16_t ops_index;
> +
> + rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> + if (rte_mempool_ops_table.num_ops >=
> + RTE_MEMPOOL_MAX_OPS_IDX) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Maximum number of mempool ops structs exceeded\n");
> + return -ENOSPC;
> + }
> +
> + if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Missing callback while registering mempool ops\n");
> + return -EINVAL;
> + }
> +
> + if (strlen(h->name) >= sizeof(ops->name) - 1) {
> + RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> + __func__, h->name);
> + rte_errno = EEXIST;
> + return NULL;
> + }
This has already been noticed by Shreyansh, but in case of:
rte_mempool_ops.c:75:10: error: return makes integer from pointer
without a cast [-Werror=int-conversion]
return NULL;
^
> +/* sets mempool ops previously registered by rte_mempool_ops_register */
> +int
> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
When I compile with shared libraries enabled, I get the following error:
librte_reorder.so: undefined reference to `rte_mempool_ops_table'
librte_mbuf.so: undefined reference to `rte_mempool_set_ops_byname'
...
The new functions and global variables must be in
rte_mempool_version.map. This was in v5
( http://dpdk.org/ml/archives/dev/2016-May/039365.html ) but
was dropped in v6.
Regards,
Olivier
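For reference, the missing exports would look something like the following in rte_mempool_version.map (a sketch; the exact symbol list and version node name are assumptions):

DPDK_16.07 {
        global:

        rte_mempool_ops_table;
        rte_mempool_ops_register;
        rte_mempool_set_ops_byname;

} DPDK_16.04;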
Hi David,
Thanks for the explanation. I have some comments inline...
> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Sent: Tuesday, June 07, 2016 2:56 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> Hi Shreyansh,
>
> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > Hi,
> >
> > (Apologies for overly-eager email sent on this thread earlier. Will be more
> careful in future).
> >
> > This is more of a question/clarification than a comment. (And I have taken
> only some snippets from original mail to keep it cleaner)
> >
> > <snip>
> >> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > <snip>
> >
> > From the above what I understand is that multiple packet pool handlers can
> be created.
> >
> > I have a use-case where application has multiple pools but only the packet
> pool is hardware backed. Using the hardware for general buffer requirements
> would prove costly.
> > From what I understand from the patch, selection of the pool is based on
> the flags below.
>
> The flags are only used to select one of the default handlers for
> backward compatibility through
> the rte_mempool_create call. If you wish to use a mempool handler that
> is not one of the
> defaults, (i.e. a new hardware handler), you would use the
> rte_create_mempool_empty
> followed by the rte_mempool_set_ops_byname call.
> So, for the external handlers, you create and empty mempool, then set
> the operations (ops)
> for that particular mempool.
I am concerned about the existing applications (for example, l3fwd).
Explicitly following the 'rte_mempool_create_empty -> rte_mempool_set_ops_byname' model would require modifications to these applications.
Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
struct rte_mempool_ops custom_hw_allocator = {...}
thereafter, in config/common_base:
CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
calls to rte_pktmbuf_pool_create would use the new allocator.
But, another problem arises here.
There are two distinct paths for allocations of a memory pool:
1. A 'pkt' pool:
   rte_pktmbuf_pool_create
     \- rte_mempool_create_empty
     |    \- rte_mempool_set_ops_byname(..ring_mp_mc..)
     |
     `- rte_mempool_set_ops_byname
            (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
            /* Override default 'ring_mp_mc' of
             * rte_mempool_create */

2. Through the generic mempool create API
   rte_mempool_create
     \- rte_mempool_create_empty
          (passing pktmbuf and pool constructors)
I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
In control path (2), RTE_MBUF_DEFAULT_MEMPOOL_OPS wouldn't be able to install the custom handler operations for packet buffer allocations.
From a performance point of view, applications should be able to select between packet pools and non-packet pools.
>
> > <snip>
> >> + /*
> >> + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> >> + * set the correct index into the table of ops structs.
> >> + */
> >> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >> + else if (flags & MEMPOOL_F_SP_PUT)
> >> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >> + else if (flags & MEMPOOL_F_SC_GET)
> >> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >> + else
> >> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >> +
My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
...
#define MEMPOOL_F_SC_GET 0x0008
#define MEMPOOL_F_PKT_ALLOC 0x0010
...
in rte_mempool_create_empty:
... after checking the other MEMPOOL_F_* flags...
if (flags & MEMPOOL_F_PKT_ALLOC)
rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_pool_create().
Thereafter, rte_pktmbuf_pool_create can be changed to:
...
mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
- sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+ sizeof(struct rte_pktmbuf_pool_private), socket_id,
+ MEMPOOL_F_PKT_ALLOC);
if (mp == NULL)
return NULL;
> > Is there any way I can achieve the above use case of multiple pools which
> can be selected by an application - something like a run-time toggle/flag?
> >
> > -
> > Shreyansh
>
> Yes, you can create multiple pools, some using the default handlers, and
> some using external handlers.
> There is an example of this in the autotests (app/test/test_mempool.c).
> This test creates multiple
> mempools, of which one is a custom malloc based mempool handler. The
> test puts and gets mbufs
> to/from each mempool all in the same application.
Thanks for the explanation.
>
> Regards,
> Dave.
>
-
Shreyansh
Hi David,
Sorry for multiple mails on the same patch. I forgot a trivial comment in the previous mail.
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 03, 2016 8:28 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com; David Hunt <david.hunt@intel.com>
> Subject: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
[...]
> +int
> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
> +{
> + struct rte_mempool_ops *ops;
> + int16_t ops_index;
> +
> + rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
> + if (rte_mempool_ops_table.num_ops >=
> + RTE_MEMPOOL_MAX_OPS_IDX) {
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Maximum number of mempool ops structs exceeded\n");
> + return -ENOSPC;
> + }
> +
> + if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
I think 'h->alloc' should also be checked here.
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> + RTE_LOG(ERR, MEMPOOL,
> + "Missing callback while registering mempool ops\n");
> + return -EINVAL;
> + }
> +
> + if (strlen(h->name) >= sizeof(ops->name) - 1) {
> + RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> + __func__, h->name);
> + rte_errno = EEXIST;
> + return NULL;
> + }
> +
> + ops_index = rte_mempool_ops_table.num_ops++;
> + ops = &rte_mempool_ops_table.ops[ops_index];
> + snprintf(ops->name, sizeof(ops->name), "%s", h->name);
> + ops->alloc = h->alloc;
> + ops->put = h->put;
> + ops->get = h->get;
> + ops->get_count = h->get_count;
> +
> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +
> + return ops_index;
> +}
> +
[...]
-
Shreyansh
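Taken together, the two review comments above on rte_mempool_ops_register() (also check h->alloc, and return a negative errno instead of NULL) would make the validation block look roughly like this sketch; note that the spinlock also needs to be released on that error path:

        if (h->alloc == NULL || h->put == NULL ||
            h->get == NULL || h->get_count == NULL) {
                rte_spinlock_unlock(&rte_mempool_ops_table.sl);
                RTE_LOG(ERR, MEMPOOL,
                        "Missing callback while registering mempool ops\n");
                return -EINVAL;
        }

        if (strlen(h->name) >= sizeof(ops->name) - 1) {
                rte_spinlock_unlock(&rte_mempool_ops_table.sl);
                RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
                        __func__, h->name);
                rte_errno = EEXIST;
                return -EEXIST;
        }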
Hi Shreyansh,
On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> Hi David,
>
> Thanks for explanation. I have some comments inline...
>
>> -----Original Message-----
>> From: Hunt, David [mailto:david.hunt@intel.com]
>> Sent: Tuesday, June 07, 2016 2:56 PM
>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>> jerin.jacob@caviumnetworks.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>> operations
>>
>> Hi Shreyansh,
>>
>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>> Hi,
>>>
>>> (Apologies for overly-eager email sent on this thread earlier. Will be more
>> careful in future).
>>> This is more of a question/clarification than a comment. (And I have taken
>> only some snippets from original mail to keep it cleaner)
>>> <snip>
>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>> <snip>
>>>
>>> From the above what I understand is that multiple packet pool handlers can
>> be created.
>>> I have a use-case where application has multiple pools but only the packet
>> pool is hardware backed. Using the hardware for general buffer requirements
>> would prove costly.
>>> From what I understand from the patch, selection of the pool is based on
>> the flags below.
>>
>> The flags are only used to select one of the default handlers for
>> backward compatibility through
>> the rte_mempool_create call. If you wish to use a mempool handler that
>> is not one of the
>> defaults, (i.e. a new hardware handler), you would use the
>> rte_create_mempool_empty
>> followed by the rte_mempool_set_ops_byname call.
>> So, for the external handlers, you create and empty mempool, then set
>> the operations (ops)
>> for that particular mempool.
> I am concerned about the existing applications (for example, l3fwd).
> Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
> Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
>
> If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
>
> struct rte_mempool_ops custom_hw_allocator = {...}
>
> thereafter, in config/common_base:
>
> CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>
> calls to rte_pktmbuf_pool_create would use the new allocator.
Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
rte_mempool_create will continue to use the default handlers (ring based).
> But, another problem arises here.
>
> There are two distinct paths for allocations of a memory pool:
> 1. A 'pkt' pool:
> rte_pktmbuf_pool_create
> \- rte_mempool_create_empty
> | \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> |
> `- rte_mempool_set_ops_byname
> (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> /* Override default 'ring_mp_mc' of
> * rte_mempool_create */
>
> 2. Through generic mempool create API
> rte_mempool_create
> \- rte_mempool_create_empty
> (passing pktmbuf and pool constructors)
>
> I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
>
> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
>
> From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.
This is intended for backward compatibility and API consistency. Any applications that use rte_mempool_create directly will continue to use the default mempool handlers. If they need to use a custom handler, they will need to be modified to call the newer API, rte_mempool_create_empty and rte_mempool_set_ops_byname.
>>> <snip>
>>>> + /*
>>>> + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>>>> + * set the correct index into the table of ops structs.
>>>> + */
>>>> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>> + else if (flags & MEMPOOL_F_SP_PUT)
>>>> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>> + else if (flags & MEMPOOL_F_SC_GET)
>>>> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>> + else
>>>> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>> +
> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>
> ...
> #define MEMPOOL_F_SC_GET 0x0008
> #define MEMPOOL_F_PKT_ALLOC 0x0010
> ...
>
> in rte_mempool_create_empty:
> ... after checking the other MEMPOOL_F_* flags...
>
> if (flags & MEMPOOL_F_PKT_ALLOC)
> rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>
> And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
>
> Thereafter, rte_pktmbuf_pool_create can be changed to:
>
> ...
> mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> + sizeof(struct rte_pktmbuf_pool_private), socket_id,
> + MEMPOOL_F_PKT_ALLOC);
> if (mp == NULL)
> return NULL;
Yes, this would somewhat simplify the creation of a pktmbuf pool, in that it replaces the rte_mempool_set_ops_byname call with a flag bit. However, I'm not sure we want to introduce a third method of creating a mempool to developers. If we introduced this, we would then have:
1. rte_pktmbuf_pool_create()
2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would use the configured custom handler)
3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set, followed by a call to rte_mempool_set_ops_byname() (which would allow several different custom handlers to be used in one application)
Does anyone else have an opinion on this? Olivier, Jerin, Jan?
Regards,
Dave.
On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
> Hi Shreyansh,
>
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for explanation. I have some comments inline...
> >
> > > -----Original Message-----
> > > From: Hunt, David [mailto:david.hunt@intel.com]
> > > Sent: Tuesday, June 07, 2016 2:56 PM
> > > To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> > > Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> > > jerin.jacob@caviumnetworks.com
> > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> > > operations
> > >
> > > Hi Shreyansh,
> > >
> > > On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > > > Hi,
> > > >
> > > > (Apologies for overly-eager email sent on this thread earlier. Will be more
> > > careful in future).
> > > > This is more of a question/clarification than a comment. (And I have taken
> > > only some snippets from original mail to keep it cleaner)
> > > > <snip>
> > > > > +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> > > > > +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > > > <snip>
> > > >
> > > > From the above what I understand is that multiple packet pool handlers can
> > > be created.
> > > > I have a use-case where application has multiple pools but only the packet
> > > pool is hardware backed. Using the hardware for general buffer requirements
> > > would prove costly.
> > > > From what I understand from the patch, selection of the pool is based on
> > > the flags below.
> > >
> > > The flags are only used to select one of the default handlers for
> > > backward compatibility through
> > > the rte_mempool_create call. If you wish to use a mempool handler that
> > > is not one of the
> > > defaults, (i.e. a new hardware handler), you would use the
> > > rte_create_mempool_empty
> > > followed by the rte_mempool_set_ops_byname call.
> > > So, for the external handlers, you create and empty mempool, then set
> > > the operations (ops)
> > > for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
> >
> > If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
> >
> > struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> > CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > calls to rte_pktmbuf_pool_create would use the new allocator.
>
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring based).
> > But, another problem arises here.
> >
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> > rte_pktmbuf_pool_create
> > \- rte_mempool_create_empty
> > | \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> > |
> > `- rte_mempool_set_ops_byname
> > (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > /* Override default 'ring_mp_mc' of
> > * rte_mempool_create */
> >
> > 2. Through generic mempool create API
> > rte_mempool_create
> > \- rte_mempool_create_empty
> > (passing pktmbuf and pool constructors)
> > I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
> >
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
> >
> > From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.
>
> This is intended for backward compatibility, and API consistency. Any
> applications that use
> rte_mempool_create directly will continue to use the default mempool
> handlers. If the need
> to use a custeom hander, they will need to be modified to call the newer
> API,
> rte_mempool_create_empty and rte_mempool_set_ops_byname.
>
>
> > > > <snip>
> > > > > + /*
> > > > > + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> > > > > + * set the correct index into the table of ops structs.
> > > > > + */
> > > > > + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > > > + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> > > > > + else if (flags & MEMPOOL_F_SP_PUT)
> > > > > + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> > > > > + else if (flags & MEMPOOL_F_SC_GET)
> > > > > + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> > > > > + else
> > > > > + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> > > > > +
> > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >
> > ...
> > #define MEMPOOL_F_SC_GET 0x0008
> > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > ...
> >
> > in rte_mempool_create_empty:
> > ... after checking the other MEMPOOL_F_* flags...
> >
> > if (flags & MEMPOOL_F_PKT_ALLOC)
> > rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >
> > And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
> >
> > Thereafter, rte_pktmbuf_pool_create can be changed to:
> >
> > ...
> > mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > + sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > + MEMPOOL_F_PKT_ALLOC);
> > if (mp == NULL)
> > return NULL;
>
> Yes, this would simplify somewhat the creation of a pktmbuf pool, in that it
> replaces
> the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
> want
> to introduce a third method of creating a mempool to the developers. If we
> introduced this, we would then have:
> 1. rte_pktmbuf_pool_create()
> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> use the configured custom handler)
> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
> by a call to rte_mempool_set_ops_byname() (would allow several different
> custom
> handlers to be used in one application
>
> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
As I mentioned earlier, my take is not to create separate APIs for external mempool handlers. In my view, it is the same thing, just a separate mempool handler selected through function pointers.
To keep backward compatibility, I think we can extend the flags in rte_mempool_create and have a single API for external/internal pool creation (this also makes it easy for existing applications: just add a mempool flag command-line argument to choose the mempool handler).
Jerin
>
> Regards,
> Dave.
>
>
Hi Olivier,
On 8/6/2016 1:13 PM, Olivier Matz wrote:
> Hi David,
>
> Please find some comments below.
>
> On 06/03/2016 04:58 PM, David Hunt wrote:
>
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> +/**
>> + * Prototype for implementation specific data provisioning function.
>> + *
>> + * The function should provide the implementation specific memory for
>> + * for use by the other mempool ops functions in a given mempool ops struct.
>> + * E.g. the default ops provides an instance of the rte_ring for this purpose.
>> + * it will mostlikely point to a different type of data structure, and
>> + * will be transparent to the application programmer.
>> + */
>> +typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
> In http://dpdk.org/ml/archives/dev/2016-June/040233.html, I suggested
> to change the prototype to return an int (-errno) and directly set
> the set mp->pool_data (void *) or mp->pool_if (uint64_t). No cast
> would be required in this latter case.
Done.
> By the way, there is a typo in the comment:
> "mostlikely" -> "most likely"
Fixed.
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_default.c
>> +static void
>> +common_ring_free(struct rte_mempool *mp)
>> +{
>> + rte_ring_free((struct rte_ring *)mp->pool_data);
>> +}
> I don't think the cast is needed here.
> (same in other functions)
Removed.
>
>> --- /dev/null
>> +++ b/lib/librte_mempool/rte_mempool_ops.c
>> +/* add a new ops struct in rte_mempool_ops_table, return its index */
>> +int
>> +rte_mempool_ops_register(const struct rte_mempool_ops *h)
>> +{
>> + struct rte_mempool_ops *ops;
>> + int16_t ops_index;
>> +
>> + rte_spinlock_lock(&rte_mempool_ops_table.sl);
>> +
>> + if (rte_mempool_ops_table.num_ops >=
>> + RTE_MEMPOOL_MAX_OPS_IDX) {
>> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> + RTE_LOG(ERR, MEMPOOL,
>> + "Maximum number of mempool ops structs exceeded\n");
>> + return -ENOSPC;
>> + }
>> +
>> + if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
>> + rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>> + RTE_LOG(ERR, MEMPOOL,
>> + "Missing callback while registering mempool ops\n");
>> + return -EINVAL;
>> + }
>> +
>> + if (strlen(h->name) >= sizeof(ops->name) - 1) {
>> + RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
>> + __func__, h->name);
>> + rte_errno = EEXIST;
>> + return NULL;
>> + }
> This has already been noticed by Shreyansh, but in case of:
>
> rte_mempool_ops.c:75:10: error: return makes integer from pointer
> without a cast [-Werror=int-conversion]
> return NULL;
> ^
Changed to return -EEXIST
>
>> +/* sets mempool ops previously registered by rte_mempool_ops_register */
>> +int
>> +rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
>
> When I compile with shared libraries enabled, I get the following error:
>
> librte_reorder.so: undefined reference to `rte_mempool_ops_table'
> librte_mbuf.so: undefined reference to `rte_mempool_set_ops_byname'
> ...
>
> The new functions and global variables must be in
> rte_mempool_version.map. This was in v5
> ( http://dpdk.org/ml/archives/dev/2016-May/039365.html ) but
> was dropped in v6.
OK, Added.
>
> Regards,
> Olivier
Thanks,
David.
On 9/6/2016 11:31 AM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for explanation. I have some comments inline...
>>>
>>>> -----Original Message-----
>>>> From: Hunt, David [mailto:david.hunt@intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>>>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>>>> jerin.jacob@caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>>>> operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for overly-eager email sent on this thread earlier. Will be more
>>>> careful in future).
>>>>> This is more of a question/clarification than a comment. (And I have taken
>>>> only some snippets from original mail to keep it cleaner)
>>>>> <snip>
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>> <snip>
>>>>>
>>>>> From the above what I understand is that multiple packet pool handlers can
>>>> be created.
>>>>> I have a use-case where application has multiple pools but only the packet
>>>> pool is hardware backed. Using the hardware for general buffer requirements
>>>> would prove costly.
>>>>> From what I understand from the patch, selection of the pool is based on
>>>> the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers for
>>>> backward compatibility through
>>>> the rte_mempool_create call. If you wish to use a mempool handler that
>>>> is not one of the
>>>> defaults, (i.e. a new hardware handler), you would use the
>>>> rte_create_mempool_empty
>>>> followed by the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create and empty mempool, then set
>>>> the operations (ops)
>>>> for that particular mempool.
>>> I am concerned about the existing applications (for example, l3fwd).
>>> Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
>>> Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
>>>
>>> If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
>>>
>>> struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>> CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> calls to rte_pktmbuf_pool_create would use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create will continue to use the default handlers (ring based).
>>> But, another problem arises here.
>>>
>>> There are two distinct paths for allocations of a memory pool:
>>> 1. A 'pkt' pool:
>>> rte_pktmbuf_pool_create
>>> \- rte_mempool_create_empty
>>> | \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>>> |
>>> `- rte_mempool_set_ops_byname
>>> (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>>> /* Override default 'ring_mp_mc' of
>>> * rte_mempool_create */
>>>
>>> 2. Through generic mempool create API
>>> rte_mempool_create
>>> \- rte_mempool_create_empty
>>> (passing pktmbuf and pool constructors)
>>> I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
>>>
>>> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
>>>
>>> From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.
>> This is intended for backward compatibility, and API consistency. Any
>> applications that use
>> rte_mempool_create directly will continue to use the default mempool
>> handlers. If the need
>> to use a custeom hander, they will need to be modified to call the newer
>> API,
>> rte_mempool_create_empty and rte_mempool_set_ops_byname.
>>
>>
>>>>> <snip>
>>>>>> + /*
>>>>>> + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
>>>>>> + * set the correct index into the table of ops structs.
>>>>>> + */
>>>>>> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>>>> + else if (flags & MEMPOOL_F_SP_PUT)
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>>>> + else if (flags & MEMPOOL_F_SC_GET)
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>>>> + else
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>>>> +
>>> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>>
>>> ...
>>> #define MEMPOOL_F_SC_GET 0x0008
>>> #define MEMPOOL_F_PKT_ALLOC 0x0010
>>> ...
>>>
>>> in rte_mempool_create_empty:
>>> ... after checking the other MEMPOOL_F_* flags...
>>>
>>> if (flags & MEMPOOL_F_PKT_ALLOC)
>>> rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>>
>>> And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
>>>
>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>
>>> ...
>>> mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>> + sizeof(struct rte_pktmbuf_pool_private), socket_id,
>>> + MEMPOOL_F_PKT_ALLOC);
>>> if (mp == NULL)
>>> return NULL;
>> Yes, this would simplify somewhat the creation of a pktmbuf pool, in that it
>> replaces
>> the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
>> want
>> to introduce a third method of creating a mempool to the developers. If we
>> introduced this, we would then have:
>> 1. rte_pktmbuf_pool_create()
>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>> use the configured custom handler)
>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
>> by a call to rte_mempool_set_ops_byname() (would allow several different
>> custom
>> handlers to be used in one application
>>
>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> As I mentioned earlier, My take is not to create the separate API's for
> external mempool handlers.In my view, It's same, just that sepreate
> mempool handler through function pointers.
>
> To keep the backward compatibility, I think we can extend the flags
> in rte_mempool_create and have a single API external/internal pool
> creation(makes easy for existing applications too, add a just mempool
> flag command line argument to existing applications to choose the
> mempool handler)
>
> Jerin
Would a good compromise be what Shreyansh is suggesting: adding just one bit to the flags to allow rte_mempool_create to use the mempool handler defined in CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS?
That way we wouldn't have the complexity of the previously proposed solution of parsing sections of the flags bits into bytes to get the requested handler index, and we wouldn't have to query by name to get the index of the handler we're interested in. It's just a simple "use the configured custom handler or not" bit.
This way, applications can set the bit to use the custom handler defined in the config, but still have the flexibility of using the other new API calls to have several handlers in use in one application.
Regards,
David.
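As a sketch of that compromise, the ops selection inside rte_mempool_create_empty() might become something like the following. MEMPOOL_F_PKT_ALLOC and RTE_MBUF_DEFAULT_MEMPOOL_OPS are the proposed names from this discussion, not part of the patch, and their values here are placeholders:

#include <rte_mempool.h>

/* Proposed additions (definitions are placeholders for illustration). */
#define MEMPOOL_F_PKT_ALLOC            0x0010
#define RTE_MBUF_DEFAULT_MEMPOOL_OPS   "ring_mp_mc"  /* from build config */

/* Hypothetical helper mirroring the flag checks quoted earlier, with the
 * single "use the configured handler" bit checked first. */
static int
mempool_ops_from_flags(struct rte_mempool *mp, unsigned flags)
{
        if (flags & MEMPOOL_F_PKT_ALLOC)
                return rte_mempool_set_ops_byname(mp,
                                RTE_MBUF_DEFAULT_MEMPOOL_OPS);
        if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
                return rte_mempool_set_ops_byname(mp, "ring_sp_sc");
        if (flags & MEMPOOL_F_SP_PUT)
                return rte_mempool_set_ops_byname(mp, "ring_sp_mc");
        if (flags & MEMPOOL_F_SC_GET)
                return rte_mempool_set_ops_byname(mp, "ring_mp_sc");
        return rte_mempool_set_ops_byname(mp, "ring_mp_mc");
}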
Hi David,
> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Sent: Thursday, June 09, 2016 3:10 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> Hi Shreyansh,
>
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for explanation. I have some comments inline...
> >
> >> -----Original Message-----
> >> From: Hunt, David [mailto:david.hunt@intel.com]
> >> Sent: Tuesday, June 07, 2016 2:56 PM
> >> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> >> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> >> jerin.jacob@caviumnetworks.com
> >> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> >> operations
> >>
> >> Hi Shreyansh,
> >>
> >> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> >>> Hi,
> >>>
> >>> (Apologies for overly-eager email sent on this thread earlier. Will be
> more
> >> careful in future).
> >>> This is more of a question/clarification than a comment. (And I have
> taken
> >> only some snippets from original mail to keep it cleaner)
> >>> <snip>
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> >>> <snip>
> >>>
> >>> From the above what I understand is that multiple packet pool handlers
> can
> >> be created.
> >>> I have a use-case where application has multiple pools but only the
> packet
> >> pool is hardware backed. Using the hardware for general buffer
> requirements
> >> would prove costly.
> >>> From what I understand from the patch, selection of the pool is based
> on
> >> the flags below.
> >>
> >> The flags are only used to select one of the default handlers for
> >> backward compatibility through
> >> the rte_mempool_create call. If you wish to use a mempool handler that
> >> is not one of the
> >> defaults, (i.e. a new hardware handler), you would use the
> >> rte_create_mempool_empty
> >> followed by the rte_mempool_set_ops_byname call.
> >> So, for the external handlers, you create and empty mempool, then set
> >> the operations (ops)
> >> for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname'
> model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to
> use packet pools (backed by hardware) and buffer pools (backed by
> ring/others) - transparently.
> >
> > If I go by your suggestions, what I understand is, doing the above without
> modification to applications would be equivalent to:
> >
> > struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> > CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > calls to rte_pktmbuf_pool_create would use the new allocator.
>
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring based).
Agreed.
But some applications continue to use rte_mempool_create for allocating packet pools. Thus, even with a custom handler available (which would most probably be a hardware packet buffer handler), the application would unintentionally end up not using it.
Probably such applications should be changed? (e.g. pipeline).
> > But, another problem arises here.
> >
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> > rte_pktmbuf_pool_create
> > \- rte_mempool_create_empty
> > | \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> > |
> > `- rte_mempool_set_ops_byname
> > (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > /* Override default 'ring_mp_mc' of
> > * rte_mempool_create */
> >
> > 2. Through generic mempool create API
> > rte_mempool_create
> > \- rte_mempool_create_empty
> > (passing pktmbuf and pool constructors)
> >
> > I found various instances in example applications where
> rte_mempool_create() is being called directly for packet pools - bypassing
> the more semantically correct call to rte_pktmbuf_* for packet pools.
> >
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to
> replace custom handler operations for packet buffer allocations.
> >
> > From a performance point-of-view, Applications should be able to select
> between packet pools and non-packet pools.
>
> This is intended for backward compatibility, and API consistency. Any
> applications that use
> rte_mempool_create directly will continue to use the default mempool
> handlers. If the need
> to use a custeom hander, they will need to be modified to call the newer
> API,
> rte_mempool_create_empty and rte_mempool_set_ops_byname.
My understanding was that applications should be oblivious of how their pools are managed, except that they do understand that packet pools should be faster (or accelerated) than non-packet pools.
(Of course, some applications may be designed to explicitly take advantage of an available handler through rte_mempool_create_empty => rte_mempool_set_ops_byname calls.)
From that perspective, I was expecting applications to call:
-> rte_pktmbuf_* for all packet-related operations
-> rte_mempool_* for non-packet pools or explicit hardware handlers
and leave the rest of the mempool-handler-related magic to the DPDK framework.
>
>
> >>> <snip>
> >>>> + /*
> >>>> + * Since we have 4 combinations of the SP/SC/MP/MC examine the
> flags to
> >>>> + * set the correct index into the table of ops structs.
> >>>> + */
> >>>> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >>>> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >>>> + else if (flags & MEMPOOL_F_SP_PUT)
> >>>> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >>>> + else if (flags & MEMPOOL_F_SC_GET)
> >>>> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >>>> + else
> >>>> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >>>> +
> > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which,
> if specified, would:
I read through some previous discussions and realized that something similar [1] had already been proposed earlier.
I didn't want to hijack this thread with an old discussion - it was unintentional.
[1] http://article.gmane.org/gmane.comp.networking.dpdk.devel/39803
But [1] would make the distinction between the *type* of pool and its corresponding handler, whether default or external/custom, quite clear.
> >
> > ...
> > #define MEMPOOL_F_SC_GET 0x0008
> > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > ...
> >
> > in rte_mempool_create_empty:
> > ... after checking the other MEMPOOL_F_* flags...
> >
> > if (flags & MEMPOOL_F_PKT_ALLOC)
> > rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >
> > And removing the redundant call to rte_mempool_set_ops_byname() in
> rte_pktmbuf_create_pool().
> >
> > Thereafter, rte_pktmbuf_pool_create can be changed to:
> >
> > ...
> > mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > + sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > + MEMPOOL_F_PKT_ALLOC);
> > if (mp == NULL)
> > return NULL;
>
> Yes, this would simplify somewhat the creation of a pktmbuf pool, in
> that it replaces
> the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
> want
> to introduce a third method of creating a mempool to the developers. If we
> introduced this, we would then have:
> 1. rte_pktmbuf_pool_create()
> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> use the configured custom handler)
> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
> by a call to rte_mempool_set_ops_byname() (would allow several
> different custom
> handlers to be used in one application
>
> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>
> Regards,
> Dave.
>
-
Shreyansh
Hi Jerin,
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, June 09, 2016 4:02 PM
> To: Hunt, David <david.hunt@intel.com>
> Cc: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org;
> olivier.matz@6wind.com; viktorin@rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> On Thu, Jun 09, 2016 at 10:39:46AM +0100, Hunt, David wrote:
> > Hi Shreyansh,
> >
> > On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > > Hi David,
> > >
> > > Thanks for explanation. I have some comments inline...
> > >
> > > > -----Original Message-----
> > > > From: Hunt, David [mailto:david.hunt@intel.com]
> > > > Sent: Tuesday, June 07, 2016 2:56 PM
> > > > To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> > > > Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> > > > jerin.jacob@caviumnetworks.com
> > > > Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external
> mempool
> > > > operations
> > > >
> > > > Hi Shreyansh,
> > > >
> > > > On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> > > > > Hi,
> > > > >
> > > > > (Apologies for overly-eager email sent on this thread earlier. Will
> be more
> > > > careful in future).
> > > > > This is more of a question/clarification than a comment. (And I have
> taken
> > > > only some snippets from original mail to keep it cleaner)
> > > > > <snip>
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> > > > > > +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> > > > > <snip>
> > > > >
> > > > > From the above what I understand is that multiple packet pool
> handlers can
> > > > be created.
> > > > > I have a use-case where application has multiple pools but only the
> packet
> > > > pool is hardware backed. Using the hardware for general buffer
> requirements
> > > > would prove costly.
> > > > > From what I understand from the patch, selection of the pool is
> based on
> > > > the flags below.
> > > >
> > > > The flags are only used to select one of the default handlers for
> > > > backward compatibility through
> > > > the rte_mempool_create call. If you wish to use a mempool handler that
> > > > is not one of the
> > > > defaults, (i.e. a new hardware handler), you would use the
> > > > rte_create_mempool_empty
> > > > followed by the rte_mempool_set_ops_byname call.
> > > > So, for the external handlers, you create and empty mempool, then set
> > > > the operations (ops)
> > > > for that particular mempool.
> > > I am concerned about the existing applications (for example, l3fwd).
> > > Explicit calls to 'rte_create_mempool_empty->rte_mempool_set_ops_byname'
> model would require modifications to these applications.
> > > Ideally, without any modifications, these applications should be able to
> use packet pools (backed by hardware) and buffer pools (backed by
> ring/others) - transparently.
> > >
> > > If I go by your suggestions, what I understand is, doing the above
> without modification to applications would be equivalent to:
> > >
> > > struct rte_mempool_ops custom_hw_allocator = {...}
> > >
> > > thereafter, in config/common_base:
> > >
> > > CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> > >
> > > calls to rte_pktmbuf_pool_create would use the new allocator.
> >
> > Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> > rte_mempool_create will continue to use the default handlers (ring based).
> > > But, another problem arises here.
> > >
> > > There are two distinct paths for allocations of a memory pool:
> > > 1. A 'pkt' pool:
> > > rte_pktmbuf_pool_create
> > > \- rte_mempool_create_empty
> > > | \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> > > |
> > > `- rte_mempool_set_ops_byname
> > > (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > > /* Override default 'ring_mp_mc' of
> > > * rte_mempool_create */
> > >
> > > 2. Through generic mempool create API
> > > rte_mempool_create
> > > \- rte_mempool_create_empty
> > > (passing pktmbuf and pool constructors)
> > > I found various instances in example applications where
> rte_mempool_create() is being called directly for packet pools - bypassing
> the more semantically correct call to rte_pktmbuf_* for packet pools.
> > >
> > > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to
> replace custom handler operations for packet buffer allocations.
> > >
> > > From a performance point-of-view, Applications should be able to select
> between packet pools and non-packet pools.
> >
> > This is intended for backward compatibility, and API consistency. Any
> > applications that use
> > rte_mempool_create directly will continue to use the default mempool
> > handlers. If the need
> > to use a custeom hander, they will need to be modified to call the newer
> > API,
> > rte_mempool_create_empty and rte_mempool_set_ops_byname.
> >
> >
> > > > > <snip>
> > > > > > + /*
> > > > > > + * Since we have 4 combinations of the SP/SC/MP/MC examine the
> flags to
> > > > > > + * set the correct index into the table of ops structs.
> > > > > > + */
> > > > > > + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> > > > > > + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> > > > > > + else if (flags & MEMPOOL_F_SP_PUT)
> > > > > > + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> > > > > > + else if (flags & MEMPOOL_F_SC_GET)
> > > > > > + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> > > > > > + else
> > > > > > + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> > > > > > +
> > > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC',
> which, if specified, would:
> > >
> > > ...
> > > #define MEMPOOL_F_SC_GET 0x0008
> > > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > > ...
> > >
> > > in rte_mempool_create_empty:
> > > ... after checking the other MEMPOOL_F_* flags...
> > >
> > > if (flags & MEMPOOL_F_PKT_ALLOC)
> > > rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> > >
> > > And removing the redundant call to rte_mempool_set_ops_byname() in
> rte_pktmbuf_create_pool().
> > >
> > > Thereafter, rte_pktmbuf_pool_create can be changed to:
> > >
> > > ...
> > > mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > > - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > > + sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > > + MEMPOOL_F_PKT_ALLOC);
> > > if (mp == NULL)
> > > return NULL;
> >
> > Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
> it
> > replaces
> > the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
> > want
> > to introduce a third method of creating a mempool to the developers. If we
> > introduced this, we would then have:
> > 1. rte_pktmbuf_pool_create()
> > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> > use the configured custom handler)
> > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
> > by a call to rte_mempool_set_ops_byname() (would allow several different
> > custom
> > handlers to be used in one application
> >
> > Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>
> As I mentioned earlier, My take is not to create the separate API's for
> external mempool handlers.In my view, It's same, just that sepreate
> mempool handler through function pointers.
>
> To keep the backward compatibility, I think we can extend the flags
> in rte_mempool_create and have a single API external/internal pool
> creation(makes easy for existing applications too, add a just mempool
> flag command line argument to existing applications to choose the
> mempool handler)
Maybe I am interpreting it wrong, but are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as a command-line argument)?
That would be inefficient, especially for cases where the pool is backed by hardware. The application wouldn't want its generic buffers to consume hardware resources which would be better used for packets.
I was hoping that external mempool handlers would help segregate such use-cases and allow applications to tap into accelerated packet processors.
Do correct me if my understanding is wrong.
>
> Jerin
>
> >
> > Regards,
> > Dave.
> >
> >
-
Shreyansh
On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
> Hi Jerin,
Hi Shreyansh,
>
> > > Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
> > it
> > > replaces
> > > the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
> > > want
> > > to introduce a third method of creating a mempool to the developers. If we
> > > introduced this, we would then have:
> > > 1. rte_pktmbuf_pool_create()
> > > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> > > use the configured custom handler)
> > > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
> > > by a call to rte_mempool_set_ops_byname() (would allow several different
> > > custom
> > > handlers to be used in one application
> > >
> > > Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> >
> > As I mentioned earlier, My take is not to create the separate API's for
> > external mempool handlers.In my view, It's same, just that sepreate
> > mempool handler through function pointers.
> >
> > To keep the backward compatibility, I think we can extend the flags
> > in rte_mempool_create and have a single API external/internal pool
> > creation(makes easy for existing applications too, add a just mempool
> > flag command line argument to existing applications to choose the
> > mempool handler)
>
> May be I am interpreting it wrong, but, are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as command line argument)?
> That would be inefficient especially for cases where pool is backed by a hardware. The application wouldn't want its generic buffers to consume hardware resource which would be better used for packets.
It may vary from platform to platform or with the particular use case. For instance,
the HW external pool manager for generic buffers may scale better than the SW
multi-producer/multi-consumer implementation when the number of cores > N
(as no locking is involved in enqueue/dequeue; again, this depends on the
specific HW implementation).
I thought there is no harm in selecting the external pool handlers
at the root level itself (rte_mempool_create), as by default it is
SW MP/MC and it is just an option to override if the application wants it.
Jerin
>
> I was hoping that external mempool handler would help segregate such use-cases and allow applications to tap into accelerated packet processors.
>
> Do correct me if I my understanding is wrong.
>
> >
> > Jerin
> >
> > >
> > > Regards,
> > > Dave.
> > >
> > >
>
> -
> Shreyansh
On 9/6/2016 12:41 PM, Shreyansh Jain wrote:
> Hi David,
>
>> -----Original Message-----
>> From: Hunt, David [mailto:david.hunt@intel.com]
>> Sent: Thursday, June 09, 2016 3:10 PM
>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>> jerin.jacob@caviumnetworks.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>> operations
>>
>> Hi Shreyansh,
>>
>> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
>>> Hi David,
>>>
>>> Thanks for explanation. I have some comments inline...
>>>
>>>> -----Original Message-----
>>>> From: Hunt, David [mailto:david.hunt@intel.com]
>>>> Sent: Tuesday, June 07, 2016 2:56 PM
>>>> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
>>>> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
>>>> jerin.jacob@caviumnetworks.com
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
>>>> operations
>>>>
>>>> Hi Shreyansh,
>>>>
>>>> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
>>>>> Hi,
>>>>>
>>>>> (Apologies for overly-eager email sent on this thread earlier. Will be
>> more
>>>> careful in future).
>>>>> This is more of a question/clarification than a comment. (And I have
>> taken
>>>> only some snippets from original mail to keep it cleaner)
>>>>> <snip>
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
>>>>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
>>>>> <snip>
>>>>>
>>>>> From the above what I understand is that multiple packet pool handlers
>> can
>>>> be created.
>>>>> I have a use-case where application has multiple pools but only the
>> packet
>>>> pool is hardware backed. Using the hardware for general buffer
>> requirements
>>>> would prove costly.
>>>>> From what I understand from the patch, selection of the pool is based
>> on
>>>> the flags below.
>>>>
>>>> The flags are only used to select one of the default handlers for
>>>> backward compatibility through
>>>> the rte_mempool_create call. If you wish to use a mempool handler that
>>>> is not one of the
>>>> defaults, (i.e. a new hardware handler), you would use the
>>>> rte_mempool_create_empty
>>>> followed by the rte_mempool_set_ops_byname call.
>>>> So, for the external handlers, you create an empty mempool, then set
>>>> the operations (ops)
>>>> for that particular mempool.
>>> I am concerned about the existing applications (for example, l3fwd).
>>> Explicit calls to 'rte_mempool_create_empty->rte_mempool_set_ops_byname'
>> model would require modifications to these applications.
>>> Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by
>>> ring/others) - transparently.
>>> If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
>>> struct rte_mempool_ops custom_hw_allocator = {...}
>>>
>>> thereafter, in config/common_base:
>>>
>>> CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
>>>
>>> calls to rte_pktmbuf_pool_create would use the new allocator.
>> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
>> rte_mempool_create will continue to use the default handlers (ring based).
> Agree with you.
> But, some applications continue to use rte_mempool_create for allocating packet pools. Thus, even with a custom handler available (which, most probably, would be a hardware packet buffer handler), application would unintentionally end up not using it.
> Probably, such applications should be changed? (e.g. pipeline).
Yes, agreed. If those applications need to use external mempool
handlers, then they should be changed. I would see that as outside the
scope of this patchset.
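For illustration, the change in such an application amounts to something like
the sketch below ("my_hw_ops" and the wrapper name are placeholders, not part
of this patchset; the two-argument rte_mempool_set_ops_byname() is the form
used in this series):

    #include <rte_mempool.h>

    /* sketch: the two-step creation for an app that previously called
     * rte_mempool_create() directly */
    static struct rte_mempool *
    app_create_pool_on_hw(const char *name, unsigned n, unsigned elt_size,
            unsigned cache_size, unsigned priv_size, int socket_id)
    {
            struct rte_mempool *mp;

            mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                    priv_size, socket_id, 0);
            if (mp == NULL)
                    return NULL;

            /* "my_hw_ops" stands for whatever name the hardware handler
             * registered with MEMPOOL_REGISTER_OPS() */
            if (rte_mempool_set_ops_byname(mp, "my_hw_ops") < 0 ||
                rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            return mp;
    }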
>>> But, another problem arises here.
>>>
>>> There are two distinct paths for allocations of a memory pool:
>>> 1. A 'pkt' pool:
>>> rte_pktmbuf_pool_create
>>> \- rte_mempool_create_empty
>>> | \- rte_mempool_set_ops_byname(..ring_mp_mc..)
>>> |
>>> `- rte_mempool_set_ops_byname
>>> (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
>>> /* Override default 'ring_mp_mc' of
>>> * rte_mempool_create */
>>>
>>> 2. Through generic mempool create API
>>> rte_mempool_create
>>> \- rte_mempool_create_empty
>>> (passing pktmbuf and pool constructors)
>>>
>>> I found various instances in example applications where
>> rte_mempool_create() is being called directly for packet pools - bypassing
>> the more semantically correct call to rte_pktmbuf_* for packet pools.
>>> In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to
>> replace custom handler operations for packet buffer allocations.
>>> From a performance point-of-view, Applications should be able to select
>> between packet pools and non-packet pools.
>>
>> This is intended for backward compatibility, and API consistency. Any
>> applications that use
>> rte_mempool_create directly will continue to use the default mempool
>> handlers. If they need
>> to use a custom handler, they will need to be modified to call the newer
>> API,
>> rte_mempool_create_empty and rte_mempool_set_ops_byname.
> My understanding was that applications should be oblivious of how their pools are managed, except that they do understand packet pools should be faster (or accelerated) than non-packet pools.
> (Of course, some applications may be designed to explicitly take advantage of an available handler through rte_mempool_create_empty=>rte_mempool_set_ops_byname calls.)
> In that perspective, I was expecting that applications should be calling:
> -> rte_pktmbuf_* for all packet relation operations
> -> rte_mempool_* for non-packet or explicit hardware handlers
>
> And leave rest of the mempool handler related magic to DPDK framework.
I think there is still some work needed on the application side to know
whether to use an external or the internal (default) handler for
particular mempools.
I'll propose something in the other part of this thread, based on the
further comments and feedback so far.
Regards,
Dave.
>
>>
>>>>> <snip>
>>>>>> + /*
>>>>>> + * Since we have 4 combinations of the SP/SC/MP/MC examine the
>> flags to
>>>>>> + * set the correct index into the table of ops structs.
>>>>>> + */
>>>>>> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
>>>>>> + else if (flags & MEMPOOL_F_SP_PUT)
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
>>>>>> + else if (flags & MEMPOOL_F_SC_GET)
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
>>>>>> + else
>>>>>> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>>>>>> +
>>> My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which,
>> if specified, would:
> I read through some previous discussions and realized that something similar [1] had already been proposed earlier.
> I didn't want to hijack this thread with an old discussions - it was unintentional.
>
> [1] http://article.gmane.org/gmane.comp.networking.dpdk.devel/39803
>
> But, [1] would make the distinction of *type* of pool and its corresponding handler, whether default or external/custom, quite clear.
I can incorporate a bit in the flags (MEMPOOL_F_PKT_ALLOC) as you
suggest, which would allow the rte_mempool_create calls to use a custom
handler.
Hi Jerin,
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, June 09, 2016 6:01 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>
> Cc: Hunt, David <david.hunt@intel.com>; dev@dpdk.org; olivier.matz@6wind.com;
> viktorin@rehivetech.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
> > Hi Jerin,
>
> Hi Shreyansh,
>
> >
> > > > Yes, this would simplify somewhat the creation of a pktmbuf pool, in
> that
> > > it
> > > > replaces
> > > > the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure
> we
> > > > want
> > > > to introduce a third method of creating a mempool to the developers. If
> we
> > > > introduced this, we would then have:
> > > > 1. rte_pktmbuf_pool_create()
> > > > 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> > > > use the configured custom handler)
> > > > 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> followed
> > > > by a call to rte_mempool_set_ops_byname() (would allow several
> different
> > > > custom
> > > > handlers to be used in one application
> > > >
> > > > Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> > >
> > > As I mentioned earlier, My take is not to create the separate API's for
> > > external mempool handlers.In my view, It's same, just that sepreate
> > > mempool handler through function pointers.
> > >
> > > To keep the backward compatibility, I think we can extend the flags
> > > in rte_mempool_create and have a single API external/internal pool
> > > creation(makes easy for existing applications too, add a just mempool
> > > flag command line argument to existing applications to choose the
> > > mempool handler)
> >
> > May be I am interpreting it wrong, but, are you suggesting a single mempool
> handler for all buffer/packet needs of an application (passed as command line
> argument)?
> > That would be inefficient especially for cases where pool is backed by a
> hardware. The application wouldn't want its generic buffers to consume
> hardware resource which would be better used for packets.
>
> It may vary from platform to platform or particular use case. For instance,
> the HW external pool manager for generic buffers may scale better than SW
> multi
> producers/multi-consumer implementation when the number of cores > N
> (as no locking involved in enqueue/dequeue(again it is depended on
> specific HW implementation))
I agree with you that the above cases would exist.
But, even in these cases, I think it would be the application's prerogative to decide whether it would like its buffers to be managed by a hardware allocator or by SW [SM]P/[SM]C implementations. Probably, in this case, the application would call rte_mempool_*(PKT_POOL) for generic buffers as well (or maybe use a dedicated buffer pool flag) - just as an example.
>
> I thought their no harm in selecting the external pool handlers
> in root level itself(rte_mempool_create) as by default it is
> SW MP/MC and it just an option to override if the application wants it.
It sounds fine if calls to rte_mempool_* can select an external handler *optionally* - but, if we pass it as command line, it would be binding (at least, semantically) for rte_pktmbuf_* calls as well. Isn't it?
[Probably, I am still unclear how it would remain 'optional' in command line case you suggested.]
>
> Jerin
>
>
[...]
-
Shreyansh
On Thu, 9 Jun 2016 10:39:46 +0100
"Hunt, David" <david.hunt@intel.com> wrote:
> Hi Shreyansh,
>
> On 8/6/2016 2:48 PM, Shreyansh Jain wrote:
> > Hi David,
> >
> > Thanks for explanation. I have some comments inline...
> >
> >> -----Original Message-----
> >> From: Hunt, David [mailto:david.hunt@intel.com]
> >> Sent: Tuesday, June 07, 2016 2:56 PM
> >> To: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org
> >> Cc: olivier.matz@6wind.com; viktorin@rehivetech.com;
> >> jerin.jacob@caviumnetworks.com
> >> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> >> operations
> >>
> >> Hi Shreyansh,
> >>
> >> On 6/6/2016 3:38 PM, Shreyansh Jain wrote:
> >>> Hi,
> >>>
> >>> (Apologies for overly-eager email sent on this thread earlier. Will be more
> >> careful in future).
> >>> This is more of a question/clarification than a comment. (And I have taken
> >> only some snippets from original mail to keep it cleaner)
> >>> <snip>
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_mc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_mp_sc);
> >>>> +MEMPOOL_REGISTER_OPS(ops_sp_mc);
> >>> <snip>
> >>>
> >>> From the above what I understand is that multiple packet pool handlers can
> >> be created.
> >>> I have a use-case where application has multiple pools but only the packet
> >> pool is hardware backed. Using the hardware for general buffer requirements
> >> would prove costly.
> >>> From what I understand from the patch, selection of the pool is based on
> >> the flags below.
> >>
> >> The flags are only used to select one of the default handlers for
> >> backward compatibility through
> >> the rte_mempool_create call. If you wish to use a mempool handler that
> >> is not one of the
> >> defaults, (i.e. a new hardware handler), you would use the
> >> rte_mempool_create_empty
> >> followed by the rte_mempool_set_ops_byname call.
> >> So, for the external handlers, you create an empty mempool, then set
> >> the operations (ops)
> >> for that particular mempool.
> > I am concerned about the existing applications (for example, l3fwd).
> > Explicit calls to 'rte_mempool_create_empty->rte_mempool_set_ops_byname' model would require modifications to these applications.
> > Ideally, without any modifications, these applications should be able to use packet pools (backed by hardware) and buffer pools (backed by ring/others) - transparently.
> >
> > If I go by your suggestions, what I understand is, doing the above without modification to applications would be equivalent to:
> >
> > struct rte_mempool_ops custom_hw_allocator = {...}
> >
> > thereafter, in config/common_base:
> >
> > CONFIG_RTE_DEFAULT_MEMPOOL_OPS="custom_hw_allocator"
> >
> > calls to rte_pktmbuf_pool_create would use the new allocator.
>
> Yes, correct. But only for calls to rte_pktmbuf_pool_create(). Calls to
> rte_mempool_create will continue to use the default handlers (ring based).
> > But, another problem arises here.
> >
> > There are two distinct paths for allocations of a memory pool:
> > 1. A 'pkt' pool:
> > rte_pktmbuf_pool_create
> > \- rte_mempool_create_empty
> > | \- rte_mempool_set_ops_byname(..ring_mp_mc..)
> > |
> > `- rte_mempool_set_ops_byname
> > (...RTE_MBUF_DEFAULT_MEMPOOL_OPS..)
> > /* Override default 'ring_mp_mc' of
> > * rte_mempool_create */
> >
> > 2. Through generic mempool create API
> > rte_mempool_create
> > \- rte_mempool_create_empty
> > (passing pktmbuf and pool constructors)
> >
> > I found various instances in example applications where rte_mempool_create() is being called directly for packet pools - bypassing the more semantically correct call to rte_pktmbuf_* for packet pools.
> >
> > In (2) control path, RTE_MBUF_DEFAULT_MEMPOOLS_OPS wouldn't be able to replace custom handler operations for packet buffer allocations.
> >
> > From a performance point-of-view, Applications should be able to select between packet pools and non-packet pools.
>
> This is intended for backward compatibility, and API consistency. Any
> applications that use
> rte_mempool_create directly will continue to use the default mempool
> handlers. If they need
> to use a custom handler, they will need to be modified to call the newer
> API,
> rte_mempool_create_empty and rte_mempool_set_ops_byname.
>
>
> >>> <snip>
> >>>> + /*
> >>>> + * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
> >>>> + * set the correct index into the table of ops structs.
> >>>> + */
> >>>> + if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> >>>> + rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> >>>> + else if (flags & MEMPOOL_F_SP_PUT)
> >>>> + rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> >>>> + else if (flags & MEMPOOL_F_SC_GET)
> >>>> + rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> >>>> + else
> >>>> + rte_mempool_set_ops_byname(mp, "ring_mp_mc");
> >>>> +
> > My suggestion is to have an additional flag, 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >
> > ...
> > #define MEMPOOL_F_SC_GET 0x0008
> > #define MEMPOOL_F_PKT_ALLOC 0x0010
> > ...
> >
> > in rte_mempool_create_empty:
> > ... after checking the other MEMPOOL_F_* flags...
> >
> > if (flags & MEMPOOL_F_PKT_ALLOC)
> > rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >
> > And removing the redundant call to rte_mempool_set_ops_byname() in rte_pktmbuf_create_pool().
> >
> > Thereafter, rte_pktmbuf_pool_create can be changed to:
> >
> > ...
> > mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> > - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> > + sizeof(struct rte_pktmbuf_pool_private), socket_id,
> > + MEMPOOL_F_PKT_ALLOC);
> > if (mp == NULL)
> > return NULL;
>
> Yes, this would simplify somewhat the creation of a pktmbuf pool, in
> that it replaces
> the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
> want
> to introduce a third method of creating a mempool to the developers. If we
> introduced this, we would then have:
> 1. rte_pktmbuf_pool_create()
> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
> use the configured custom handler)
> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
> by a call to rte_mempool_set_ops_byname() (would allow several
> different custom
> handlers to be used in one application
>
> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
I am quite careful about this topic as I don't feel very involved in all the
use cases. My opinion is that the _new API_ should be able to cover all cases and
the _old API_ should be backwards compatible, however, built on top of the _new API_.
I.e. I think the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relics of the old API)
should be accepted by the old API ONLY. The rte_mempool_create_empty should not process
them. Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not pollute
rte_mempool_create_empty with this either.
Overall, we would get exactly 2 approaches (and not more):
* using rte_mempool_create with flags calling the rte_mempool_create_empty and
rte_mempool_set_ops_byname internally (so this layer can be marked as deprecated
and removed in the future)
* using rte_mempool_create_empty + rte_mempool_set_ops_byname - allowing any customizations
but with the necessity to change the applications (new preferred API)
So, the old applications can stay as they are (OK, with a possible new flag
MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you have to set the ops
explicitly.
The more different ways of using those APIs we have, the greater hell we have to maintain.
Regards
Jan
>
> Regards,
> Dave.
>
>
On 9/6/2016 1:30 PM, Jerin Jacob wrote:
> On Thu, Jun 09, 2016 at 11:49:44AM +0000, Shreyansh Jain wrote:
>> Hi Jerin,
> Hi Shreyansh,
>
>>>> Yes, this would simplify somewhat the creation of a pktmbuf pool, in that
>>> it
>>>> replaces
>>>> the rte_mempool_set_ops_byname with a flag bit. However, I'm not sure we
>>>> want
>>>> to introduce a third method of creating a mempool to the developers. If we
>>>> introduced this, we would then have:
>>>> 1. rte_pktmbuf_pool_create()
>>>> 2. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which would
>>>> use the configured custom handler)
>>>> 3. rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set followed
>>>> by a call to rte_mempool_set_ops_byname() (would allow several different
>>>> custom
>>>> handlers to be used in one application
>>>>
>>>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>>> As I mentioned earlier, My take is not to create the separate API's for
>>> external mempool handlers.In my view, It's same, just that sepreate
>>> mempool handler through function pointers.
>>>
>>> To keep the backward compatibility, I think we can extend the flags
>>> in rte_mempool_create and have a single API external/internal pool
>>> creation(makes easy for existing applications too, add a just mempool
>>> flag command line argument to existing applications to choose the
>>> mempool handler)
>> May be I am interpreting it wrong, but, are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as command line argument)?
>> That would be inefficient especially for cases where pool is backed by a hardware. The application wouldn't want its generic buffers to consume hardware resource which would be better used for packets.
> It may vary from platform to platform or particular use case. For instance,
> the HW external pool manager for generic buffers may scale better than SW multi
> producers/multi-consumer implementation when the number of cores > N
> (as no locking involved in enqueue/dequeue(again it is depended on
> specific HW implementation))
>
> I thought their no harm in selecting the external pool handlers
> in root level itself(rte_mempool_create) as by default it is
> SW MP/MC and it just an option to override if the application wants it.
>
> Jerin
>
So, how about we go with the following, based on Shreyansh's suggestion:
1. Add in #define MEMPOOL_F_EMM_ALLOC 0x0010 (EMM for External Mempool
Manager)
2. Check this bit in rte_mempool_create() just before the other bits are
checked (by the way, the flags check has been moved to
rte_mempool_create(), as per an earlier patchset, but was inadvertently
reverted)
/*
 * First check to see if we use the config'd mempool handler.
 * Then examine the combinations of SP/SC/MP/MC flags to
 * set the correct index into the table of ops structs.
 */
if (flags & MEMPOOL_F_EMM_ALLOC)
	rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
	rte_mempool_set_ops_byname(mp, "ring_sp_sc");
else if (flags & MEMPOOL_F_SP_PUT)
	rte_mempool_set_ops_byname(mp, "ring_sp_mc");
else if (flags & MEMPOOL_F_SC_GET)
	rte_mempool_set_ops_byname(mp, "ring_mp_sc");
else
	rte_mempool_set_ops_byname(mp, "ring_mp_mc");
3. Modify rte_pktmbuf_pool_create to pass the bit to rte_mempool_create
- sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+ sizeof(struct rte_pktmbuf_pool_private), socket_id,
+ MEMPOOL_F_EMM_ALLOC);
This will allow legacy apps to use one external handler (as defined by
RTE_MBUF_DEFAULT_MEMPOOL_OPS) by adding the MEMPOOL_F_EMM_ALLOC bit to
their flags in the call to rte_mempool_create().
Of course, if an app wants to use more than one external handler, they
can call create_empty and set_ops_byname() for each mempool they create.
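For a legacy app the opt-in would then be a one-flag change, roughly as
sketched below (the flag is only the one proposed above, it does not exist
yet; NB_MBUF, MBUF_SIZE and CACHE_SIZE stand for the app's own constants):

    mp = rte_mempool_create("pkt_pool", NB_MBUF, MBUF_SIZE, CACHE_SIZE,
                    sizeof(struct rte_pktmbuf_pool_private),
                    rte_pktmbuf_pool_init, NULL,
                    rte_pktmbuf_init, NULL,
                    rte_socket_id(), MEMPOOL_F_EMM_ALLOC);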
Regards,
Dave.
On Thu, Jun 09, 2016 at 02:18:57PM +0100, Hunt, David wrote:
>
>
> > > > As I mentioned earlier, My take is not to create the separate API's for
> > > > external mempool handlers.In my view, It's same, just that sepreate
> > > > mempool handler through function pointers.
> > > >
> > > > To keep the backward compatibility, I think we can extend the flags
> > > > in rte_mempool_create and have a single API external/internal pool
> > > > creation(makes easy for existing applications too, add a just mempool
> > > > flag command line argument to existing applications to choose the
> > > > mempool handler)
> > > May be I am interpreting it wrong, but, are you suggesting a single mempool handler for all buffer/packet needs of an application (passed as command line argument)?
> > > That would be inefficient especially for cases where pool is backed by a hardware. The application wouldn't want its generic buffers to consume hardware resource which would be better used for packets.
> > It may vary from platform to platform or particular use case. For instance,
> > the HW external pool manager for generic buffers may scale better than SW multi
> > producers/multi-consumer implementation when the number of cores > N
> > (as no locking involved in enqueue/dequeue(again it is depended on
> > specific HW implementation))
> >
> > I thought their no harm in selecting the external pool handlers
> > in root level itself(rte_mempool_create) as by default it is
> > SW MP/MC and it just an option to override if the application wants it.
> >
> > Jerin
> >
>
>
> So, how about we go with the following, based on Shreyansh's suggestion:
>
> 1. Add in #define MEMPOOL_F_EMM_ALLOC 0x0010 (EMM for External Mempool
> Manager)
>
> 2. Check this bit in rte_mempool_create() just before the other bits are
> checked (by the way, the flags check has been moved to rte_mempool_create(),
> as per an earlier patchset, but was inadvertently reverted)
>
> /*
>  * First check to see if we use the config'd mempool handler.
>  * Then examine the combinations of SP/SC/MP/MC flags to
>  * set the correct index into the table of ops structs.
>  */
> if (flags & MEMPOOL_F_EMM_ALLOC)
> 	rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS);
> else if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
> 	rte_mempool_set_ops_byname(mp, "ring_sp_sc");
> else if (flags & MEMPOOL_F_SP_PUT)
> 	rte_mempool_set_ops_byname(mp, "ring_sp_mc");
> else if (flags & MEMPOOL_F_SC_GET)
> 	rte_mempool_set_ops_byname(mp, "ring_mp_sc");
> else
> 	rte_mempool_set_ops_byname(mp, "ring_mp_mc");
>
> 3. Modify rte_pktmbuf_pool_create to pass the bit to rte_mempool_create
>
> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> + sizeof(struct rte_pktmbuf_pool_private), socket_id,
> + MEMPOOL_F_EMM_ALLOC);
>
>
> This will allow legacy apps to use one external handler ( as defined
> RTE_MBUF_DEFAULT_MEMPOOL_OPS) by adding the MEMPOOL_F_EMM_ALLOC bit to their
> flags in the call to rte_mempool_create().
>
> Of course, if an app wants to use more than one external handler, they can
> call create_empty and set_ops_byname() for each mempool they create.
+1
Since rte_pktmbuf_pool_create does not take flags, I think this is the only
option left for legacy apps.
>
> Regards,
> Dave.
>
Hi,
On 06/09/2016 03:09 PM, Jan Viktorin wrote:
>>> My suggestion is to have an additional flag,
>>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>>
>>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
>>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
>>>
>>> in rte_mempool_create_empty: ... after checking the other
>>> MEMPOOL_F_* flags...
>>>
>>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
>>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>>
>>> And removing the redundant call to rte_mempool_set_ops_byname()
>>> in rte_pktmbuf_create_pool().
>>>
>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>
>>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>> + sizeof(struct rte_pktmbuf_pool_private), socket_id, +
>>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
>>
>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>> However, I'm not sure we want to introduce a third method of
>> creating a mempool to the developers. If we introduced this, we
>> would then have: 1. rte_pktmbuf_pool_create() 2.
>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>> would use the configured custom handler) 3.
>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
>> followed by a call to rte_mempool_set_ops_byname() (would allow
>> several different custom handlers to be used in one application
>>
>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>
> I am quite careful about this topic as I don't feel to be very
> involved in all the use cases. My opinion is that the _new API_
> should be able to cover all cases and the _old API_ should be
> backwards compatible, however, built on top of the _new API_.
>
> I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> of the old API) should be accepted by the old API ONLY. The
> rte_mempool_create_empty should not process them.
The rte_mempool_create_empty() function already processes these flags
(SC_GET, SP_PUT) as of today.
> Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not polute the
> rte_mempool_create_empty by this anymore.
+1
I think we should stop adding flags. Flags are preferred for independent
features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
MEMPOOL_F_SP_PUT?
Another reason to not add this flag is the rte_mempool library
should not be aware of mbufs. The mbuf pools rely on mempools, but
not the contrary.
> In overall we would get exactly 2 approaches (and not more):
>
> * using rte_mempool_create with flags calling the
> rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> (so this layer can be marked as deprecated and removed in the
> future)
Agree. This was one of the objectives of my mempool rework patchset:
provide a more flexible API, and avoid functions with 10 to 15
arguments.
> * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> allowing any customizations but with the necessity to change the
> applications (new preferred API)
Yes.
And if required, maybe a third API is possible in case of mbuf pools.
Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
to create a pool of mbufs instead of the mempool API. If an application
wants to select specific ops for it, we could add:
rte_pktmbuf_pool_create_<something>(..., name)
instead of using the mempool API.
I think this is what Shreyansh suggests when he says:
It sounds fine if calls to rte_mempool_* can select an external
handler *optionally* - but, if we pass it as command line, it would
be binding (at least, semantically) for rte_pktmbuf_* calls as well.
Isn't it?
> So, the old applications can stay as they are (OK, with a possible
> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> have to set the ops explicitly.
>
> The more different ways of using those APIs we have, the greater hell
> we have to maintain.
I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.
I think David's patch is already a good step forward. Let's do it
step by step. Next step is maybe to update some applications (at least
testpmd) to select a new pool handler dynamically.
Regards,
Olivier
On Fri, 10 Jun 2016 09:29:44 +0200
Olivier Matz <olivier.matz@6wind.com> wrote:
> Hi,
>
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>>
> >>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>>
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>>
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>>
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_create_pool().
> >>>
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>>
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> + sizeof(struct rte_pktmbuf_pool_private), socket_id, +
> >>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
> >>
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have: 1. rte_pktmbuf_pool_create() 2.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >> would use the configured custom handler) 3.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> >> followed by a call to rte_mempool_set_ops_byname() (would allow
> >> several different custom handlers to be used in one application
> >>
> >> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> >
> > I am quite careful about this topic as I don't feel to be very
> > involved in all the use cases. My opinion is that the _new API_
> > should be able to cover all cases and the _old API_ should be
> > backwards compatible, however, built on top of the _new API_.
> >
> > I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
>
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
Yes, I consider it quite strange. When thinking more about the mempool API,
I'd move the flags processing to the rte_mempool_create. Semantically, it
makes more sense as the "empty" clearly describes that it is empty. But with
the flags, it is not... What is the reason to have those flags there?
>
> > Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not polute the
> > rte_mempool_create_empty by this anymore.
>
> +1
>
> I think we should stop adding flags. Flags are prefered for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
+1 :)
>
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
>
>
> > In overall we would get exactly 2 approaches (and not more):
Well, now I can see that I've just written the same thing but with my own words :).
> >
> > * using rte_mempool_create with flags calling the
> > rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> > (so this layer can be marked as deprecated and removed in the
> > future)
>
> Agree. This was one of the objective of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
>
> > * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> > allowing any customizations but with the necessity to change the
> > applications (new preferred API)
>
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
I don't count this. It's an upper layer, but yes.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
>
> rte_pktmbuf_pool_create_<something>(..., name)
Seems like a good idea.
>
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
>
> It sounds fine if calls to rte_mempool_* can select an external
> handler *optionally* - but, if we pass it as command line, it would
> be binding (at least, semantically) for rte_pktmbuf_* calls as well.
> Isn't it?
I think the question here is whether the processing of such an optional
command-line specification is up to the application or up to the DPDK
core. If we leave it in applications, it's just a matter of API.
>
> > So, the old applications can stay as they are (OK, with a possible
> > new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> > have to set the ops explicitly.
> >
> > The more different ways of using those APIs we have, the greater hell
> > we have to maintain.
>
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.
Your arguments are valid, +1.
>
> I think David's patch is already a good step forward. Let's do it
> step by step. Next step is maybe to update some applications (at least
> testpmd) to select a new pool handler dynamically.
We can probably provide an API for applications to process the command line
and configure a mempool automatically. So, it would be a one-liner or
something like that. Like rte_mempool_create_from_devargs(...).
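Purely as an illustration of that idea (nothing below exists; the prototype
and the devargs string are invented):

    /* hypothetical helper, not part of this patchset */
    struct rte_mempool *
    rte_mempool_create_from_devargs(const char *name, unsigned n,
            unsigned elt_size, unsigned cache_size, unsigned priv_size,
            int socket_id, const char *devargs);

    /* usage: the ops name comes straight from a command line option
     * parsed by the application, e.g. --mempool-ops=my_hw_ops */
    mp = rte_mempool_create_from_devargs("pkt_pool", NB_MBUF, MBUF_SIZE,
            CACHE_SIZE, 0, rte_socket_id(), "ops=my_hw_ops");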
Jan
>
> Regards,
> Olivier
Hi Jan,
On 10/6/2016 9:49 AM, Jan Viktorin wrote:
> On Fri, 10 Jun 2016 09:29:44 +0200
> Olivier Matz <olivier.matz@6wind.com> wrote:
>
>> Hi,
>>
>> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
>>>>> My suggestion is to have an additional flag,
>>>>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>>>>
>>>>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
>>>>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
>>>>>
>>>>> in rte_mempool_create_empty: ... after checking the other
>>>>> MEMPOOL_F_* flags...
>>>>>
>>>>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
>>>>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>>>>
>>>>> And removing the redundant call to rte_mempool_set_ops_byname()
>>>>> in rte_pktmbuf_create_pool().
>>>>>
>>>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>>>
>>>>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>>>> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>>>> + sizeof(struct rte_pktmbuf_pool_private), socket_id, +
>>>>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
>>>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>>>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>>>> However, I'm not sure we want to introduce a third method of
>>>> creating a mempool to the developers. If we introduced this, we
>>>> would then have: 1. rte_pktmbuf_pool_create() 2.
>>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>>>> would use the configured custom handler) 3.
>>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
>>>> followed by a call to rte_mempool_set_ops_byname() (would allow
>>>> several different custom handlers to be used in one application
>>>>
>>>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>>> I am quite careful about this topic as I don't feel to be very
>>> involved in all the use cases. My opinion is that the _new API_
>>> should be able to cover all cases and the _old API_ should be
>>> backwards compatible, however, built on top of the _new API_.
>>>
>>> I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
>>> of the old API) should be accepted by the old API ONLY. The
>>> rte_mempool_create_empty should not process them.
>> The rte_mempool_create_empty() function already processes these flags
>> (SC_GET, SP_PUT) as of today.
> Yes, I consider it quite strange. When thinking more about the mempool API,
> I'd move the flags processing to the rte_mempool_create. Semantically, it
> makes more sense as the "empty" clearly describes that it is empty. But with
> the flags, it is not... What is the reason to have those flags there?
Yes, they should be in rte_mempool_create. They were in an earlier
patch, but regressed.
I'll have them in rte_mempool_create in the next patch.
[...]
Rgds,
Dave.
Hi all,
On 10/6/2016 8:29 AM, Olivier Matz wrote:
> Hi,
>
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
>>>> My suggestion is to have an additional flag,
>>>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
>>>>
>>>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
>>>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
>>>>
>>>> in rte_mempool_create_empty: ... after checking the other
>>>> MEMPOOL_F_* flags...
>>>>
>>>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
>>>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
>>>>
>>>> And removing the redundant call to rte_mempool_set_ops_byname()
>>>> in rte_pktmbuf_create_pool().
>>>>
>>>> Thereafter, rte_pktmbuf_pool_create can be changed to:
>>>>
>>>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
>>>> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
>>>> + sizeof(struct rte_pktmbuf_pool_private), socket_id, +
>>>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
>>> Yes, this would simplify somewhat the creation of a pktmbuf pool,
>>> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
>>> However, I'm not sure we want to introduce a third method of
>>> creating a mempool to the developers. If we introduced this, we
>>> would then have: 1. rte_pktmbuf_pool_create() 2.
>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
>>> would use the configured custom handler) 3.
>>> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
>>> followed by a call to rte_mempool_set_ops_byname() (would allow
>>> several different custom handlers to be used in one application
>>>
>>> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
>> I am quite careful about this topic as I don't feel to be very
>> involved in all the use cases. My opinion is that the _new API_
>> should be able to cover all cases and the _old API_ should be
>> backwards compatible, however, built on top of the _new API_.
>>
>> I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
>> of the old API) should be accepted by the old API ONLY. The
>> rte_mempool_create_empty should not process them.
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
>
>> Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not polute the
>> rte_mempool_create_empty by this anymore.
> +1
>
> I think we should stop adding flags. Flags are prefered for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
>
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
>
>
>> In overall we would get exactly 2 approaches (and not more):
>>
>> * using rte_mempool_create with flags calling the
>> rte_mempool_create_empty and rte_mempool_set_ops_byname internally
>> (so this layer can be marked as deprecated and removed in the
>> future)
> Agree. This was one of the objective of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
>
>> * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
>> allowing any customizations but with the necessity to change the
>> applications (new preferred API)
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
>
> rte_pktmbuf_pool_create_<something>(..., name)
>
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
>
> It sounds fine if calls to rte_mempool_* can select an external
> handler *optionally* - but, if we pass it as command line, it would
> be binding (at least, semantically) for rte_pktmbuf_* calls as well.
> Isn't it?
>
>
>> So, the old applications can stay as they are (OK, with a possible
>> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
>> have to set the ops explicitly.
>>
>> The more different ways of using those APIs we have, the greater hell
>> we have to maintain.
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.
I would tend to agree, even though yesterday I proposed making that change.
However, thinking about it some more, I'm not totally happy with the
MEMPOOL_F_PKT_ALLOC addition. It adds yet another method of creating a
mempool, and I think that may introduce some confusion for some developers.
I also like the rte_pktmbuf_pool_create_<something>(..., name) suggestion
above; I was thinking the same myself last night, and I would prefer that
rather than adding the MEMPOOL_F_PKT_ALLOC flag. Developers can add that
function into their apps as a wrapper around
rte_mempool_create_empty->rte_mempool_set_ops_byname should they need
more than one pktmbuf allocator. Otherwise they can use the one that makes
use of the RTE_MBUF_DEFAULT_MEMPOOL_OPS config setting.
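Such a wrapper could look roughly like the sketch below (the function name is
made up and it is not part of this patchset; it assumes the two-argument
rte_mempool_set_ops_byname() from this series and simply mirrors what
rte_pktmbuf_pool_create() does internally):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    app_pktmbuf_pool_create_byops(const char *name, unsigned n,
            unsigned cache_size, uint16_t priv_size,
            uint16_t data_room_size, int socket_id, const char *ops_name)
    {
            struct rte_pktmbuf_pool_private mbp_priv = {
                    .mbuf_data_room_size = data_room_size,
                    .mbuf_priv_size = priv_size,
            };
            unsigned elt_size = sizeof(struct rte_mbuf) + priv_size +
                    data_room_size;
            struct rte_mempool *mp;

            mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                    sizeof(mbp_priv), socket_id, 0);
            if (mp == NULL)
                    return NULL;

            /* pick the handler explicitly instead of the configured default */
            if (rte_mempool_set_ops_byname(mp, ops_name) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            rte_pktmbuf_pool_init(mp, &mbp_priv);

            if (rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }
            rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
            return mp;
    }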
> I think David's patch is already a good step forward. Let's do it
> step by step. Next step is maybe to update some applications (at least
> testpmd) to select a new pool handler dynamically.
>
> Regards,
> Olivier
On Fri, Jun 10, 2016 at 09:29:44AM +0200, Olivier Matz wrote:
> Hi,
>
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>>
> >>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>>
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>>
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>>
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_create_pool().
> >>>
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>>
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> + sizeof(struct rte_pktmbuf_pool_private), socket_id, +
> >>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
> >>
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have: 1. rte_pktmbuf_pool_create() 2.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >> would use the configured custom handler) 3.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> >> followed by a call to rte_mempool_set_ops_byname() (would allow
> >> several different custom handlers to be used in one application
> >>
> >> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> >
> > I am quite careful about this topic as I don't feel to be very
> > involved in all the use cases. My opinion is that the _new API_
> > should be able to cover all cases and the _old API_ should be
> > backwards compatible, however, built on top of the _new API_.
> >
> > I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
>
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
>
> > Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not polute the
> > rte_mempool_create_empty by this anymore.
>
> +1
>
> I think we should stop adding flags. Flags are prefered for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
+1
MEMPOOL_F_PKT_ALLOC was introduced only for legacy applications, to make them
work with the external pool manager. If we are not taking that path (and
expect applications to change), then IMO we can have a proper mempool
create API to accommodate the external pool and deprecate
rte_mempool_create/rte_mempool_xmem_create,
like:
1) Remove the 10-to-15-argument pool create and make it a structure (more
forward looking)
2) Remove flags
3) Have the same API for external and internal mempool create and
differentiate the handler through "name". NULL can be the default
mempool handler (MPMC) - rough sketch below
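Purely hypothetical, only to illustrate the shape of (1) and (3) - none of
these names exist:

    /* hypothetical replacement for the long-argument-list creators */
    struct rte_mempool_params {
            const char *name;
            unsigned n;
            unsigned elt_size;
            unsigned cache_size;
            unsigned priv_size;
            int socket_id;
            const char *ops_name;   /* NULL => default "ring_mp_mc" handler */
    };

    struct rte_mempool *
    rte_mempool_create_with_params(const struct rte_mempool_params *params);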
>
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
>
Hi David,
> -----Original Message-----
> From: Hunt, David [mailto:david.hunt@intel.com]
> Sent: Friday, June 10, 2016 3:05 PM
> To: Olivier Matz <olivier.matz@6wind.com>; Jan Viktorin
> <viktorin@rehivetech.com>
> Cc: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> Hi all,
>
> On 10/6/2016 8:29 AM, Olivier Matz wrote:
> > Hi,
> >
[...snip...]
> >
> >
> >> So, the old applications can stay as they are (OK, with a possible
> >> new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> >> have to set the ops explicitly.
> >>
> >> The more different ways of using those APIs we have, the greater hell
> >> we have to maintain.
> > I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.
>
> I would tend to agree, even though yesterday I proposed making that
> change. However,
> thinking about it some more, I'm not totally happy with the
> MEMPOOL_F_PKT_ALLOC addition. It adds yet another method of creating a
> mempool,
> and I think that may introduce some confusion with some developers.
>
> I also like the suggestion of rte_pktmbuf_pool_create_<something>(...,
> name) suggested
> above, I was thinking the same myself last night, and I would prefer
> that rather than adding the
> MEMPOOL_F_PKT_ALLOC flag. Developers can add that function into their
> apps as a wrapper
> to rte_mempool_create_empty->rte_mempool_set_ops_byname should the need
> to have
> more than one pktmbuf allocator. Otherwise they can use the one that
> makes use of the
> RTE_MBUF_DEFAULT_MEMPOOL_OPS config setting.
+1
>
>
> > I think David's patch is already a good step forward. Let's do it
> > step by step. Next step is maybe to update some applications (at least
> > testpmd) to select a new pool handler dynamically.
> >
> > Regards,
> > Olivier
Thanks.
-
Shreyansh
Hi Olivier,
> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, June 10, 2016 1:00 PM
> To: Jan Viktorin <viktorin@rehivetech.com>; Hunt, David
> <david.hunt@intel.com>
> Cc: Shreyansh Jain <shreyansh.jain@nxp.com>; dev@dpdk.org;
> jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH v8 1/3] mempool: support external mempool
> operations
>
> Hi,
>
> On 06/09/2016 03:09 PM, Jan Viktorin wrote:
> >>> My suggestion is to have an additional flag,
> >>> 'MEMPOOL_F_PKT_ALLOC', which, if specified, would:
> >>>
> >>> ... #define MEMPOOL_F_SC_GET 0x0008 #define
> >>> MEMPOOL_F_PKT_ALLOC 0x0010 ...
> >>>
> >>> in rte_mempool_create_empty: ... after checking the other
> >>> MEMPOOL_F_* flags...
> >>>
> >>> if (flags & MEMPOOL_F_PKT_ALLOC) rte_mempool_set_ops_byname(mp,
> >>> RTE_MBUF_DEFAULT_MEMPOOL_OPS)
> >>>
> >>> And removing the redundant call to rte_mempool_set_ops_byname()
> >>> in rte_pktmbuf_create_pool().
> >>>
> >>> Thereafter, rte_pktmbuf_pool_create can be changed to:
> >>>
> >>> ... mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
> >>> - sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
> >>> + sizeof(struct rte_pktmbuf_pool_private), socket_id, +
> >>> MEMPOOL_F_PKT_ALLOC); if (mp == NULL) return NULL;
> >>
> >> Yes, this would simplify somewhat the creation of a pktmbuf pool,
> >> in that it replaces the rte_mempool_set_ops_byname with a flag bit.
> >> However, I'm not sure we want to introduce a third method of
> >> creating a mempool to the developers. If we introduced this, we
> >> would then have: 1. rte_pktmbuf_pool_create() 2.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC set (which
> >> would use the configured custom handler) 3.
> >> rte_mempool_create_empty() with MEMPOOL_F_PKT_ALLOC __not__ set
> >> followed by a call to rte_mempool_set_ops_byname() (would allow
> >> several different custom handlers to be used in one application
> >>
> >> Does anyone else have an opinion on this? Oliver, Jerin, Jan?
> >
> > I am quite careful about this topic as I don't feel to be very
> > involved in all the use cases. My opinion is that the _new API_
> > should be able to cover all cases and the _old API_ should be
> > backwards compatible, however, built on top of the _new API_.
> >
> > I.e. I think, the flags MEMPOOL_F_SP_PUT, MEMPOOL_F_SC_GET (relicts
> > of the old API) should be accepted by the old API ONLY. The
> > rte_mempool_create_empty should not process them.
>
> The rte_mempool_create_empty() function already processes these flags
> (SC_GET, SP_PUT) as of today.
>
> > Similarly for a potential MEMPOOL_F_PKT_ALLOC, I would not polute the
> > rte_mempool_create_empty by this anymore.
>
> +1
>
> I think we should stop adding flags. Flags are prefered for independent
> features. Here, what would be the behavior with MEMPOOL_F_PKT_ALLOC +
> MEMPOOL_F_SP_PUT?
>
> Another reason to not add this flag is the rte_mempool library
> should not be aware of mbufs. The mbuf pools rely on mempools, but
> not the contrary.
Agree - the mempool should be agnostic of the mbufs using it.
But the mempool should be aware of the allocator it is using, in my opinion.
And I agree with your argument about "MEMPOOL_F_PKT_ALLOC + MEMPOOL_F_SP_PUT" - it is bad semantics.
>
>
> > In overall we would get exactly 2 approaches (and not more):
> >
> > * using rte_mempool_create with flags calling the
> > rte_mempool_create_empty and rte_mempool_set_ops_byname internally
> > (so this layer can be marked as deprecated and removed in the
> > future)
>
> Agree. This was one of the objective of my mempool rework patchset:
> provide a more flexible API, and avoid functions with 10 to 15
> arguments.
>
> > * using rte_mempool_create_empty + rte_mempool_set_ops_byname -
> > allowing any customizations but with the necessity to change the
> > applications (new preferred API)
>
> Yes.
> And if required, maybe a third API is possible in case of mbuf pools.
> Indeed, the applications are encouraged to use rte_pktmbuf_pool_create()
> to create a pool of mbuf instead of mempool API. If an application
> wants to select specific ops for it, we could add:
>
> rte_pktmbuf_pool_create_<something>(..., name)
>
> instead of using the mempool API.
> I think this is what Shreyansh suggests when he says:
>
> It sounds fine if calls to rte_mempool_* can select an external
> handler *optionally* - but, if we pass it as command line, it would
> be binding (at least, semantically) for rte_pktmbuf_* calls as well.
> Isn't it?
Er. I think I should clarify the context.
I was referring to the 'command-line-argument-for-selecting-external-mem-allocator' comment. I was just highlighting that probably it would cause conflict with the two APIs.
But, having said that, I agree with you about "...applications are encouraged to use rte_pktmbuf_pool_create() to create a pool of mbuf...".
>
>
> > So, the old applications can stay as they are (OK, with a possible
> > new flag MEMPOOL_F_PKT_ALLOC) and the new one can do the same but you
> > have to set the ops explicitly.
> >
> > The more different ways of using those APIs we have, the greater hell
> > we have to maintain.
>
> I'm really not in favor of a MEMPOOL_F_PKT_ALLOC flag in mempool api.
Agree. Flags are not a pretty way of handling mutually exclusive features - they are not intuitive.
A semantically cleaner API is a better approach.
>
> I think David's patch is already a good step forward. Let's do it
> step by step. Next step is maybe to update some applications (at least
> testpmd) to select a new pool handler dynamically.
Fair enough. We can slowly make changes to all applications to show the 'best practice' for creating buffer or packet pools.
>
> Regards,
> Olivier
-
Shreyansh
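
To make the create_empty + set_ops_byname flow discussed above concrete, here is a minimal sketch (it is not part of the patch below). The pool, element and cache sizes are illustrative placeholders, error handling is reduced to the essentials, and rte_mempool_populate_default()/rte_mempool_free() are the existing mempool API calls:

#include <stdio.h>
#include <rte_mempool.h>

/* Illustrative sizes only - not taken from the patch. */
#define MY_POOL_SIZE	8192
#define MY_ELT_SIZE	2048
#define MY_CACHE_SIZE	256

static struct rte_mempool *
create_pool_new_api(const char *name, const char *ops_name)
{
	struct rte_mempool *mp;

	/* 1) create an empty pool: no objects and no backing store yet */
	mp = rte_mempool_create_empty(name, MY_POOL_SIZE, MY_ELT_SIZE,
				      MY_CACHE_SIZE, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	/* 2) select the handler by name before the pool is populated */
	if (rte_mempool_set_ops_byname(mp, ops_name) < 0) {
		fprintf(stderr, "mempool handler '%s' not registered\n",
			ops_name);
		rte_mempool_free(mp);
		return NULL;
	}

	/* 3) populate: this invokes the handler's alloc callback internally */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}

The old rte_mempool_create() path keeps working unchanged, since it maps its SP/SC flags onto the four ring-based handlers internally.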
@@ -161,7 +161,6 @@ per_lcore_mempool_test(__attribute__((unused)) void *arg)
n_get_bulk);
if (unlikely(ret < 0)) {
rte_mempool_dump(stdout, mp);
- rte_ring_dump(stdout, mp->ring);
/* in this case, objects are lost... */
return -1;
}
@@ -42,6 +42,8 @@ LIBABIVER := 2
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += rte_mempool_default.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
@@ -148,7 +148,7 @@ mempool_add_elem(struct rte_mempool *mp, void *obj, phys_addr_t physaddr)
#endif
/* enqueue in ring */
- rte_ring_sp_enqueue(mp->ring, obj);
+ rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
}
/* call obj_cb() for each mempool element */
@@ -303,40 +303,6 @@ rte_mempool_xmem_usage(__rte_unused void *vaddr, uint32_t elt_num,
return (size_t)paddr_idx << pg_shift;
}
-/* create the internal ring */
-static int
-rte_mempool_ring_create(struct rte_mempool *mp)
-{
- int rg_flags = 0, ret;
- char rg_name[RTE_RING_NAMESIZE];
- struct rte_ring *r;
-
- ret = snprintf(rg_name, sizeof(rg_name),
- RTE_MEMPOOL_MZ_FORMAT, mp->name);
- if (ret < 0 || ret >= (int)sizeof(rg_name))
- return -ENAMETOOLONG;
-
- /* ring flags */
- if (mp->flags & MEMPOOL_F_SP_PUT)
- rg_flags |= RING_F_SP_ENQ;
- if (mp->flags & MEMPOOL_F_SC_GET)
- rg_flags |= RING_F_SC_DEQ;
-
- /* Allocate the ring that will be used to store objects.
- * Ring functions will return appropriate errors if we are
- * running as a secondary process etc., so no checks made
- * in this function for that condition.
- */
- r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
- mp->socket_id, rg_flags);
- if (r == NULL)
- return -rte_errno;
-
- mp->ring = r;
- mp->flags |= MEMPOOL_F_RING_CREATED;
- return 0;
-}
-
/* free a memchunk allocated with rte_memzone_reserve() */
static void
rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
@@ -354,7 +320,7 @@ rte_mempool_free_memchunks(struct rte_mempool *mp)
void *elt;
while (!STAILQ_EMPTY(&mp->elt_list)) {
- rte_ring_sc_dequeue(mp->ring, &elt);
+ rte_mempool_ops_dequeue_bulk(mp, &elt, 1);
(void)elt;
STAILQ_REMOVE_HEAD(&mp->elt_list, next);
mp->populated_size--;
@@ -383,13 +349,16 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
unsigned i = 0;
size_t off;
struct rte_mempool_memhdr *memhdr;
- int ret;
/* create the internal ring if not already done */
- if ((mp->flags & MEMPOOL_F_RING_CREATED) == 0) {
- ret = rte_mempool_ring_create(mp);
- if (ret < 0)
- return ret;
+ if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+ rte_errno = 0;
+ mp->pool_data = rte_mempool_ops_alloc(mp);
+ if (mp->pool_data == NULL) {
+ if (rte_errno == 0)
+ return -EINVAL;
+ return -rte_errno;
+ }
}
/* mempool is already populated */
@@ -703,7 +672,7 @@ rte_mempool_free(struct rte_mempool *mp)
rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
rte_mempool_free_memchunks(mp);
- rte_ring_free(mp->ring);
+ rte_mempool_ops_free(mp);
rte_memzone_free(mp->mz);
}
@@ -815,6 +784,20 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0));
te->data = mp;
+
+ /*
+ * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
+ * set the correct index into the table of ops structs.
+ */
+ if (flags & (MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET))
+ rte_mempool_set_ops_byname(mp, "ring_sp_sc");
+ else if (flags & MEMPOOL_F_SP_PUT)
+ rte_mempool_set_ops_byname(mp, "ring_sp_mc");
+ else if (flags & MEMPOOL_F_SC_GET)
+ rte_mempool_set_ops_byname(mp, "ring_mp_sc");
+ else
+ rte_mempool_set_ops_byname(mp, "ring_mp_mc");
+
rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
TAILQ_INSERT_TAIL(mempool_list, te, next);
rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
@@ -930,7 +913,7 @@ rte_mempool_count(const struct rte_mempool *mp)
unsigned count;
unsigned lcore_id;
- count = rte_ring_count(mp->ring);
+ count = rte_mempool_ops_get_count(mp);
if (mp->cache_size == 0)
return count;
@@ -1123,7 +1106,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
fprintf(f, " flags=%x\n", mp->flags);
- fprintf(f, " ring=<%s>@%p\n", mp->ring->name, mp->ring);
+ fprintf(f, " pool=%p\n", mp->pool_data);
fprintf(f, " phys_addr=0x%" PRIx64 "\n", mp->mz->phys_addr);
fprintf(f, " nb_mem_chunks=%u\n", mp->nb_mem_chunks);
fprintf(f, " size=%"PRIu32"\n", mp->size);
@@ -1144,7 +1127,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
}
cache_count = rte_mempool_dump_cache(f, mp);
- common_count = rte_ring_count(mp->ring);
+ common_count = rte_mempool_ops_get_count(mp);
if ((cache_count + common_count) > mp->size)
common_count = mp->size - cache_count;
fprintf(f, " common_pool_count=%u\n", common_count);
@@ -67,6 +67,7 @@
#include <inttypes.h>
#include <sys/queue.h>
+#include <rte_spinlock.h>
#include <rte_log.h>
#include <rte_debug.h>
#include <rte_lcore.h>
@@ -203,10 +204,13 @@ struct rte_mempool_memhdr {
*/
struct rte_mempool {
char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
- struct rte_ring *ring; /**< Ring to store objects. */
+ union {
+ void *pool_data; /**< Ring or pool to store objects */
+ uint64_t pool_id; /**< External mempool identifier */
+ };
const struct rte_memzone *mz; /**< Memzone where pool is allocated */
int flags; /**< Flags of the mempool. */
- int socket_id; /**< Socket id passed at mempool creation. */
+ int socket_id; /**< Socket id passed at create */
uint32_t size; /**< Max size of the mempool. */
uint32_t cache_size; /**< Size of per-lcore local cache. */
uint32_t cache_flushthresh;
@@ -217,6 +221,14 @@ struct rte_mempool {
uint32_t trailer_size; /**< Size of trailer (after elt). */
unsigned private_data_size; /**< Size of private data. */
+ /**
+ * Index into rte_mempool_ops_table array of mempool ops
+ * structs, which contain callback function pointers.
+ * We're using an index here rather than pointers to the callbacks
+ * to facilitate any secondary processes that may want to use
+ * this mempool.
+ */
+ int32_t ops_index;
struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
@@ -235,7 +247,7 @@ struct rte_mempool {
#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
#define MEMPOOL_F_SP_PUT 0x0004 /**< Default put is "single-producer".*/
#define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/
-#define MEMPOOL_F_RING_CREATED 0x0010 /**< Internal: ring is created */
+#define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created */
#define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
/**
@@ -325,6 +337,210 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
+#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+/**
+ * Prototype for implementation specific data provisioning function.
+ *
+ * The function should provide the implementation-specific memory for
+ * use by the other mempool ops functions in a given mempool ops struct.
+ * E.g. the default ops provides an instance of rte_ring for this purpose.
+ * Other implementations will most likely point to a different type of
+ * data structure, which will be transparent to the application programmer.
+ */
+typedef void *(*rte_mempool_alloc_t)(struct rte_mempool *mp);
+
+/**
+ * Free the opaque private data pointed to by mp->pool_data.
+ */
+typedef void (*rte_mempool_free_t)(struct rte_mempool *mp);
+
+/**
+ * Put an object in the external pool.
+ */
+typedef int (*rte_mempool_put_t)(struct rte_mempool *mp,
+ void * const *obj_table, unsigned int n);
+
+/**
+ * Get an object from the external pool.
+ */
+typedef int (*rte_mempool_get_t)(struct rte_mempool *mp,
+ void **obj_table, unsigned int n);
+
+/**
+ * Return the number of available objects in the external pool.
+ */
+typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
+
+/** Structure defining mempool operations */
+struct rte_mempool_ops {
+ char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct */
+ rte_mempool_alloc_t alloc; /**< Allocate private data */
+ rte_mempool_free_t free; /**< Free the external pool. */
+ rte_mempool_put_t put; /**< Put an object. */
+ rte_mempool_get_t get; /**< Get an object. */
+ rte_mempool_get_count get_count; /**< Get the number of available objs. */
+} __rte_cache_aligned;
+
+#define RTE_MEMPOOL_MAX_OPS_IDX 16 /**< Max registered ops structs */
+
+/**
+ * Structure storing the table of registered ops structs, each of which contains
+ * the function pointers for the mempool ops functions.
+ * Each process has its own storage for this ops struct array so that
+ * the mempools can be shared across primary and secondary processes.
+ * The indices used to access the array are valid across processes, whereas
+ * any function pointers stored directly in the mempool struct would not be.
+ * This results in us simply having "ops_index" in the mempool struct.
+ */
+struct rte_mempool_ops_table {
+ rte_spinlock_t sl; /**< Spinlock for add/delete. */
+ uint32_t num_ops; /**< Number of used ops structs in the table. */
+ /**
+ * Storage for all possible ops structs.
+ */
+ struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
+} __rte_cache_aligned;
+
+/** Array of registered ops structs */
+extern struct rte_mempool_ops_table rte_mempool_ops_table;
+
+/**
+ * @internal Get the mempool ops struct from its index.
+ *
+ * @param ops_index
+ * The index of the ops struct in the ops struct table. It must be a valid
+ * index: (0 <= idx < num_ops).
+ * @return
+ * The pointer to the ops struct in the table.
+ */
+static inline struct rte_mempool_ops *
+rte_mempool_ops_get(int ops_index)
+{
+ RTE_VERIFY(ops_index < RTE_MEMPOOL_MAX_OPS_IDX);
+
+ return &rte_mempool_ops_table.ops[ops_index];
+}
+
+/**
+ * @internal Wrapper for mempool_ops alloc callback.
+ *
+ * @param mp
+ * Pointer to the memory pool.
+ * @return
+ * The opaque pointer to the external pool.
+ */
+void *
+rte_mempool_ops_alloc(struct rte_mempool *mp);
+
+/**
+ * @internal Wrapper for mempool_ops get callback.
+ *
+ * @param mp
+ * Pointer to the memory pool.
+ * @param obj_table
+ * Pointer to a table of void * pointers (objects).
+ * @param n
+ * Number of objects to get.
+ * @return
+ * - 0: Success; got n objects.
+ * - <0: Error; code of get function.
+ */
+static inline int
+rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
+ void **obj_table, unsigned n)
+{
+ struct rte_mempool_ops *ops;
+
+ ops = rte_mempool_ops_get(mp->ops_index);
+ return ops->get(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops put callback.
+ *
+ * @param mp
+ * Pointer to the memory pool.
+ * @param obj_table
+ * Pointer to a table of void * pointers (objects).
+ * @param n
+ * Number of objects to put.
+ * @return
+ * - 0: Success; n objects supplied.
+ * - <0: Error; code of put function.
+ */
+static inline int
+rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
+ unsigned n)
+{
+ struct rte_mempool_ops *ops;
+
+ ops = rte_mempool_ops_get(mp->ops_index);
+ return ops->put(mp, obj_table, n);
+}
+
+/**
+ * @internal wrapper for mempool_ops get_count callback.
+ *
+ * @param mp
+ * Pointer to the memory pool.
+ * @return
+ * The number of available objects in the external pool.
+ */
+unsigned
+rte_mempool_ops_get_count(const struct rte_mempool *mp);
+
+/**
+ * @internal wrapper for mempool_ops free callback.
+ *
+ * @param mp
+ * Pointer to the memory pool.
+ */
+void
+rte_mempool_ops_free(struct rte_mempool *mp);
+
+/**
+ * Set the ops of a mempool
+ *
+ * This can only be done on a mempool that is not populated, i.e. just after
+ * a call to rte_mempool_create_empty().
+ *
+ * @param mp
+ * Pointer to the memory pool.
+ * @param name
+ * Name of the ops structure to use for this mempool.
+ * @return
+ *   - 0: Success; the mempool is now using the requested ops functions.
+ *   - -EINVAL: Invalid ops struct name provided.
+ *   - -EEXIST: mempool already has an ops struct assigned.
+ */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name);
+
+/**
+ * Register mempool operations
+ *
+ * @param ops
+ *   Pointer to an ops structure to register.
+ * @return
+ *   - >=0: Success; the index of the ops struct in the table.
+ *   - -EINVAL: some callbacks are missing in the ops struct.
+ *   - -ENOSPC: the maximum number of ops structs has been reached.
+ */
+int rte_mempool_ops_register(const struct rte_mempool_ops *ops);
+
+/**
+ * Macro to statically register the ops of an external mempool manager.
+ * Note that rte_mempool_ops_register() fails silently here when more
+ * than RTE_MEMPOOL_MAX_OPS_IDX ops structs have been registered.
+ */
+#define MEMPOOL_REGISTER_OPS(ops) \
+ void mp_hdlr_init_##ops(void); \
+ void __attribute__((constructor, used)) mp_hdlr_init_##ops(void)\
+ { \
+ rte_mempool_ops_register(&ops); \
+ }
+
/**
* An object callback function for mempool.
*
@@ -774,7 +990,7 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
cache->len += n;
if (cache->len >= flushthresh) {
- rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+ rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache_size],
cache->len - cache_size);
cache->len = cache_size;
}
@@ -785,19 +1001,10 @@ ring_enqueue:
/* push remaining objects in ring */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
- if (is_mp) {
- if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
- rte_panic("cannot put objects in mempool\n");
- }
- else {
- if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
- rte_panic("cannot put objects in mempool\n");
- }
+ if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
+ rte_panic("cannot put objects in mempool\n");
#else
- if (is_mp)
- rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
- else
- rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+ rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
#endif
}
@@ -945,7 +1152,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
uint32_t req = n + (cache_size - cache->len);
/* How many do we require i.e. number to fill the cache + the request */
- ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+ ret = rte_mempool_ops_dequeue_bulk(mp,
+ &cache->objs[cache->len], req);
if (unlikely(ret < 0)) {
/*
* In the offchance that we are buffer constrained,
@@ -972,10 +1180,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
ring_dequeue:
/* get remaining objects from ring */
- if (is_mc)
- ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
- else
- ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+ ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
if (ret < 0)
__MEMPOOL_STAT_ADD(mp, get_fail, n);
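
As an illustration of the ops API declared in the header above, the following sketch (not part of the patch) shows what a complete alternative handler could look like: a trivial spinlock-protected LIFO registered through MEMPOOL_REGISTER_OPS(). The struct stack_pool type and the stack_*() callbacks are made up for this example; a real hardware-backed handler would replace the pointer array with its own backing store.

#include <errno.h>
#include <stdint.h>

#include <rte_errno.h>
#include <rte_malloc.h>
#include <rte_spinlock.h>
#include <rte_mempool.h>

/* Toy backing store: a spinlock-protected LIFO of object pointers. */
struct stack_pool {
	rte_spinlock_t lock;
	uint32_t len;
	uint32_t size;
	void *objs[];
};

static void *
stack_alloc(struct rte_mempool *mp)
{
	struct stack_pool *sp;

	sp = rte_zmalloc_socket("stack_pool",
			sizeof(*sp) + mp->size * sizeof(void *),
			RTE_CACHE_LINE_SIZE, mp->socket_id);
	if (sp == NULL) {
		rte_errno = ENOMEM;
		return NULL;
	}
	rte_spinlock_init(&sp->lock);
	sp->size = mp->size;
	sp->len = 0;
	return sp;	/* stored by the library in mp->pool_data */
}

static int
stack_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
{
	struct stack_pool *sp = mp->pool_data;
	unsigned i;

	rte_spinlock_lock(&sp->lock);
	if (sp->len + n > sp->size) {
		rte_spinlock_unlock(&sp->lock);
		return -ENOBUFS;
	}
	for (i = 0; i < n; i++)
		sp->objs[sp->len++] = obj_table[i];
	rte_spinlock_unlock(&sp->lock);
	return 0;
}

static int
stack_get(struct rte_mempool *mp, void **obj_table, unsigned n)
{
	struct stack_pool *sp = mp->pool_data;
	unsigned i;

	rte_spinlock_lock(&sp->lock);
	if (sp->len < n) {
		rte_spinlock_unlock(&sp->lock);
		return -ENOENT;
	}
	for (i = 0; i < n; i++)
		obj_table[i] = sp->objs[--sp->len];
	rte_spinlock_unlock(&sp->lock);
	return 0;
}

static unsigned
stack_get_count(const struct rte_mempool *mp)
{
	const struct stack_pool *sp = mp->pool_data;

	return sp->len;
}

static void
stack_free(struct rte_mempool *mp)
{
	rte_free(mp->pool_data);
}

static const struct rte_mempool_ops ops_stack = {
	.name = "stack",
	.alloc = stack_alloc,
	.free = stack_free,
	.put = stack_put,
	.get = stack_get,
	.get_count = stack_get_count,
};

MEMPOOL_REGISTER_OPS(ops_stack);

Because the constructor registers the ops at load time, any mempool can then select this handler by calling rte_mempool_set_ops_byname(mp, "stack") before populating.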
new file mode 100644
@@ -0,0 +1,157 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
+static int
+common_ring_mp_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+ return rte_ring_mp_enqueue_bulk((struct rte_ring *)(mp->pool_data),
+ obj_table, n);
+}
+
+static int
+common_ring_sp_put(struct rte_mempool *mp, void * const *obj_table, unsigned n)
+{
+ return rte_ring_sp_enqueue_bulk((struct rte_ring *)(mp->pool_data),
+ obj_table, n);
+}
+
+static int
+common_ring_mc_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+ return rte_ring_mc_dequeue_bulk((struct rte_ring *)(mp->pool_data),
+ obj_table, n);
+}
+
+static int
+common_ring_sc_get(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+ return rte_ring_sc_dequeue_bulk((struct rte_ring *)(mp->pool_data),
+ obj_table, n);
+}
+
+static unsigned
+common_ring_get_count(const struct rte_mempool *mp)
+{
+ return rte_ring_count((struct rte_ring *)(mp->pool_data));
+}
+
+
+static void *
+common_ring_alloc(struct rte_mempool *mp)
+{
+ int rg_flags = 0, ret;
+ char rg_name[RTE_RING_NAMESIZE];
+ struct rte_ring *r;
+
+ ret = snprintf(rg_name, sizeof(rg_name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(rg_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ /* ring flags */
+ if (mp->flags & MEMPOOL_F_SP_PUT)
+ rg_flags |= RING_F_SP_ENQ;
+ if (mp->flags & MEMPOOL_F_SC_GET)
+ rg_flags |= RING_F_SC_DEQ;
+
+ /* Allocate the ring that will be used to store objects.
+ * Ring functions will return appropriate errors if we are
+ * running as a secondary process etc., so no checks made
+ * in this function for that condition. */
+ r = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1),
+ mp->socket_id, rg_flags);
+
+ return r;
+}
+
+static void
+common_ring_free(struct rte_mempool *mp)
+{
+ rte_ring_free((struct rte_ring *)mp->pool_data);
+}
+
+/*
+ * The following 4 declarations of mempool ops structs address
+ * the need for the backward compatible mempool managers for
+ * single/multi producers and single/multi consumers as dictated by the
+ * flags provided to the rte_mempool_create function
+ */
+static const struct rte_mempool_ops ops_mp_mc = {
+ .name = "ring_mp_mc",
+ .alloc = common_ring_alloc,
+ .free = common_ring_free,
+ .put = common_ring_mp_put,
+ .get = common_ring_mc_get,
+ .get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_sc = {
+ .name = "ring_sp_sc",
+ .alloc = common_ring_alloc,
+ .free = common_ring_free,
+ .put = common_ring_sp_put,
+ .get = common_ring_sc_get,
+ .get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_mp_sc = {
+ .name = "ring_mp_sc",
+ .alloc = common_ring_alloc,
+ .free = common_ring_free,
+ .put = common_ring_mp_put,
+ .get = common_ring_sc_get,
+ .get_count = common_ring_get_count,
+};
+
+static const struct rte_mempool_ops ops_sp_mc = {
+ .name = "ring_sp_mc",
+ .alloc = common_ring_alloc,
+ .free = common_ring_free,
+ .put = common_ring_sp_put,
+ .get = common_ring_mc_get,
+ .get_count = common_ring_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(ops_mp_mc);
+MEMPOOL_REGISTER_OPS(ops_sp_sc);
+MEMPOOL_REGISTER_OPS(ops_mp_sc);
+MEMPOOL_REGISTER_OPS(ops_sp_mc);
new file mode 100644
@@ -0,0 +1,149 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ * Copyright(c) 2016 6WIND S.A.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include <rte_mempool.h>
+#include <rte_errno.h>
+
+/* indirect jump table to support external memory pools */
+struct rte_mempool_ops_table rte_mempool_ops_table = {
+	.sl = RTE_SPINLOCK_INITIALIZER,
+ .num_ops = 0
+};
+
+/* add a new ops struct in rte_mempool_ops_table, return its index */
+int
+rte_mempool_ops_register(const struct rte_mempool_ops *h)
+{
+ struct rte_mempool_ops *ops;
+ int16_t ops_index;
+
+ rte_spinlock_lock(&rte_mempool_ops_table.sl);
+
+ if (rte_mempool_ops_table.num_ops >=
+ RTE_MEMPOOL_MAX_OPS_IDX) {
+ rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+ RTE_LOG(ERR, MEMPOOL,
+ "Maximum number of mempool ops structs exceeded\n");
+ return -ENOSPC;
+ }
+
+ if (h->put == NULL || h->get == NULL || h->get_count == NULL) {
+ rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+ RTE_LOG(ERR, MEMPOOL,
+ "Missing callback while registering mempool ops\n");
+ return -EINVAL;
+ }
+
+	if (strlen(h->name) >= sizeof(ops->name) - 1) {
+		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
+			__func__, h->name);
+		rte_errno = EEXIST;
+		return -EEXIST;
+	}
+
+ ops_index = rte_mempool_ops_table.num_ops++;
+ ops = &rte_mempool_ops_table.ops[ops_index];
+ snprintf(ops->name, sizeof(ops->name), "%s", h->name);
+	ops->alloc = h->alloc;
+	ops->free = h->free;
+	ops->put = h->put;
+	ops->get = h->get;
+	ops->get_count = h->get_count;
+
+ rte_spinlock_unlock(&rte_mempool_ops_table.sl);
+
+ return ops_index;
+}
+
+/* wrapper to allocate an external mempool's private (pool) data */
+void *
+rte_mempool_ops_alloc(struct rte_mempool *mp)
+{
+ struct rte_mempool_ops *ops;
+
+ ops = rte_mempool_ops_get(mp->ops_index);
+ if (ops->alloc == NULL)
+ return NULL;
+ return ops->alloc(mp);
+}
+
+/* wrapper to free an external mempool's private (pool) data */
+void
+rte_mempool_ops_free(struct rte_mempool *mp)
+{
+ struct rte_mempool_ops *ops;
+
+ ops = rte_mempool_ops_get(mp->ops_index);
+ if (ops->free == NULL)
+ return;
+	ops->free(mp);
+}
+
+/* wrapper to get available objects in an external mempool */
+unsigned int
+rte_mempool_ops_get_count(const struct rte_mempool *mp)
+{
+ struct rte_mempool_ops *ops;
+
+ ops = rte_mempool_ops_get(mp->ops_index);
+ return ops->get_count(mp);
+}
+
+/* sets mempool ops previously registered by rte_mempool_ops_register */
+int
+rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name)
+{
+ struct rte_mempool_ops *ops = NULL;
+ unsigned i;
+
+ /* too late, the mempool is already populated */
+ if (mp->flags & MEMPOOL_F_POOL_CREATED)
+ return -EEXIST;
+
+ for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
+ if (!strcmp(name,
+ rte_mempool_ops_table.ops[i].name)) {
+ ops = &rte_mempool_ops_table.ops[i];
+ break;
+ }
+ }
+
+ if (ops == NULL)
+ return -EINVAL;
+
+ mp->ops_index = i;
+ return 0;
+}
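
Putting the two previous sketches together, the per-pool handler choice becomes a run-time decision rather than a compile-time one: an application can keep the default ring handler for its general-purpose pools and reserve a custom handler for the packet pool only, which is the use case raised earlier in the thread. The names below refer to the hypothetical create_pool_new_api() helper and "stack" handler from the earlier sketches; the ops name could equally be taken from a command-line option and passed straight to rte_mempool_set_ops_byname().

static int
setup_pools(void)
{
	/* Only the packet pool uses the toy "stack" handler registered
	 * above; the control pool keeps the default ring-based handler.
	 * Both names are resolved at run time by set_ops_byname(). */
	struct rte_mempool *pkt_mp = create_pool_new_api("pkt_pool", "stack");
	struct rte_mempool *ctl_mp = create_pool_new_api("ctl_pool", "ring_mp_mc");

	if (pkt_mp == NULL || ctl_mp == NULL)
		return -1;
	return 0;
}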