[v4,1/1] app/testpmd: add valid check to verify multi mempool feature

Message ID 20221118141334.3825072-1-hpothula@marvell.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series [v4,1/1] app/testpmd: add valid check to verify multi mempool feature

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/intel-Testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/github-robot: build success github build: passed
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS

Commit Message

Hanumanth Pothula Nov. 18, 2022, 2:13 p.m. UTC
  Validate the ethdev parameter 'max_rx_mempools' to know whether the
device supports the multi-mempool feature or not.

Bugzilla ID: 1128

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
v4:
 - updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spell, whether.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/testpmd.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
  

Comments

Ferruh Yigit Nov. 18, 2022, 8:56 p.m. UTC | #1
On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 

My preference would be to revert the testpmd patch [1] that adds this new
feature after -rc2, and add it back next release with a new testpmd
argument and the below-mentioned changes in the setup function.

@Andrew, @Thomas, @Jerin, what do you think?


[1]
4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

> Bugzilla ID: 1128
> 

Can you please add a Fixes line?

> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Please put the changelog after '---', so that git will take it as a note.

> v4:
>  - updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spell, whether.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/testpmd.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..c1b4dbd716 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>  	struct rte_mempool *mpx;
> +	struct rte_eth_dev_info dev_info;
>  	unsigned int i, mp_n;
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
>  	/* Verify Rx queue configuration is single pool and segment or
>  	 * multiple pool/segment.
> +	 * @see rte_eth_dev_info::max_rx_mempools
>  	 * @see rte_eth_rxconf::rx_mempools
>  	 * @see rte_eth_rxconf::rx_seg
>  	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||

Using `dev_info.max_rx_mempools` for the check means that if the device
supports multiple mempools, multiple mempools will be configured independent
of the user configuration. But the user may prefer a single mempool or buffer split.

Right now the only PMD supporting multiple mempools is 'cnxk', so this doesn't
impact others, but I think this is not correct.

Instead of re-using the testpmd "mbuf-size" parameter (it is already used
for two other features, which is the reason for the defect), it would
be better to have an explicit parameter for the multiple mempool feature.


> +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
>  		/* Single pool/segment configuration */
>  		rx_conf->rx_seg = NULL;
>  		rx_conf->rx_nseg = 0;


The logic seems correct, although I have not tested it.

The current function tries to detect the requested feature and set up the
queues accordingly; the features are:
- single mempool
- packet split (into multiple mempools)
- multiple mempools (various sizes)

And the logic in the function is:
``
if ( (! multiple mempool) && (! packet split))
	setup for single mempool
	exit

if (packet split)
	setup packet split
else
	setup multiple mempool
``

What do you think about:
a) simplifying the logic by making single mempool the fallback and last option,
instead of detecting the non-existence of other configs
b) having an explicit check for multiple mempool

Like:

``
if (packet split)
	setup packet split
	exit
else if (multiple mempool)
	setup multiple mempool
	exit

setup for single mempool
``

I think this both solves the defect and simplifies the code.
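
For illustration, a minimal standalone sketch of the proposed order of checks
is below; the helper name and parameters (select_rxq_mode() etc.) are
illustrative only and not part of the actual testpmd code, and the relevant
inputs are passed as parameters instead of using testpmd globals:

``
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum rxq_mode {
	RXQ_MODE_BUFFER_SPLIT,   /* packet split across segments */
	RXQ_MODE_MULTI_MEMPOOL,  /* multiple mempools of various sizes */
	RXQ_MODE_SINGLE_POOL,    /* fallback: single pool and segment */
};

static enum rxq_mode
select_rxq_mode(uint16_t nb_segs, bool buffer_split_offload,
		bool multi_mempool_requested)
{
	/* Explicit features first; single pool is the last option. */
	if (nb_segs > 1 || buffer_split_offload)
		return RXQ_MODE_BUFFER_SPLIT;
	if (multi_mempool_requested)
		return RXQ_MODE_MULTI_MEMPOOL;
	return RXQ_MODE_SINGLE_POOL;
}

int
main(void)
{
	/* Bug 1128 scenario: several mbuf sizes configured, but neither
	 * buffer split nor multi-mempool explicitly requested, so the
	 * fallback (single pool) is selected. */
	printf("mode = %d\n", select_rxq_mode(1, false, false));
	return 0;
}
``

The key difference from the current code is that single mempool is no longer
inferred from the absence of the other configurations.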
  
Hanumanth Pothula Nov. 19, 2022, midnight UTC | #2
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Saturday, November 19, 2022 2:26 AM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>
> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
> Zhang <yuying.zhang@intel.com>
> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify
> multi mempool feature
> 
> External Email
> 
> ----------------------------------------------------------------------
> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> 
> My preference would be revert the testpmd patch [1] that adds this new
> feature after -rc2, and add it back next release with new testpmd argument
> and below mentioned changes in setup function.
> 
> @Andrew, @Thomas, @Jerin, what do you think?
> 
> 
> [1]
> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> > Bugzilla ID: 1128
> >
> 
> Can you please add fixes line?
> 
Ack
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> 
> Please put the changelog after '---', which than git will take it as note.
> 
Ack
> > v4:
> >  - updated if condition.
> > v3:
> >  - Simplified conditional check.
> >  - Corrected spell, whether.
> > v2:
> >  - Rebased on tip of next-net/main.
> > ---
> >  app/test-pmd/testpmd.c | 10 ++++++++--
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 4e25f77c6a..c1b4dbd716 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
> rx_queue_id,
> >  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >  	struct rte_mempool *mpx;
> > +	struct rte_eth_dev_info dev_info;
> >  	unsigned int i, mp_n;
> >  	uint32_t prev_hdrs = 0;
> >  	int ret;
> >
> > +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +	if (ret != 0)
> > +		return ret;
> > +
> >  	/* Verify Rx queue configuration is single pool and segment or
> >  	 * multiple pool/segment.
> > +	 * @see rte_eth_dev_info::max_rx_mempools
> >  	 * @see rte_eth_rxconf::rx_mempools
> >  	 * @see rte_eth_rxconf::rx_seg
> >  	 */
> > -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> 0))) {
> > +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
> 
> Using `dev_info.max_rx_mempools` for check means if device supports
> multiple mempool, multiple mempool will be configured independent from
> user configuration. But user may prefer singe mempool or buffer split.
> 
Please find my suggested logic.

> Right now only PMD support multiple mempool is 'cnxk', so this doesn't
> impact others but I think this is not correct.
> 
> Instead of re-using testpmd "mbuf-size" parameter (it is already used for
> two other features, and this is the reason of the defect) it would be better
> to have an explicit parameter for multiple mempool feature.
> 
> 
> > +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
> 0))) {
> >  		/* Single pool/segment configuration */
> >  		rx_conf->rx_seg = NULL;
> >  		rx_conf->rx_nseg = 0;
> 
> 
> Logic seems correct, although I have not tested.
> 
> Current functions tries to detect the requested feature and setup queues
> accordingly, features are:
> - single mempool
> - packet split (to multiple mempool)
> - multiple mempool (various size)
> 
> And the logic in the function is:
> ``
> if ( (! multiple mempool) && (! packet split))
> 	setup for single mempool
> 	exit
> 
> if (packet split)
> 	setup packet split
> else
> 	setup multiple mempool
> ``
> 
> What do you think to
> a) simplify logic by making single mempool as fallback and last option,
> instead of detecting non existence of other configs
> b) have explicit check for multiple mempool
> 
> Like:
> 
> ``
> if (packet split)
> 	setup packet split
> 	exit
> else if (multiple mempool)
> 	setup multiple mempool
> 	exit
> 
> setup for single mempool
> ``
> 
> I think this both solves the defect and simplifies the code.

Yes Ferruh, your suggested logic simplifies the code.

Along the lines of your proposed logic, the below if conditions might work fine for all features (buffer-split/multi-mempool) supported by the PMD and for the user preference:

if (rx_pkt_nb_segs > 1 ||
            rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
	/*multi-segment (buffer split)*/
} else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
	/*multi-mempool*/
} else {
	/* single pool and segment */
} 

Or, adding a new Rx offload parameter for the multi-mempool feature; though I think that might not be required, as using dev_info.max_rx_mempools works fine:

if (rx_pkt_nb_segs > 1 ||
            rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
	/*multi-segment (buffer split)*/
} else if (mbuf_data_size_n > 1 && rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MULTI_MEMPOOL ) {
	/*multi-mempool*/
} else {
	/* single pool and segment */
}

Please let me know your inputs on the above logic.
  
Ferruh Yigit Nov. 21, 2022, 10:08 a.m. UTC | #3
On 11/19/2022 12:00 AM, Hanumanth Reddy Pothula wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Saturday, November 19, 2022 2:26 AM
>> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
>> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
>> Dabilpuram <ndabilpuram@marvell.com>
>> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
>> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
>> Zhang <yuying.zhang@intel.com>
>> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to verify
>> multi mempool feature
>>
>> External Email
>>
>> ----------------------------------------------------------------------
>> On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>
>> My preference would be revert the testpmd patch [1] that adds this new
>> feature after -rc2, and add it back next release with new testpmd argument
>> and below mentioned changes in setup function.
>>
>> @Andrew, @Thomas, @Jerin, what do you think?
>>
>>
>> [1]
>> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
>>
>>> Bugzilla ID: 1128
>>>
>>
>> Can you please add fixes line?
>>
> Ack
>>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>>
>> Please put the changelog after '---', which than git will take it as note.
>>
> Ack
>>> v4:
>>>  - updated if condition.
>>> v3:
>>>  - Simplified conditional check.
>>>  - Corrected spell, whether.
>>> v2:
>>>  - Rebased on tip of next-net/main.
>>> ---
>>>  app/test-pmd/testpmd.c | 10 ++++++++--
>>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
>>> 4e25f77c6a..c1b4dbd716 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id, uint16_t
>> rx_queue_id,
>>>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
>>>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
>>>  	struct rte_mempool *mpx;
>>> +	struct rte_eth_dev_info dev_info;
>>>  	unsigned int i, mp_n;
>>>  	uint32_t prev_hdrs = 0;
>>>  	int ret;
>>>
>>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> +	if (ret != 0)
>>> +		return ret;
>>> +
>>>  	/* Verify Rx queue configuration is single pool and segment or
>>>  	 * multiple pool/segment.
>>> +	 * @see rte_eth_dev_info::max_rx_mempools
>>>  	 * @see rte_eth_rxconf::rx_mempools
>>>  	 * @see rte_eth_rxconf::rx_seg
>>>  	 */
>>> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
>>> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
>> 0))) {
>>> +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
>>
>> Using `dev_info.max_rx_mempools` for check means if device supports
>> multiple mempool, multiple mempool will be configured independent from
>> user configuration. But user may prefer singe mempool or buffer split.
>>
> Please find my suggested logic.
> 
>> Right now only PMD support multiple mempool is 'cnxk', so this doesn't
>> impact others but I think this is not correct.
>>
>> Instead of re-using testpmd "mbuf-size" parameter (it is already used for
>> two other features, and this is the reason of the defect) it would be better
>> to have an explicit parameter for multiple mempool feature.
>>
>>
>>> +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
>> 0))) {
>>>  		/* Single pool/segment configuration */
>>>  		rx_conf->rx_seg = NULL;
>>>  		rx_conf->rx_nseg = 0;
>>
>>
>> Logic seems correct, although I have not tested.
>>
>> Current functions tries to detect the requested feature and setup queues
>> accordingly, features are:
>> - single mempool
>> - packet split (to multiple mempool)
>> - multiple mempool (various size)
>>
>> And the logic in the function is:
>> ``
>> if ( (! multiple mempool) && (! packet split))
>> 	setup for single mempool
>> 	exit
>>
>> if (packet split)
>> 	setup packet split
>> else
>> 	setup multiple mempool
>> ``
>>
>> What do you think to
>> a) simplify logic by making single mempool as fallback and last option,
>> instead of detecting non existence of other configs
>> b) have explicit check for multiple mempool
>>
>> Like:
>>
>> ``
>> if (packet split)
>> 	setup packet split
>> 	exit
>> else if (multiple mempool)
>> 	setup multiple mempool
>> 	exit
>>
>> setup for single mempool
>> ``
>>
>> I think this both solves the defect and simplifies the code.
> 
> Yes Ferruh your suggested logic simplifies the code.
> 
> In the lines of your proposed logic,  below if conditions might work fine for all features(buffer-split/multi-mempool) supported by PMD and user preference,
> 
> if (rx_pkt_nb_segs > 1 ||
>             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> 	/*multi-segment (buffer split)*/
> } else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
> 	/*multi-mempool*/
> } else {
> 	/* single pool and segment */
> } 
> 

`mbuf_data_size_n > 1` may mean the user is requesting multiple segments, or
buffer split, so I am not sure about using this value to decide on the
multiple mempool feature; it can create side effects, as Bug 1128 does.

> Or  adding new Rx offload parameter for multi_mempool feature, I think it might not be required, using dev_info.max_rx_mempools works fine.
> 

At the ethdev level, we don't need an offload flag, since the separate config
options clarify the intention there.
What is needed is a way for the application (testpmd) to understand the
user's intention and configure the device accordingly.
That is why I think 'dev_info.max_rx_mempools' does not work here: it is a
way for the device to say that multiple mempools are supported, not a way to
get the user's intention.
In your logic, when the device supports multiple mempools, they are used
independent of what the user requests.

I suggest having a testpmd argument to explicitly say the multiple mempool
feature is requested; this will help to distinguish the buffer split,
multiple mempool, and single mempool features.

Thanks.

> if (rx_pkt_nb_segs > 1 ||
>             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> 	/*multi-segment (buffer split)*/
> } else if (mbuf_data_size_n > 1 && rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MULTI_MEMPOOL ) {
> 	/*multi-mempool*/
> } else {
> 	/* single pool and segment */
> }
> 
> Please let me know your inputs on above logic.
>
  
Hanumanth Pothula Nov. 21, 2022, 10:44 a.m. UTC | #4
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Monday, November 21, 2022 3:38 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>
> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>; Yuying
> Zhang <yuying.zhang@intel.com>
> Subject: Re: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> verify multi mempool feature
> 
> On 11/19/2022 12:00 AM, Hanumanth Reddy Pothula wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Saturday, November 19, 2022 2:26 AM
> >> To: Hanumanth Reddy Pothula <hpothula@marvell.com>;
> >> thomas@monjalon.net; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> >> Dabilpuram <ndabilpuram@marvell.com>
> >> Cc: dev@dpdk.org; yux.jiang@intel.com; Jerin Jacob Kollanukkaran
> >> <jerinj@marvell.com>; Aman Singh <aman.deep.singh@intel.com>;
> Yuying
> >> Zhang <yuying.zhang@intel.com>
> >> Subject: [EXT] Re: [PATCH v4 1/1] app/testpmd: add valid check to
> >> verify multi mempool feature
> >>
> >> External Email
> >>
> >> ---------------------------------------------------------------------
> >> - On 11/18/2022 2:13 PM, Hanumanth Pothula wrote:
> >>> Validate ethdev parameter 'max_rx_mempools' to know whether
> device
> >>> supports multi-mempool feature or not.
> >>>
> >>
> >> My preference would be revert the testpmd patch [1] that adds this
> >> new feature after -rc2, and add it back next release with new testpmd
> >> argument and below mentioned changes in setup function.
> >>
> >> @Andrew, @Thomas, @Jerin, what do you think?
> >>
> >>
> >> [1]
> >> 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> >> queue")
> >>
> >>> Bugzilla ID: 1128
> >>>
> >>
> >> Can you please add fixes line?
> >>
> > Ack
> >>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> >>
> >> Please put the changelog after '---', which than git will take it as note.
> >>
> > Ack
> >>> v4:
> >>>  - updated if condition.
> >>> v3:
> >>>  - Simplified conditional check.
> >>>  - Corrected spell, whether.
> >>> v2:
> >>>  - Rebased on tip of next-net/main.
> >>> ---
> >>>  app/test-pmd/testpmd.c | 10 ++++++++--
> >>>  1 file changed, 8 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> >>> 4e25f77c6a..c1b4dbd716 100644
> >>> --- a/app/test-pmd/testpmd.c
> >>> +++ b/app/test-pmd/testpmd.c
> >>> @@ -2655,17 +2655,23 @@ rx_queue_setup(uint16_t port_id,
> uint16_t
> >> rx_queue_id,
> >>>  	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> >>>  	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> >>>  	struct rte_mempool *mpx;
> >>> +	struct rte_eth_dev_info dev_info;
> >>>  	unsigned int i, mp_n;
> >>>  	uint32_t prev_hdrs = 0;
> >>>  	int ret;
> >>>
> >>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> >>> +	if (ret != 0)
> >>> +		return ret;
> >>> +
> >>>  	/* Verify Rx queue configuration is single pool and segment or
> >>>  	 * multiple pool/segment.
> >>> +	 * @see rte_eth_dev_info::max_rx_mempools
> >>>  	 * @see rte_eth_rxconf::rx_mempools
> >>>  	 * @see rte_eth_rxconf::rx_seg
> >>>  	 */
> >>> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> >>> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) !=
> >> 0))) {
> >>> +	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
> >>
> >> Using `dev_info.max_rx_mempools` for check means if device supports
> >> multiple mempool, multiple mempool will be configured independent
> >> from user configuration. But user may prefer singe mempool or buffer
> split.
> >>
> > Please find my suggested logic.
> >
> >> Right now only PMD support multiple mempool is 'cnxk', so this
> >> doesn't impact others but I think this is not correct.
> >>
> >> Instead of re-using testpmd "mbuf-size" parameter (it is already used
> >> for two other features, and this is the reason of the defect) it
> >> would be better to have an explicit parameter for multiple mempool
> feature.
> >>
> >>
> >>> +	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) ==
> >> 0))) {
> >>>  		/* Single pool/segment configuration */
> >>>  		rx_conf->rx_seg = NULL;
> >>>  		rx_conf->rx_nseg = 0;
> >>
> >>
> >> Logic seems correct, although I have not tested.
> >>
> >> Current functions tries to detect the requested feature and setup
> >> queues accordingly, features are:
> >> - single mempool
> >> - packet split (to multiple mempool)
> >> - multiple mempool (various size)
> >>
> >> And the logic in the function is:
> >> ``
> >> if ( (! multiple mempool) && (! packet split))
> >> 	setup for single mempool
> >> 	exit
> >>
> >> if (packet split)
> >> 	setup packet split
> >> else
> >> 	setup multiple mempool
> >> ``
> >>
> >> What do you think to
> >> a) simplify logic by making single mempool as fallback and last
> >> option, instead of detecting non existence of other configs
> >> b) have explicit check for multiple mempool
> >>
> >> Like:
> >>
> >> ``
> >> if (packet split)
> >> 	setup packet split
> >> 	exit
> >> else if (multiple mempool)
> >> 	setup multiple mempool
> >> 	exit
> >>
> >> setup for single mempool
> >> ``
> >>
> >> I think this both solves the defect and simplifies the code.
> >
> > Yes Ferruh your suggested logic simplifies the code.
> >
> > In the lines of your proposed logic,  below if conditions might work
> > fine for all features(buffer-split/multi-mempool) supported by PMD and
> > user preference,
> >
> > if (rx_pkt_nb_segs > 1 ||
> >             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > 	/*multi-segment (buffer split)*/
> > } else if (mbuf_data_size_n > 1 && dev_info.max_rx_mempools > 1) {
> > 	/*multi-mempool*/
> > } else {
> > 	/* single pool and segment */
> > }
> >
> 
> `mbuf_data_size_n > 1` may mean user is requesting multiple segment, or
> buffer split, so I am not sure about using this value to decide on
> multiple mempool feature, it can create side effect as Bug 1128 does.
> 
> > Or  adding new Rx offload parameter for multi_mempool feature, I think it
> might not be required, using dev_info.max_rx_mempools works fine.
> >
> 
> In ethdev level, we don't need an offload flag, since separated config
> options clarify the intention there.
> What is needed is a way to understand users intention, for application
> (testpmd) and configure device accordingly.
> That is why I think 'dev_info.max_rx_mempools' is not working fine,
> because that is a way for device to say multiple mempool is supported,
> it is not to get user intention.
> In your logic when device supports multiple mempool, use it independent
> from what user request.
> 
> I suggest having a testpmd argument explicitly to say multiple mempool
> feature is requested, this will help to distinguish buffer split,
> multiple mempool, single mempool features.
> 
> Thanks.

Sure, I will upload a new patch-set with a testpmd argument that tells whether the multiple mempool feature is enabled or disabled.
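
For reference, a hypothetical sketch of how such an explicit argument could be
parsed is below; the option name "multi-rx-mempool" and the global flag are
assumptions for illustration, not the actual testpmd implementation:

``
#include <getopt.h>
#include <stdbool.h>
#include <stdio.h>

/* User intention, default off; only set when explicitly requested. */
static bool multi_rx_mempool;

static void
parse_args(int argc, char **argv)
{
	static const struct option opts[] = {
		{ "multi-rx-mempool", no_argument, NULL, 1 },
		{ NULL, 0, NULL, 0 },
	};
	int opt;

	while ((opt = getopt_long(argc, argv, "", opts, NULL)) != -1) {
		if (opt == 1)
			multi_rx_mempool = true;
	}
}

int
main(int argc, char **argv)
{
	parse_args(argc, argv);
	/* Later, rx_queue_setup() would check this flag (user intention)
	 * together with dev_info.max_rx_mempools (device capability). */
	printf("multi-rx-mempool requested: %d\n", multi_rx_mempool);
	return 0;
}
``

With such a flag, the setup code can distinguish "device can do multiple
mempools" from "user asked for multiple mempools".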

> 
> > if (rx_pkt_nb_segs > 1 ||
> >             rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > 	/*multi-segment (buffer split)*/
> > } else if (mbuf_data_size_n > 1 && rx_conf->offloads &
> RTE_ETH_RX_OFFLOAD_MULTI_MEMPOOL ) {
> > 	/*multi-mempool*/
> > } else {
> > 	/* single pool and segment */
> > }
> >
> > Please let me know your inputs on above logic.
> >
  

Patch

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..c1b4dbd716 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2655,17 +2655,23 @@  rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
 	struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
 	struct rte_mempool *mpx;
+	struct rte_eth_dev_info dev_info;
 	unsigned int i, mp_n;
 	uint32_t prev_hdrs = 0;
 	int ret;
 
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
 	/* Verify Rx queue configuration is single pool and segment or
 	 * multiple pool/segment.
+	 * @see rte_eth_dev_info::max_rx_mempools
 	 * @see rte_eth_rxconf::rx_mempools
 	 * @see rte_eth_rxconf::rx_seg
 	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+	if ((dev_info.max_rx_mempools == 0) && (rx_pkt_nb_segs <= 1 ||
+	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0))) {
 		/* Single pool/segment configuration */
 		rx_conf->rx_seg = NULL;
 		rx_conf->rx_nseg = 0;