[v7,1/1] app/testpmd: add valid check to verify multi mempool feature

Message ID 20221121180756.3924770-1-hpothula@marvell.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Series: [v7,1/1] app/testpmd: add valid check to verify multi mempool feature

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-broadcom-Performance fail Performance Testing issues
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS

Commit Message

Hanumanth Pothula Nov. 21, 2022, 6:07 p.m. UTC
  Validate the ethdev parameter 'max_rx_mempools' to know whether
the device supports the multi-mempool feature or not.

Also, add a new testpmd command line argument, multi-mempool,
to control the multi-mempool feature. By default it is disabled.

Bugzilla ID: 1128
Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

---
v7:
 - Update testpmd argument name from multi-mempool to multi-rx-mempool.
 - Updated definition of testpmd argument, mbuf-size.
 - Resolved indentation issues.
v6:
 - Updated run_app.rst file with multi-mempool argument.
 - Defined and populated multi_mempool alongside related arguments.
 - Invoked rte_eth_dev_info_get() within the multi-mempool condition.
v5:
 - Added testpmd argument to enable multi-mempool feature.
 - Simplified logic to distinguish between multi-mempool,
   multi-segment and single pool/segment.
v4:
 - Updated if condition.
v3:
 - Simplified conditional check.
 - Corrected spelling of 'whether'.
v2:
 - Rebased on tip of next-net/main.
---
 app/test-pmd/parameters.c             |  7 ++-
 app/test-pmd/testpmd.c                | 64 ++++++++++++++++-----------
 app/test-pmd/testpmd.h                |  1 +
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++
 4 files changed, 50 insertions(+), 26 deletions(-)
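
For context, the heart of the patch is a capability check: a port reports
multi-mempool Rx support through rte_eth_dev_info::max_rx_mempools, and a
value of zero means the feature is unavailable. A minimal sketch of that
check, mirroring the logic in the diff below (the function name is
illustrative):

#include <errno.h>
#include <rte_ethdev.h>

/* Return 0 when the port can accept multiple Rx mempools,
 * a negative errno otherwise.
 */
static int
check_multi_rx_mempool_support(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	if (dev_info.max_rx_mempools == 0)
		return -ENOTSUP;
	return 0;
}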
  

Comments

Ferruh Yigit Nov. 21, 2022, 6:40 p.m. UTC | #1
On 11/21/2022 6:07 PM, Hanumanth Pothula wrote:
> Validate ethdev parameter 'max_rx_mempools' to know whether
> device supports multi-mempool feature or not.
> 
> Also, add new testpmd command line argument, multi-mempool,
> to control multi-mempool feature. By default it is disabled.

s/multi-mempool/multi-rx-mempool/

Also, moving the argument paragraph up.

> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>

With noted issues fixed,
Applied to dpdk-next-net/main, thanks.


Let's wait for the test report before requesting this to be merged to the main repo.

@Yu, @Yuying,

Can you please verify this at the latest head of the next-net tree?

Thanks,
ferruh

> 
> ---
> v7:
>  - Update testpmd argument name from multi-mempool to multi-rx-mempool.
>  - Updated definition of testpmd argument, mbuf-size.
>  - Resolved indentation issues.
> v6:
>  - Updated run_app.rst file with multi-mempool argument.
>  - Defined and populated multi_mempool alongside related arguments.
>  - Invoked rte_eth_dev_info_get() within the multi-mempool condition.
> v5:
>  - Added testpmd argument to enable multi-mempool feature.
>  - Simplified logic to distinguish between multi-mempool,
>    multi-segment and single pool/segment.
> v4:
>  - Updated if condition.
> v3:
>  - Simplified conditional check.
>  - Corrected spelling of 'whether'.
> v2:
>  - Rebased on tip of next-net/main.
> ---
>  app/test-pmd/parameters.c             |  7 ++-
>  app/test-pmd/testpmd.c                | 64 ++++++++++++++++-----------
>  app/test-pmd/testpmd.h                |  1 +
>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++
>  4 files changed, 50 insertions(+), 26 deletions(-)
> 
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index aed4cdcb84..af9ec39cf9 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -88,7 +88,8 @@ usage(char* progname)
>  	       "in NUMA mode.\n");
>  	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
>  	       "N bytes. If multiple numbers are specified the extra pools "
> -	       "will be created to receive with packet split features\n");
> +	       "will be created to receive packets based on the features "
> +	       "supported, like buufer-split, multi-mempool.\n");

s/buufer/buffer/

>  	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
>  	       "in mbuf pools.\n");
>  	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
> @@ -155,6 +156,7 @@ usage(char* progname)
>  	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
>  	printf("  --txpkts=X[,Y]*: set TX segment sizes"
>  		" or total packet length.\n");
> +	printf("  --multi-rx-mempool: enable multi-mempool support\n");
>  	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
>  	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>  	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
> @@ -669,6 +671,7 @@ launch_args_parse(int argc, char** argv)
>  		{ "rxpkts",			1, 0, 0 },
>  		{ "rxhdrs",			1, 0, 0 },
>  		{ "txpkts",			1, 0, 0 },
> +		{ "multi-rx-mempool",           0, 0, 0 },
>  		{ "txonly-multi-flow",		0, 0, 0 },
>  		{ "rxq-share",			2, 0, 0 },
>  		{ "eth-link-speed",		1, 0, 0 },
> @@ -1295,6 +1298,8 @@ launch_args_parse(int argc, char** argv)
>  				else
>  					rte_exit(EXIT_FAILURE, "bad txpkts\n");
>  			}
> +			if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
> +				multi_rx_mempool = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
>  				txonly_multi_flow = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 4e25f77c6a..716937925e 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -245,6 +245,7 @@ uint32_t max_rx_pkt_len;
>   */
>  uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */
>  uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];

Better to move the new variable out of the packet-split-related
variables, and place it below them.
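
That is, roughly (a sketch of the suggested placement, after the last of
the packet-split variables visible in this hunk):

uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];

uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */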

> @@ -2659,24 +2660,9 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  	uint32_t prev_hdrs = 0;
>  	int ret;
>  
> -	/* Verify Rx queue configuration is single pool and segment or
> -	 * multiple pool/segment.
> -	 * @see rte_eth_rxconf::rx_mempools
> -	 * @see rte_eth_rxconf::rx_seg
> -	 */
> -	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> -	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> -		/* Single pool/segment configuration */
> -		rx_conf->rx_seg = NULL;
> -		rx_conf->rx_nseg = 0;
> -		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
> -					     nb_rx_desc, socket_id,
> -					     rx_conf, mp);
> -		goto exit;
> -	}
>  
> -	if (rx_pkt_nb_segs > 1 ||
> -	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> +	if ((rx_pkt_nb_segs > 1) &&
> +	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
>  		/* multi-segment configuration */
>  		for (i = 0; i < rx_pkt_nb_segs; i++) {
>  			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
> @@ -2701,22 +2687,50 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  		}
>  		rx_conf->rx_nseg = rx_pkt_nb_segs;
>  		rx_conf->rx_seg = rx_useg;
> -	} else {
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, NULL);
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +	} else if (multi_rx_mempool == 1) {
>  		/* multi-pool configuration */
> +		struct rte_eth_dev_info dev_info;
> +
> +		if (mbuf_data_size_n <= 1) {
> +			RTE_LOG(ERR, EAL, "invalid number of mempools %u",

This is for EAL logs, not for testpmd; converting them to
"fprintf(stderr, ..." as done in the rest of the file.

> +				mbuf_data_size_n);
> +			return -EINVAL;
> +		}
> +		ret = rte_eth_dev_info_get(port_id, &dev_info);
> +		if (ret != 0)
> +			return ret;
> +		if (dev_info.max_rx_mempools == 0) {
> +			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
> +			return -ENOTSUP;
> +		}
>  		for (i = 0; i < mbuf_data_size_n; i++) {
>  			mpx = mbuf_pool_find(socket_id, i);
>  			rx_mempool[i] = mpx ? mpx : mp;
>  		}
>  		rx_conf->rx_mempools = rx_mempool;
>  		rx_conf->rx_nmempool = mbuf_data_size_n;
> -	}
> -	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
>  				    socket_id, rx_conf, NULL);
> -	rx_conf->rx_seg = NULL;
> -	rx_conf->rx_nseg = 0;
> -	rx_conf->rx_mempools = NULL;
> -	rx_conf->rx_nmempool = 0;
> -exit:
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +	} else {
> +		/* Single pool/segment configuration */
> +		rx_conf->rx_seg = NULL;
> +		rx_conf->rx_nseg = 0;
> +		rx_conf->rx_mempools = NULL;
> +		rx_conf->rx_nmempool = 0;
> +		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> +				    socket_id, rx_conf, mp);
> +	}
> +
>  	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
>  						RTE_ETH_QUEUE_STATE_STOPPED :
>  						RTE_ETH_QUEUE_STATE_STARTED;
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index aaf69c349a..0596d38cd2 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -589,6 +589,7 @@ extern uint32_t max_rx_pkt_len;
>  extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
>  extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
>  extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
> +extern uint8_t multi_rx_mempool; /**< Enables multi-mempool feature. */
>  extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
>  extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
>  
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
> index 610e442924..af84b2260a 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -365,6 +365,10 @@ The command line options are:
>      Set TX segment sizes or total packet length. Valid for ``tx-only``
>      and ``flowgen`` forwarding modes.
>  
> +* ``--multi-rx-mempool``
> +
> +    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
> +
>  *   ``--txonly-multi-flow``
>  
>      Generate multiple flows in txonly mode.
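
As a usage illustration, the new option is meant to be combined with
multiple --mbuf-size values, which back the per-queue mempools passed via
rte_eth_rxconf::rx_mempools. A hypothetical invocation (the PCI address
and pool sizes are placeholders; the port must report a non-zero
max_rx_mempools):

dpdk-testpmd -a 0002:02:00.0 -- -i --multi-rx-mempool --mbuf-size=2048,4096

Without the flag, Rx queue setup falls back to the buffer-split or
single-pool paths exactly as distinguished in rx_queue_setup() above.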
  
Yingya Han Nov. 22, 2022, 6:42 a.m. UTC | #2
> -----Original Message-----
> From: Hanumanth Pothula <hpothula@marvell.com>
> Sent: Tuesday, November 22, 2022 2:08 AM
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; Jiang, YuX <yux.jiang@intel.com>;
> jerinj@marvell.com; ndabilpuram@marvell.com; hpothula@marvell.com
> Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> Validate ethdev parameter 'max_rx_mempools' to know whether device
> supports multi-mempool feature or not.
> 
> Also, add new testpmd command line argument, multi-mempool, to control
> multi-mempool feature. By default it is disabled.
> 
> Bugzilla ID: 1128
> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> queue")
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Tested-by: Yingya Han <yingyax.han@intel.com>
  
Yaqi Tang Nov. 22, 2022, 6:52 a.m. UTC | #3
> -----Original Message-----
> From: Han, YingyaX <yingyax.han@intel.com>
> Sent: Tuesday, November 22, 2022 2:43 PM
> To: Hanumanth Pothula <hpothula@marvell.com>; Singh, Aman Deep
> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; thomas@monjalon.net;
> Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
> ndabilpuram@marvell.com
> Subject: RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> mempool feature
> 
> 
> > -----Original Message-----
> > From: Hanumanth Pothula <hpothula@marvell.com>
> > Sent: Tuesday, November 22, 2022 2:08 AM
> > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> > <yuying.zhang@intel.com>
> > Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net;
> > Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
> > ndabilpuram@marvell.com; hpothula@marvell.com
> > Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
> > mempool feature
> >
> > Validate ethdev parameter 'max_rx_mempools' to know whether device
> > supports multi-mempool feature or not.
> >
> > Also, add new testpmd command line argument, multi-mempool, to
> control
> > multi-mempool feature. By default it is disabled.
> >
> > Bugzilla ID: 1128
> > Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
> > queue")
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> 
> Tested-by: Yingya Han <yingyax.han@intel.com>
> 
> 

Tested-by: Yaqi Tang <yaqi.tang@intel.com>
  
Ferruh Yigit Nov. 22, 2022, 8:33 a.m. UTC | #4
On 11/22/2022 6:52 AM, Tang, Yaqi wrote:
> 
>> -----Original Message-----
>> From: Han, YingyaX <yingyax.han@intel.com>
>> Sent: Tuesday, November 22, 2022 2:43 PM
>> To: Hanumanth Pothula <hpothula@marvell.com>; Singh, Aman Deep
>> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; thomas@monjalon.net;
>> Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
>> ndabilpuram@marvell.com
>> Subject: RE: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
>> mempool feature
>>
>>
>>> -----Original Message-----
>>> From: Hanumanth Pothula <hpothula@marvell.com>
>>> Sent: Tuesday, November 22, 2022 2:08 AM
>>> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>> <yuying.zhang@intel.com>
>>> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
>> thomas@monjalon.net;
>>> Jiang, YuX <yux.jiang@intel.com>; jerinj@marvell.com;
>>> ndabilpuram@marvell.com; hpothula@marvell.com
>>> Subject: [PATCH v7 1/1] app/testpmd: add valid check to verify multi
>>> mempool feature
>>>
>>> Validate ethdev parameter 'max_rx_mempools' to know whether device
>>> supports multi-mempool feature or not.
>>>
>>> Also, add new testpmd command line argument, multi-mempool, to
>> control
>>> multi-mempool feature. By default it is disabled.
>>>
>>> Bugzilla ID: 1128
>>> Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx
>>> queue")
>>>
>>> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>>
>> Tested-by: Yingya Han <yingyax.han@intel.com>
>>
>>
> 
> Tested-by: Yaqi Tang <yaqi.tang@intel.com>

Thanks Yingya, Yaqi, I have updated the commit log with these tags.
  

Patch

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index aed4cdcb84..af9ec39cf9 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -88,7 +88,8 @@  usage(char* progname)
 	       "in NUMA mode.\n");
 	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
 	       "N bytes. If multiple numbers are specified the extra pools "
-	       "will be created to receive with packet split features\n");
+	       "will be created to receive packets based on the features "
+	       "supported, like buufer-split, multi-mempool.\n");
 	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
 	       "in mbuf pools.\n");
 	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -155,6 +156,7 @@  usage(char* progname)
 	printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n");
 	printf("  --txpkts=X[,Y]*: set TX segment sizes"
 		" or total packet length.\n");
+	printf("  --multi-rx-mempool: enable multi-mempool support\n");
 	printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
@@ -669,6 +671,7 @@  launch_args_parse(int argc, char** argv)
 		{ "rxpkts",			1, 0, 0 },
 		{ "rxhdrs",			1, 0, 0 },
 		{ "txpkts",			1, 0, 0 },
+		{ "multi-rx-mempool",           0, 0, 0 },
 		{ "txonly-multi-flow",		0, 0, 0 },
 		{ "rxq-share",			2, 0, 0 },
 		{ "eth-link-speed",		1, 0, 0 },
@@ -1295,6 +1298,8 @@  launch_args_parse(int argc, char** argv)
 				else
 					rte_exit(EXIT_FAILURE, "bad txpkts\n");
 			}
+			if (!strcmp(lgopts[opt_idx].name, "multi-rx-mempool"))
+				multi_rx_mempool = 1;
 			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
 				txonly_multi_flow = 1;
 			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 4e25f77c6a..716937925e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -245,6 +245,7 @@  uint32_t max_rx_pkt_len;
  */
 uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+uint8_t multi_rx_mempool; /**< Enables multi-mempool feature */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
@@ -2659,24 +2660,9 @@  rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	/* Verify Rx queue configuration is single pool and segment or
-	 * multiple pool/segment.
-	 * @see rte_eth_rxconf::rx_mempools
-	 * @see rte_eth_rxconf::rx_seg
-	 */
-	if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
-	    ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
-		/* Single pool/segment configuration */
-		rx_conf->rx_seg = NULL;
-		rx_conf->rx_nseg = 0;
-		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
-					     nb_rx_desc, socket_id,
-					     rx_conf, mp);
-		goto exit;
-	}
 
-	if (rx_pkt_nb_segs > 1 ||
-	    rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+	if ((rx_pkt_nb_segs > 1) &&
+	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 		/* multi-segment configuration */
 		for (i = 0; i < rx_pkt_nb_segs; i++) {
 			struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2701,22 +2687,50 @@  rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		}
 		rx_conf->rx_nseg = rx_pkt_nb_segs;
 		rx_conf->rx_seg = rx_useg;
-	} else {
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, NULL);
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+	} else if (multi_rx_mempool == 1) {
 		/* multi-pool configuration */
+		struct rte_eth_dev_info dev_info;
+
+		if (mbuf_data_size_n <= 1) {
+			RTE_LOG(ERR, EAL, "invalid number of mempools %u",
+				mbuf_data_size_n);
+			return -EINVAL;
+		}
+		ret = rte_eth_dev_info_get(port_id, &dev_info);
+		if (ret != 0)
+			return ret;
+		if (dev_info.max_rx_mempools == 0) {
+			RTE_LOG(ERR, EAL, "device doesn't support requested multi-mempool configuration");
+			return -ENOTSUP;
+		}
 		for (i = 0; i < mbuf_data_size_n; i++) {
 			mpx = mbuf_pool_find(socket_id, i);
 			rx_mempool[i] = mpx ? mpx : mp;
 		}
 		rx_conf->rx_mempools = rx_mempool;
 		rx_conf->rx_nmempool = mbuf_data_size_n;
-	}
-	ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
 				    socket_id, rx_conf, NULL);
-	rx_conf->rx_seg = NULL;
-	rx_conf->rx_nseg = 0;
-	rx_conf->rx_mempools = NULL;
-	rx_conf->rx_nmempool = 0;
-exit:
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+	} else {
+		/* Single pool/segment configuration */
+		rx_conf->rx_seg = NULL;
+		rx_conf->rx_nseg = 0;
+		rx_conf->rx_mempools = NULL;
+		rx_conf->rx_nmempool = 0;
+		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
+				    socket_id, rx_conf, mp);
+	}
+
 	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
 						RTE_ETH_QUEUE_STATE_STOPPED :
 						RTE_ETH_QUEUE_STATE_STARTED;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index aaf69c349a..0596d38cd2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -589,6 +589,7 @@  extern uint32_t max_rx_pkt_len;
 extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
+extern uint8_t multi_rx_mempool; /**< Enables multi-mempool feature. */
 extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 610e442924..af84b2260a 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -365,6 +365,10 @@  The command line options are:
     Set TX segment sizes or total packet length. Valid for ``tx-only``
     and ``flowgen`` forwarding modes.
 
+* ``--multi-rx-mempool``
+
+    Enable multi-mempool, multiple mbuf pools per Rx queue, support.
+
 *   ``--txonly-multi-flow``
 
     Generate multiple flows in txonly mode.