[v2,6/6] app/testpmd: add mempool flags parameter

Message ID 20190319071256.26302-7-xiaolong.ye@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: Introduce AF_XDP PMD

Checks

Context Check Description
ci/Intel-compilation fail Compilation issues
ci/checkpatch warning coding style issues

Commit Message

Xiaolong Ye March 19, 2019, 7:12 a.m. UTC
  When creating an rte_mempool, flags can now be parsed from the command
line. This makes it possible for testpmd to create an AF_XDP-friendly
mempool (which enables zero copy).

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
---
 app/test-pmd/parameters.c             | 12 ++++++++++++
 app/test-pmd/testpmd.c                | 17 ++++++++++-------
 app/test-pmd/testpmd.h                |  1 +
 doc/guides/testpmd_app_ug/run_app.rst |  4 ++++
 4 files changed, 27 insertions(+), 7 deletions(-)
  

Comments

Jerin Jacob Kollanukkaran March 19, 2019, 11:36 p.m. UTC | #1
On Tue, 2019-03-19 at 15:12 +0800, Xiaolong Ye wrote:
> When create rte_mempool, flags can be parsed from command line.
> Now, it is possible for testpmd to create a af_xdp friendly
> mempool (which enable zero copy).
> 
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
> ---
>  app/test-pmd/parameters.c             | 12 ++++++++++++
>  app/test-pmd/testpmd.c                | 17 ++++++++++-------
>  app/test-pmd/testpmd.h                |  1 +
>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++++

If I understand it correctly, the user needs to change all the
applications in order to avail of the zero-copy feature of XDP.

If so,

How about creating a wrapper mempool driver for XDP at drivers/mempool/,
and using the mempool's best-ops feature to select the required mempool
driver for XDP at runtime without changing the apps?

see rte_mbuf_best_mempool_ops()
see struct eth_dev_ops::pool_ops_supported

/Jerin
  
Xiaolong Ye March 20, 2019, 2:08 a.m. UTC | #2
Hi, 

On 03/19, Jerin Jacob Kollanukkaran wrote:
>On Tue, 2019-03-19 at 15:12 +0800, Xiaolong Ye wrote:
>> When create rte_mempool, flags can be parsed from command line.
>> Now, it is possible for testpmd to create a af_xdp friendly
>> mempool (which enable zero copy).
>> 
>> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
>> ---
>>  app/test-pmd/parameters.c             | 12 ++++++++++++
>>  app/test-pmd/testpmd.c                | 17 ++++++++++-------
>>  app/test-pmd/testpmd.h                |  1 +
>>  doc/guides/testpmd_app_ug/run_app.rst |  4 ++++
>
>If I understand it correctly, The user needs to change all the
>application in order to avail zero copy feature of XDP.
>
>If so,
>
>How about creating wrapper mempool driver for xdp at drivers/mempool/?
>and mempool's best mempool feature to select the required mempool
>driver for XDP at runtime without changing the apps.
>
>see rte_mbuf_best_mempool_ops()
>see struct eth_dev_ops::pool_ops_supported

Sounds like a good suggestion, I'll investigate and see how to implement it.

Thanks,
Xiaolong
>
>/Jerin
  
David Marchand March 20, 2019, 9:23 a.m. UTC | #3
On Wed, Mar 20, 2019 at 12:37 AM Jerin Jacob Kollanukkaran <
jerinj@marvell.com> wrote:

> On Tue, 2019-03-19 at 15:12 +0800, Xiaolong Ye wrote:
> > When create rte_mempool, flags can be parsed from command line.
> > Now, it is possible for testpmd to create a af_xdp friendly
> > mempool (which enable zero copy).
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
> > ---
> >  app/test-pmd/parameters.c             | 12 ++++++++++++
> >  app/test-pmd/testpmd.c                | 17 ++++++++++-------
> >  app/test-pmd/testpmd.h                |  1 +
> >  doc/guides/testpmd_app_ug/run_app.rst |  4 ++++
>
> If I understand it correctly, The user needs to change all the
> application in order to avail zero copy feature of XDP.
>
> If so,
>
> How about creating wrapper mempool driver for xdp at drivers/mempool/?
> and mempool's best mempool feature to select the required mempool
> driver for XDP at runtime without changing the apps.
>
> see rte_mbuf_best_mempool_ops()
> see struct eth_dev_ops::pool_ops_supported
>

Glad to read this, I was under the same impression :-)
  
Xiaolong Ye March 20, 2019, 3:22 p.m. UTC | #4
On 03/20, David Marchand wrote:
>On Wed, Mar 20, 2019 at 12:37 AM Jerin Jacob Kollanukkaran <
>jerinj@marvell.com> wrote:
>
>> On Tue, 2019-03-19 at 15:12 +0800, Xiaolong Ye wrote:
>> > When create rte_mempool, flags can be parsed from command line.
>> > Now, it is possible for testpmd to create a af_xdp friendly
>> > mempool (which enable zero copy).
>> >
>> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>> > Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
>> > ---
>> >  app/test-pmd/parameters.c             | 12 ++++++++++++
>> >  app/test-pmd/testpmd.c                | 17 ++++++++++-------
>> >  app/test-pmd/testpmd.h                |  1 +
>> >  doc/guides/testpmd_app_ug/run_app.rst |  4 ++++
>>
>> If I understand it correctly, The user needs to change all the
>> application in order to avail zero copy feature of XDP.
>>
>> If so,
>>
>> How about creating wrapper mempool driver for xdp at drivers/mempool/?
>> and mempool's best mempool feature to select the required mempool
>> driver for XDP at runtime without changing the apps.
>>
>> see rte_mbuf_best_mempool_ops()
>> see struct eth_dev_ops::pool_ops_supported
>>
>
>Glad to read this, I was under the same impression :-)

We plan to separate this into another patchset, which may target the next release.

Thanks,
Xiaolong
>
>
>-- 
>David Marchand
  

Patch

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 38b419767..9d5be0007 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -61,6 +61,7 @@  usage(char* progname)
 	       "--tx-first | --stats-period=PERIOD | "
 	       "--coremask=COREMASK --portmask=PORTMASK --numa "
 	       "--mbuf-size= | --total-num-mbufs= | "
+	       "--mp-flags= | "
 	       "--nb-cores= | --nb-ports= | "
 #ifdef RTE_LIBRTE_CMDLINE
 	       "--eth-peers-configfile= | "
@@ -105,6 +106,7 @@  usage(char* progname)
 	printf("  --socket-num=N: set socket from which all memory is allocated "
 	       "in NUMA mode.\n");
 	printf("  --mbuf-size=N: set the data size of mbuf to N bytes.\n");
+	printf("  --mp-flags=N: set the flags when create mbuf memory pool.\n");
 	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
 	       "in mbuf pools.\n");
 	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -585,6 +587,7 @@  launch_args_parse(int argc, char** argv)
 		{ "ring-numa-config",           1, 0, 0 },
 		{ "socket-num",			1, 0, 0 },
 		{ "mbuf-size",			1, 0, 0 },
+		{ "mp-flags",			1, 0, 0 },
 		{ "total-num-mbufs",		1, 0, 0 },
 		{ "max-pkt-len",		1, 0, 0 },
 		{ "pkt-filter-mode",            1, 0, 0 },
@@ -811,6 +814,15 @@  launch_args_parse(int argc, char** argv)
 					rte_exit(EXIT_FAILURE,
 						 "mbuf-size should be > 0 and < 65536\n");
 			}
+			if (!strcmp(lgopts[opt_idx].name, "mp-flags")) {
+				n = atoi(optarg);
+				if (n > 0 && n <= 0xFFFF)
+					mp_flags = (uint16_t)n;
+				else
+					rte_exit(EXIT_FAILURE,
+						 "mp-flags should be > 0 and < 65536\n");
+			}
+
 			if (!strcmp(lgopts[opt_idx].name, "total-num-mbufs")) {
 				n = atoi(optarg);
 				if (n > 1024)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d9d0c16d4..eb46dfa53 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -195,6 +195,7 @@  uint32_t burst_tx_delay_time = BURST_TX_WAIT_US;
 uint32_t burst_tx_retry_num = BURST_TX_RETRIES;
 
 uint16_t mbuf_data_size = DEFAULT_MBUF_DATA_SIZE; /**< Mbuf data space size. */
+uint16_t mp_flags = 0; /**< flags parsed when create mempool */
 uint32_t param_total_num_mbufs = 0;  /**< number of mbufs in all pools - if
                                       * specified on command-line. */
 uint16_t stats_period; /**< Period to show statistics (disabled by default) */
@@ -834,6 +835,7 @@  setup_extmem(uint32_t nb_mbufs, uint32_t mbuf_sz, bool huge)
  */
 static void
 mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
+		 unsigned int flags,
 		 unsigned int socket_id)
 {
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
@@ -853,8 +855,8 @@  mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 			/* wrapper to rte_mempool_create() */
 			TESTPMD_LOG(INFO, "preferred mempool ops selected: %s\n",
 					rte_mbuf_best_mempool_ops());
-			rte_mp = rte_pktmbuf_pool_create(pool_name, nb_mbuf,
-				mb_mempool_cache, 0, mbuf_seg_size, socket_id);
+			rte_mp = rte_pktmbuf_pool_create_with_flags(pool_name, nb_mbuf,
+				mb_mempool_cache, 0, mbuf_seg_size, flags, socket_id);
 			break;
 		}
 	case MP_ALLOC_ANON:
@@ -891,8 +893,8 @@  mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
 
 			TESTPMD_LOG(INFO, "preferred mempool ops selected: %s\n",
 					rte_mbuf_best_mempool_ops());
-			rte_mp = rte_pktmbuf_pool_create(pool_name, nb_mbuf,
-					mb_mempool_cache, 0, mbuf_seg_size,
+			rte_mp = rte_pktmbuf_pool_create_with_flags(pool_name, nb_mbuf,
+					mb_mempool_cache, 0, mbuf_seg_size, flags,
 					heap_socket);
 			break;
 		}
@@ -1128,13 +1130,14 @@  init_config(void)
 
 		for (i = 0; i < num_sockets; i++)
 			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
-					 socket_ids[i]);
+					 mp_flags, socket_ids[i]);
 	} else {
 		if (socket_num == UMA_NO_CONFIG)
-			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, 0);
+			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
+					 mp_flags, 0);
 		else
 			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
-						 socket_num);
+					 mp_flags, socket_num);
 	}
 
 	init_port_config();
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index fa4887853..3ddb70e3e 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -408,6 +408,7 @@  extern uint8_t dcb_config;
 extern uint8_t dcb_test;
 
 extern uint16_t mbuf_data_size; /**< Mbuf data space size. */
+extern uint16_t mp_flags;  /**< flags for mempool creation. */
 extern uint32_t param_total_num_mbufs;
 
 extern uint16_t stats_period;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 4495ed038..bafb9c493 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -392,6 +392,10 @@  The commandline options are:
     * xmemhuge: create and populate mempool using externally and anonymously
       allocated hugepage area
 
+*   ``--mp-flags=<N>``
+
+    Select mempool allocation flag.
+
 *   ``--noisy-tx-sw-buffer-size``
 
     Set the number of maximum elements  of the FIFO queue to be created