[v2,04/11] examples/l3fwd: add ethdev setup based on eventdev

Message ID 20191204144345.5736-5-pbhagavatula@marvell.com (mailing list archive)
State Superseded, archived
Delegated to: Jerin Jacob
Series: example/l3fwd: introduce event device support

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Pavan Nikhilesh Bhagavatula Dec. 4, 2019, 2:43 p.m. UTC
  From: Sunil Kumar Kori <skori@marvell.com>

Add Ethernet port Rx/Tx queue setup for the event device; these queues are
later used for setting up the event eth Rx/Tx adapters.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/l3fwd.h       |  10 +++
 examples/l3fwd/l3fwd_event.c | 129 ++++++++++++++++++++++++++++++++++-
 examples/l3fwd/l3fwd_event.h |   2 +-
 examples/l3fwd/main.c        |  15 ++--
 4 files changed, 144 insertions(+), 12 deletions(-)
  

Comments

Nipun Gupta Dec. 27, 2019, 1:33 p.m. UTC | #1
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Wednesday, December 4, 2019 8:14 PM
> To: jerinj@marvell.com; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> Kam <orika@mellanox.com>; Bruce Richardson
> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> From: Sunil Kumar Kori <skori@marvell.com>
> 
> Add Ethernet port Rx/Tx queue setup for the event device; these queues are
> later used for setting up the event eth Rx/Tx adapters.
> 
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
>  examples/l3fwd/l3fwd.h       |  10 +++
>  examples/l3fwd/l3fwd_event.c | 129 ++++++++++++++++++++++++++++++++++-
>  examples/l3fwd/l3fwd_event.h |   2 +-
>  examples/l3fwd/main.c        |  15 ++--
>  4 files changed, 144 insertions(+), 12 deletions(-)
> 

<snip>

> +
> +		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
> +						dev_info.flow_type_rss_offloads;
> +		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
> +				port_conf->rx_adv_conf.rss_conf.rss_hf) {
> +			printf("Port %u modified RSS hash function "
> +			       "based on hardware support,"
> +			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
> +			       port_id,
> +			       port_conf->rx_adv_conf.rss_conf.rss_hf,
> +			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
> +		}

We are using 1 queue, but using the RSS hash function?

> +
> +		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> +		if (ret < 0)
> +			rte_exit(EXIT_FAILURE,
> +				 "Cannot configure device: err=%d, port=%d\n",
> +				 ret, port_id);
> +

We should be using number of RX queues as per the config option provided in the arguments.
L3fwd is supposed to support multiple queues. Right?

Regards,
Nipun
  
Pavan Nikhilesh Bhagavatula Dec. 29, 2019, 3:42 p.m. UTC | #2
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of
>> pbhagavatula@marvell.com
>> Sent: Wednesday, December 4, 2019 8:14 PM
>> To: jerinj@marvell.com; Marko Kovacevic <marko.kovacevic@intel.com>;
>> Ori Kam <orika@mellanox.com>; Bruce Richardson
>> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
>> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
>> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
>> Pavan Nikhilesh <pbhagavatula@marvell.com>
>> Cc: dev@dpdk.org
>> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
>> based on eventdev
>>
>> From: Sunil Kumar Kori <skori@marvell.com>
>>
>> Add Ethernet port Rx/Tx queue setup for the event device; these queues
>> are later used for setting up the event eth Rx/Tx adapters.
>>
>> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
>> ---
>>  examples/l3fwd/l3fwd.h       |  10 +++
>>  examples/l3fwd/l3fwd_event.c | 129 ++++++++++++++++++++++++++++++++++-
>>  examples/l3fwd/l3fwd_event.h |   2 +-
>>  examples/l3fwd/main.c        |  15 ++--
>>  4 files changed, 144 insertions(+), 12 deletions(-)
>>
>
><snip>
>
>> +
>> +		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
>> +						dev_info.flow_type_rss_offloads;
>> +		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
>> +				port_conf->rx_adv_conf.rss_conf.rss_hf) {
>> +			printf("Port %u modified RSS hash function "
>> +			       "based on hardware support,"
>> +			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
>> +			       port_id,
>> +			       port_conf->rx_adv_conf.rss_conf.rss_hf,
>> +			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
>> +		}
>
>We are using 1 queue, but using the RSS hash function?

rte_event::flow_id, which uniquely identifies a given flow, is generated using
the RSS hash function on the required fields in the packet.
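
For illustration only (the adapter id, event queue id, and helper name below
are assumptions, not part of this patch), an Rx adapter queue configuration
that leaves flow id generation to the adapter could look like:

#include <rte_event_eth_rx_adapter.h>

/* Minimal sketch: attach Rx queue 0 of an ethdev to an already-created
 * event eth Rx adapter. With no flow-id override set in rx_queue_flags,
 * the adapter derives rte_event::flow_id from the packet's RSS hash,
 * which is why the port is configured with RSS despite the single Rx queue.
 */
static int
attach_rx_queue(uint8_t adapter_id, uint16_t eth_port_id, uint8_t ev_queue_id)
{
	struct rte_event_eth_rx_adapter_queue_conf qconf = {
		.ev = {
			.queue_id = ev_queue_id,
			.sched_type = RTE_SCHED_TYPE_ATOMIC,
			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
		},
	};

	return rte_event_eth_rx_adapter_queue_add(adapter_id, eth_port_id,
						  0 /* Rx queue id */, &qconf);
}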

>
>> +
>> +		ret = rte_eth_dev_configure(port_id, 1, 1,
>&local_port_conf);
>> +		if (ret < 0)
>> +			rte_exit(EXIT_FAILURE,
>> +				 "Cannot configure device: err=%d,
>> port=%d\n",
>> +				 ret, port_id);
>> +
>
>We should be using number of RX queues as per the config option
>provided in the arguments.
>L3fwd is supposed to support multiple queues. Right?

The entire premise of using the event device is to showcase packet scheduling to cores
without the need for splitting packets across multiple queues.

Queue config is ignored when event mode is selected.
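
A rough sketch of that model (the worker count, queue id, and helper name are
illustrative, not from this patch): a single event queue is linked to one
event port per worker core, and the event device scheduler distributes flows.

#include <rte_debug.h>
#include <rte_eventdev.h>

/* Minimal sketch: link the single event queue to one event port per
 * worker core; flow distribution across cores is then done by the
 * event device scheduler instead of by multiple ethdev Rx queues.
 */
static void
link_worker_ports(uint8_t dev_id, uint8_t ev_queue_id, uint8_t nb_workers)
{
	uint8_t port_id;

	for (port_id = 0; port_id < nb_workers; port_id++) {
		/* NULL priorities => normal priority for this link */
		if (rte_event_port_link(dev_id, port_id, &ev_queue_id,
					NULL, 1) != 1)
			rte_exit(EXIT_FAILURE,
				 "Failed to link event port %u\n", port_id);
	}
}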
 
>
>Regards,
>Nipun
>

Regards,
Pavan.
  
Nipun Gupta Dec. 30, 2019, 7:40 a.m. UTC | #3
Hi Pavan,

> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Sunday, December 29, 2019 9:12 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> Kam <orika@mellanox.com>; Bruce Richardson
> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> 
> >> -----Original Message-----
> >> From: dev <dev-bounces@dpdk.org> On Behalf Of
> >> pbhagavatula@marvell.com
> >> Sent: Wednesday, December 4, 2019 8:14 PM
> >> To: jerinj@marvell.com; Marko Kovacevic <marko.kovacevic@intel.com>;
> >> Ori Kam <orika@mellanox.com>; Bruce Richardson
> >> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> >> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> >> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> >> Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> Cc: dev@dpdk.org
> >> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> >> based on eventdev
> >>
> >> From: Sunil Kumar Kori <skori@marvell.com>
> >>
> >> Add Ethernet port Rx/Tx queue setup for the event device; these queues
> >> are later used for setting up the event eth Rx/Tx adapters.
> >>
> >> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> >> ---
> >>  examples/l3fwd/l3fwd.h       |  10 +++
> >>  examples/l3fwd/l3fwd_event.c | 129 ++++++++++++++++++++++++++++++++++-
> >>  examples/l3fwd/l3fwd_event.h |   2 +-
> >>  examples/l3fwd/main.c        |  15 ++--
> >>  4 files changed, 144 insertions(+), 12 deletions(-)
> >>
> >
> ><snip>
> >
> >> +
> >> +		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
> >> +						dev_info.flow_type_rss_offloads;
> >> +		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
> >> +				port_conf->rx_adv_conf.rss_conf.rss_hf) {
> >> +			printf("Port %u modified RSS hash function "
> >> +			       "based on hardware support,"
> >> +			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
> >> +			       port_id,
> >> +			       port_conf->rx_adv_conf.rss_conf.rss_hf,
> >> +			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
> >> +		}
> >
> >We are using 1 queue, but using the RSS hash function?
>
> rte_event::flow_id, which uniquely identifies a given flow, is generated using
> the RSS hash function on the required fields in the packet.

Okay. Got it.

> 
> >
> >> +
> >> +		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> >> +		if (ret < 0)
> >> +			rte_exit(EXIT_FAILURE,
> >> +				 "Cannot configure device: err=%d, port=%d\n",
> >> +				 ret, port_id);
> >> +
> >
> >We should be using number of RX queues as per the config option
> >provided in the arguments.
> >L3fwd is supposed to support multiple queues. Right?
> 
> The entire premise of using the event device is to showcase packet scheduling to
> cores
> without the need for splitting packets across multiple queues.
> 
> Queue config is ignored when event mode is selected.

For atomic queues, we have a single queue providing packets to a single core at a time till processing on that core is completed, irrespective of the flows on that hardware queue.
And multiple queues are required to distribute separate packets on separate cores, with these atomic queues maintaining the ordering and not scheduling on another core until the processing core has completed its job.
To have this solution generic, we should also take a config parameter - (port, number of queues) - to enable multiple Ethernet RX queues.
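
As a reference point, a hedged sketch of the kind of atomic event queue being
discussed (the function and variable names are illustrative):

#include <rte_eventdev.h>

/* Minimal sketch: configure an event queue as atomic. Per the eventdev
 * spec, atomicity is per (queue, flow_id) pair; the behaviour described
 * above - a whole hardware queue sticking to one event port until
 * processing completes - is a property of the hardware, which is why
 * extra ethdev Rx queues restore parallelism there.
 */
static int
setup_atomic_queue(uint8_t dev_id, uint8_t queue_id)
{
	struct rte_event_queue_conf qconf;
	int ret;

	ret = rte_event_queue_default_conf_get(dev_id, queue_id, &qconf);
	if (ret < 0)
		return ret;

	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
	return rte_event_queue_setup(dev_id, queue_id, &qconf);
}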

Regards,
Nipun

> 
> >
> >Regards,
> >Nipun
> >
> 
> Regards,
> Pavan.
  
Pavan Nikhilesh Bhagavatula Jan. 2, 2020, 6:21 a.m. UTC | #4
>> >> +		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
>> >> +		if (ret < 0)
>> >> +			rte_exit(EXIT_FAILURE,
>> >> +				 "Cannot configure device: err=%d, port=%d\n",
>> >> +				 ret, port_id);
>> >> +
>> >
>> >We should be using number of RX queues as per the config option
>> >provided in the arguments.
>> >L3fwd is supposed to support multiple queues. Right?
>>
>> The entire premise of using the event device is to showcase packet
>> scheduling to cores without the need for splitting packets across
>> multiple queues.
>>
>> Queue config is ignored when event mode is selected.
>
>For atomic queues, we have a single queue providing packets to a single
>core at a time till processing on that core is completed, irrespective of
>the flows on that hardware queue.
>And multiple queues are required to distribute separate packets on
>separate cores, with these atomic queues maintaining the ordering and
>not scheduling on another core until the processing core has completed
>its job.
>To have this solution generic, we should also take a config parameter -
>(port, number of queues) - to enable multiple Ethernet RX queues.
>

Not sure I follow: we connect the Rx queue to an event queue, which is then linked to multiple event ports that are polled
by the respective cores.
How would increasing Rx queues help? Distributing flows from a single event queue to multiple event ports is the responsibility
of the event device as per the spec.
Does DPAA/2 function differently?

Regards,
Pavan.

>Regards,
>Nipun
>
>>
>> >
>> >Regards,
>> >Nipun
>> >
>>
>> Regards,
>> Pavan.
  
Nipun Gupta Jan. 2, 2020, 8:49 a.m. UTC | #5
> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Thursday, January 2, 2020 11:52 AM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> Kam <orika@mellanox.com>; Bruce Richardson
> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> >> >> +		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> >> >> +		if (ret < 0)
> >> >> +			rte_exit(EXIT_FAILURE,
> >> >> +				 "Cannot configure device: err=%d, port=%d\n",
> >> >> +				 ret, port_id);
> >> >> +
> >> >
> >> >We should be using number of RX queues as per the config option
> >> >provided in the arguments.
> >> >L3fwd is supposed to support multiple queues. Right?
> >>
> >> The entire premise of using the event device is to showcase packet
> >> scheduling to cores without the need for splitting packets across
> >> multiple queues.
> >>
> >> Queue config is ignored when event mode is selected.
> >
> >For atomic queues, we have a single queue providing packets to a single
> >core at a time till processing on that core is completed, irrespective of
> >the flows on that hardware queue.
> >And multiple queues are required to distribute separate packets on
> >separate cores, with these atomic queues maintaining the ordering and
> >not scheduling on another core until the processing core has completed
> >its job.
> >To have this solution generic, we should also take a config parameter -
> >(port, number of queues) - to enable multiple Ethernet RX queues.
> >
> 
> Not sure I follow: we connect the Rx queue to an event queue, which is then
> linked to multiple event ports that are polled by the respective cores.

This is what we support too, but in the atomic-queue case the scenario gets a little complex.
Each atomic queue can be scheduled to only one event port at a time, until all the events from
that event port are processed; only then can it move to another event port.

To have separate event ports process packets at the same time in the atomic scenario, multiple queues
are required. As l3fwd supports multiple queues, it seems legitimate to add the support.

Thanks,
Nipun

> How would increasing Rx queues help? Distributing flows from a single event
> queue to multiple event ports is the responsibility of the event device as
> per the spec.
> Does DPAA/2 function differently?
> 
> Regards,
> Pavan.
> 
> >Regards,
> >Nipun
> >
> >>
> >> >
> >> >Regards,
> >> >Nipun
> >> >
> >>
> >> Regards,
> >> Pavan.
  
Jerin Jacob Jan. 2, 2020, 9:33 a.m. UTC | #6
On Thu, Jan 2, 2020 at 2:20 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > Sent: Thursday, January 2, 2020 11:52 AM
> > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> > Kam <orika@mellanox.com>; Bruce Richardson
> > <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> > <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > Hemant Agrawal <hemant.agrawal@nxp.com>
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > based on eventdev
> >
> > >> >> +               ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> > >> >> +               if (ret < 0)
> > >> >> +                       rte_exit(EXIT_FAILURE,
> > >> >> +                                "Cannot configure device: err=%d, port=%d\n",
> > >> >> +                                ret, port_id);
> > >> >> +
> > >> >
> > >> >We should be using number of RX queues as per the config option
> > >> >provided in the arguments.
> > >> >L3fwd is supposed to support multiple queues. Right?
> > >>
> > >> The entire premise of using the event device is to showcase packet
> > >> scheduling to cores without the need for splitting packets across
> > >> multiple queues.
> > >>
> > >> Queue config is ignored when event mode is selected.
> > >
> > >For atomic queues, we have a single queue providing packets to a single
> > >core at a time till processing on that core is completed, irrespective of
> > >the flows on that hardware queue.
> > >And multiple queues are required to distribute separate packets on
> > >separate cores, with these atomic queues maintaining the ordering and
> > >not scheduling on another core until the processing core has completed
> > >its job.
> > >To have this solution generic, we should also take a config parameter -
> > >(port, number of queues) - to enable multiple Ethernet RX queues.
> > >
> >
> > Not sure I follow: we connect the Rx queue to an event queue, which is then
> > linked to multiple event ports that are polled by the respective cores.
>
> This is what we support too, but in the atomic-queue case the scenario gets a little complex.
> Each atomic queue can be scheduled to only one event port at a time, until all the events from
> that event port are processed; only then can it move to another event port.

This would make it a poll mode. We might as well use a normal PMD + RSS
for the same instead, i.e., use l3fwd in poll mode. It will be the same in
terms of performance. Right?

>
> To have separate event ports process packets at the same time in the atomic scenario, multiple queues
> are required. As l3fwd supports multiple queues, it seems legitimate to add the support.
>
> Thanks,
> Nipun
>
> > How would increasing Rx queues help? Distributing flows from a single event
> > queue to multiple event ports is the responsibility of the event device as
> > per the spec.
> > Does DPAA/2 function differently?
> >
> > Regards,
> > Pavan.
> >
> > >Regards,
> > >Nipun
> > >
> > >>
> > >> >
> > >> >Regards,
> > >> >Nipun
> > >> >
> > >>
> > >> Regards,
> > >> Pavan.
  
Nipun Gupta Jan. 3, 2020, 9:06 a.m. UTC | #7
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, January 2, 2020 3:04 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>; Bruce
> Richardson <bruce.richardson@intel.com>; Radu Nicolau
> <radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; Tomasz
> Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> On Thu, Jan 2, 2020 at 2:20 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > > Sent: Thursday, January 2, 2020 11:52 AM
> > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > > <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> > > Kam <orika@mellanox.com>; Bruce Richardson
> > > <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > > Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> > > <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > > Hemant Agrawal <hemant.agrawal@nxp.com>
> > > Cc: dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > > based on eventdev
> > >
> > > >> >> +               ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> > > >> >> +               if (ret < 0)
> > > >> >> +                       rte_exit(EXIT_FAILURE,
> > > >> >> +                                "Cannot configure device: err=%d, port=%d\n",
> > > >> >> +                                ret, port_id);
> > > >> >> +
> > > >> >
> > > >> >We should be using number of RX queues as per the config option
> > > >> >provided in the arguments.
> > > >> >L3fwd is supposed to support multiple queues. Right?
> > > >>
> > > >> The entire premise of using the event device is to showcase packet
> > > >> scheduling to cores without the need for splitting packets across
> > > >> multiple queues.
> > > >>
> > > >> Queue config is ignored when event mode is selected.
> > > >
> > > >For atomic queues, we have a single queue providing packets to a single
> > > >core at a time till processing on that core is completed, irrespective of
> > > >the flows on that hardware queue.
> > > >And multiple queues are required to distribute separate packets on
> > > >separate cores, with these atomic queues maintaining the ordering and
> > > >not scheduling on another core until the processing core has completed
> > > >its job.
> > > >To have this solution generic, we should also take a config parameter -
> > > >(port, number of queues) - to enable multiple Ethernet RX queues.
> > > >
> > >
> > > Not sure I follow: we connect the Rx queue to an event queue, which is then
> > > linked to multiple event ports that are polled by the respective cores.
> >
> > This is what we support too, but in the atomic-queue case the scenario gets
> > a little complex.
> > Each atomic queue can be scheduled to only one event port at a time, until
> > all the events from that event port are processed; only then can it move to
> > another event port.
> 
> This would make it a poll mode. We might as well use a normal PMD + RSS
> for the same instead, i.e., use l3fwd in poll mode. It will be the same in
> terms of performance. Right?

We do not need to have a complete config, but we can have a parameter for the number of RX
queues per port. We will send a patch on top of this to support the same.
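
Such a follow-up could look roughly like the following (the option name and
parser below are assumptions for illustration, not a committed interface):

#include <stdint.h>
#include <stdlib.h>

#include <rte_debug.h>

/* Hypothetical parser for a per-port Rx queue count argument, e.g.
 * "--event-eth-rxqs=4"; the real flag name and range may differ in the
 * follow-up patch.
 */
static uint8_t
parse_event_eth_rx_queues(const char *arg)
{
	char *end = NULL;
	unsigned long nb_rx_queues = strtoul(arg, &end, 10);

	if (end == NULL || *end != '\0' ||
	    nb_rx_queues == 0 || nb_rx_queues > UINT8_MAX)
		rte_exit(EXIT_FAILURE, "Invalid Rx queue count: %s\n", arg);

	return (uint8_t)nb_rx_queues;
}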

Thanks,
Nipun

> 
> >
> > To have separate event ports process packets at the same time in the atomic
> > scenario, multiple queues are required. As l3fwd supports multiple queues,
> > it seems legitimate to add the support.
> >
> > Thanks,
> > Nipun
> >
> > > How would increasing Rx queues help? Distributing flows from a single event
> > > queue to multiple event ports is the responsibility of the event device as
> > > per the spec.
> > > Does DPAA/2 function differently?
> > >
> > > Regards,
> > > Pavan.
> > >
> > > >Regards,
> > > >Nipun
> > > >
> > > >>
> > > >> >
> > > >> >Regards,
> > > >> >Nipun
> > > >> >
> > > >>
> > > >> Regards,
> > > >> Pavan.
  
Jerin Jacob Jan. 3, 2020, 9:09 a.m. UTC | #8
On Fri, Jan 3, 2020 at 2:36 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, January 2, 2020 3:04 PM
> > To: Nipun Gupta <nipun.gupta@nxp.com>
> > Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin Jacob
> > Kollanukkaran <jerinj@marvell.com>; Marko Kovacevic
> > <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>; Bruce
> > Richardson <bruce.richardson@intel.com>; Radu Nicolau
> > <radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; Tomasz
> > Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > based on eventdev
> >
> > On Thu, Jan 2, 2020 at 2:20 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > > > Sent: Thursday, January 2, 2020 11:52 AM
> > > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > > > <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> > > > Kam <orika@mellanox.com>; Bruce Richardson
> > > > <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > > > Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> > > > <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > > > Hemant Agrawal <hemant.agrawal@nxp.com>
> > > > Cc: dev@dpdk.org
> > > > Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > > > based on eventdev
> > > >
> > > > >> >> +               ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> > > > >> >> +               if (ret < 0)
> > > > >> >> +                       rte_exit(EXIT_FAILURE,
> > > > >> >> +                                "Cannot configure device: err=%d, port=%d\n",
> > > > >> >> +                                ret, port_id);
> > > > >> >> +
> > > > >> >
> > > > >> >We should be using number of RX queues as per the config option
> > > > >> >provided in the arguments.
> > > > >> >L3fwd is supposed to support multiple queues. Right?
> > > > >>
> > > > >> The entire premise of using the event device is to showcase packet
> > > > >> scheduling to cores without the need for splitting packets across
> > > > >> multiple queues.
> > > > >>
> > > > >> Queue config is ignored when event mode is selected.
> > > > >
> > > > >For atomic queues, we have a single queue providing packets to a single
> > > > >core at a time till processing on that core is completed, irrespective of
> > > > >the flows on that hardware queue.
> > > > >And multiple queues are required to distribute separate packets on
> > > > >separate cores, with these atomic queues maintaining the ordering and
> > > > >not scheduling on another core until the processing core has completed
> > > > >its job.
> > > > >To have this solution generic, we should also take a config parameter -
> > > > >(port, number of queues) - to enable multiple Ethernet RX queues.
> > > > >
> > > >
> > > > Not sure I follow: we connect the Rx queue to an event queue, which is then
> > > > linked to multiple event ports that are polled by the respective cores.
> > >
> > > This is what we support too, but in the atomic-queue case the scenario gets
> > > a little complex.
> > > Each atomic queue can be scheduled to only one event port at a time, until
> > > all the events from that event port are processed; only then can it move to
> > > another event port.
> >
> > This would make it a poll mode. We might as well use a normal PMD + RSS
> > for the same instead, i.e., use l3fwd in poll mode. It will be the same in
> > terms of performance. Right?
>
> We do not need to have a complete config, but we can have a parameter for the number of RX
> queues per port. We will send a patch on top of this to support the same.

Looks good to me.

>
> Thanks,
> Nipun
>
> >
> > >
> > > To have separate event ports process packets at the same time in the atomic
> > > scenario, multiple queues are required. As l3fwd supports multiple queues,
> > > it seems legitimate to add the support.
> > >
> > > Thanks,
> > > Nipun
> > >
> > > > How would increasing Rx queues help? Distributing flows from a single event
> > > > queue to multiple event ports is the responsibility of the event device as
> > > > per the spec.
> > > > Does DPAA/2 function differently?
> > > >
> > > > Regards,
> > > > Pavan.
> > > >
> > > > >Regards,
> > > > >Nipun
> > > > >
> > > > >>
> > > > >> >
> > > > >> >Regards,
> > > > >> >Nipun
> > > > >> >
> > > > >>
> > > > >> Regards,
> > > > >> Pavan.
  

Patch

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index cd17a41b3..6d16cde74 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -18,9 +18,16 @@ 
 #define NO_HASH_MULTI_LOOKUP 1
 #endif
 
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
 #define MAX_PKT_BURST     32
 #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
 
+#define MEMPOOL_CACHE_SIZE 256
 #define MAX_RX_QUEUE_PER_LCORE 16
 
 /*
@@ -175,6 +182,9 @@  is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)
 void
 print_usage(const char *prgname);
 
+int
+init_mem(uint16_t portid, unsigned int nb_mbuf);
+
 /* Function pointers for LPM or EM functionality. */
 void
 setup_lpm(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index a027e150d..b1ff8dc31 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -8,6 +8,14 @@ 
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+	char buf[RTE_ETHER_ADDR_FMT_SIZE];
+	rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+	printf("%s%s", name, buf);
+}
+
 static void
 parse_mode(const char *optarg)
 {
@@ -63,6 +71,122 @@  l3fwd_parse_eventdev_args(char **argv, int argc)
 	}
 }
 
+static void
+l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint16_t nb_ports = rte_eth_dev_count_avail();
+	uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+	uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+	unsigned int nb_lcores = rte_lcore_count();
+	struct rte_eth_conf local_port_conf;
+	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf txconf;
+	struct rte_eth_rxconf rxconf;
+	unsigned int nb_mbuf;
+	uint16_t port_id;
+	int32_t ret;
+
+	/* initialize all ports */
+	RTE_ETH_FOREACH_DEV(port_id) {
+		local_port_conf = *port_conf;
+		/* skip ports that are not enabled */
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0) {
+			printf("\nSkipping disabled port %d\n", port_id);
+			continue;
+		}
+
+		/* init port */
+		printf("Initializing port %d ... ", port_id);
+		fflush(stdout);
+		printf("Creating queues: nb_rxq=1 nb_txq=1...\n");
+
+		rte_eth_dev_info_get(port_id, &dev_info);
+		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+			local_port_conf.txmode.offloads |=
+						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+						dev_info.flow_type_rss_offloads;
+		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+				port_conf->rx_adv_conf.rss_conf.rss_hf) {
+			printf("Port %u modified RSS hash function "
+			       "based on hardware support,"
+			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+			       port_id,
+			       port_conf->rx_adv_conf.rss_conf.rss_hf,
+			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+		}
+
+		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot configure device: err=%d, port=%d\n",
+				 ret, port_id);
+
+		ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+						       &nb_txd);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot adjust number of descriptors: err=%d, "
+				 "port=%d\n", ret, port_id);
+
+		rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+		print_ethaddr(" Address:", &ports_eth_addr[port_id]);
+		printf(", ");
+		print_ethaddr("Destination:",
+			(const struct rte_ether_addr *)&dest_eth_addr[port_id]);
+		printf(", ");
+
+		/* prepare source MAC for each port. */
+		rte_ether_addr_copy(&ports_eth_addr[port_id],
+			(struct rte_ether_addr *)(val_eth + port_id) + 1);
+
+		/* init memory */
+		if (!evt_rsrc->per_port_pool) {
+			/* port_id = 0; this is *not* signifying the first port,
+			 * rather, it signifies that port_id is ignored.
+			 */
+			nb_mbuf = RTE_MAX(nb_ports * nb_rxd +
+					  nb_ports * nb_txd +
+					  nb_ports * nb_lcores *
+							MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(0, nb_mbuf);
+		} else {
+			nb_mbuf = RTE_MAX(nb_rxd + nb_rxd +
+					  nb_lcores * MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(port_id, nb_mbuf);
+		}
+		/* init one Rx queue per port */
+		rxconf = dev_info.default_rxconf;
+		rxconf.offloads = local_port_conf.rxmode.offloads;
+		if (!evt_rsrc->per_port_pool)
+			ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0,
+					&rxconf, evt_rsrc->pkt_pool[0][0]);
+		else
+			ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0,
+					&rxconf,
+					evt_rsrc->pkt_pool[port_id][0]);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "rte_eth_rx_queue_setup: err=%d, "
+				 "port=%d\n", ret, port_id);
+
+		/* init one Tx queue per port */
+		txconf = dev_info.default_txconf;
+		txconf.offloads = local_port_conf.txmode.offloads;
+		ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "rte_eth_tx_queue_setup: err=%d, "
+				 "port=%d\n", ret, port_id);
+	}
+}
+
 static void
 l3fwd_event_capability_setup(void)
 {
@@ -89,7 +213,7 @@  l3fwd_event_capability_setup(void)
 }
 
 void
-l3fwd_event_resource_setup(void)
+l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 
@@ -104,6 +228,9 @@  l3fwd_event_resource_setup(void)
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
 
+	/* Ethernet device configuration */
+	l3fwd_eth_dev_port_setup(port_conf);
+
 	/* Event device configuration */
 	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 5aac0b06c..cd36d99ae 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -103,7 +103,7 @@  l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
-void l3fwd_event_resource_setup(void);
+void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 19ca4483c..20df12748 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -47,12 +47,6 @@ 
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
-/*
- * Configurable number of RX/TX ring descriptors
- */
-#define RTE_TEST_RX_DESC_DEFAULT 1024
-#define RTE_TEST_TX_DESC_DEFAULT 1024
-
 #define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
 #define MAX_RX_QUEUE_PER_PORT 128
 
@@ -449,7 +443,6 @@  parse_eth_dest(const char *optarg)
 }
 
 #define MAX_JUMBO_PKT_LEN  9600
-#define MEMPOOL_CACHE_SIZE 256
 
 static const char short_options[] =
 	"p:"  /* portmask */
@@ -679,7 +672,7 @@  print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
 	printf("%s%s", name, buf);
 }
 
-static int
+int
 init_mem(uint16_t portid, unsigned int nb_mbuf)
 {
 	struct lcore_conf *qconf;
@@ -866,14 +859,16 @@  main(int argc, char **argv)
 	}
 
 	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
 	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup();
+	l3fwd_event_resource_setup(&port_conf);
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");