[dpdk-dev,4/6] testpmd: handle all rxqs in rss setup
Commit Message
This patch removes a constraint in rxq handling when multiqueue is enabled,
so that all the rxqs are handled.
Currently testpmd forces a dedicated core for each rxq; when there are fewer
cores than rxqs, some rxqs are silently ignored, which causes confusion and
inconvenience.
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
app/test-pmd/config.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
Comments
2016-05-05 18:46, Zhihong Wang:
> This patch removes constraints in rxq handling when multiqueue is enabled
> to handle all the rxqs.
>
> Current testpmd forces a dedicated core for each rxq, some rxqs may be
> ignored when core number is less than rxq number, and that causes confusion
> and inconvenience.
I have the feeling that "constraints", "confusion" and "inconvenience"
should be explained in more detail.
Please give some examples with not enough and with too many cores. Thanks
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, May 25, 2016 5:42 PM
> To: Wang, Zhihong <zhihong.wang@intel.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH 4/6] testpmd: handle all rxqs in rss setup
>
> 2016-05-05 18:46, Zhihong Wang:
> > This patch removes constraints in rxq handling when multiqueue is enabled
> > to handle all the rxqs.
> >
> > Current testpmd forces a dedicated core for each rxq, some rxqs may be
> > ignored when core number is less than rxq number, and that causes confusion
> > and inconvenience.
>
> I have the feeling that "constraints", "confusion" and "inconvenience"
> should be explained in more detail.
> Please give some examples with not enough and with too many cores. Thanks
Sure, will add detailed description in v2 ;)
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wang, Zhihong
> Sent: Thursday, May 26, 2016 10:55 AM
> To: Thomas Monjalon <thomas.monjalon@6wind.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 4/6] testpmd: handle all rxqs in rss setup
>
>
>
> > -----Original Message-----
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > Sent: Wednesday, May 25, 2016 5:42 PM
> > To: Wang, Zhihong <zhihong.wang@intel.com>
> > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > Richardson, Bruce <bruce.richardson@intel.com>; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>
> > Subject: Re: [PATCH 4/6] testpmd: handle all rxqs in rss setup
> >
> > 2016-05-05 18:46, Zhihong Wang:
> > > This patch removes constraints in rxq handling when multiqueue is enabled
> > > to handle all the rxqs.
> > >
> > > Current testpmd forces a dedicated core for each rxq, some rxqs may be
> > > ignored when core number is less than rxq number, and that causes confusion
> > > and inconvenience.
> >
> > I have the feeling that "constraints", "confusion" and "inconvenience"
> > should be explained in more detail.
> > Please give some examples with not enough and with too many cores. Thanks
>
> Sure, will add detailed description in v2 ;)
V2 has been sent.
We're seeing a growing number of requests for help with this "confusion";
one recent example:
http://openvswitch.org/pipermail/dev/2016-June/072110.html
@@ -1193,19 +1193,13 @@ rss_fwd_config_setup(void)
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
 		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
-	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
-		cur_fwd_config.nb_fwd_streams =
-			(streamid_t)cur_fwd_config.nb_fwd_lcores;
-	else
-		cur_fwd_config.nb_fwd_lcores =
-			(lcoreid_t)cur_fwd_config.nb_fwd_streams;

 	/* reinitialize forwarding streams */
 	init_fwd_streams();

 	setup_fwd_config_of_each_lcore(&cur_fwd_config);
 	rxp = 0; rxq = 0;
-	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
+	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
 		struct fwd_stream *fs;

 		fs = fwd_streams[lc_id];