Message ID | 1465945686-142094-5-git-send-email-zhihong.wang@intel.com (mailing list archive) |
---|---|
State | Accepted, archived |
Delegated to: | Thomas Monjalon |
Headers |
From: Zhihong Wang <zhihong.wang@intel.com>
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com, thomas.monjalon@6wind.com, Zhihong Wang <zhihong.wang@intel.com>
Date: Tue, 14 Jun 2016 19:08:05 -0400
Message-Id: <1465945686-142094-5-git-send-email-zhihong.wang@intel.com>
In-Reply-To: <1465945686-142094-1-git-send-email-zhihong.wang@intel.com>
References: <1462488421-118990-1-git-send-email-zhihong.wang@intel.com> <1465945686-142094-1-git-send-email-zhihong.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
Commit Message
Zhihong Wang
June 14, 2016, 11:08 p.m. UTC
This patch removes the constraint in rxq handling when multiqueue is
enabled, so that all rxqs are handled.

Currently testpmd dedicates a core to each rxq; when there are fewer
cores than rxqs, the remaining rxqs are simply ignored, which causes
confusion and inconvenience.

One example: a Red Hat engineer was running a multiqueue test with 2
ports in the guest, each with 4 queues, and testpmd as the forwarding
engine in the guest. As usual he used 1 core for forwarding, and as a
result he only saw traffic from port 0 queue 0 to port 1 queue 0. A lot
of emails and quite some time were spent root-causing it, and of course
it was caused by this unreasonable testpmd behavior.

Moreover, even once this behavior is understood, testing the above case
still requires 8 cores in a single guest just to poll all the rxqs,
which is obviously too expensive.

We have met quite a lot of cases like this; one recent example:
http://openvswitch.org/pipermail/dev/2016-June/072110.html
Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
app/test-pmd/config.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
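To make the effect of the change concrete, below is a minimal standalone sketch (illustration only, not testpmd code; the variable names and the round-robin lcore assignment are assumptions made for brevity) of the stream bookkeeping once the stream count is no longer capped by the forwarding core count: with 2 ports, 4 rxqs each and a single forwarding core, all 8 (port, queue) streams get polled instead of just one.

#include <stdio.h>

/*
 * Illustration only: not testpmd code. It mirrors the idea that, after
 * this patch, the number of forwarding streams is nb_ports * nb_rxq
 * regardless of how many forwarding cores are available.
 */
int main(void)
{
	unsigned int nb_ports = 2;   /* ports in the guest */
	unsigned int nb_rxq = 4;     /* rx queues per port */
	unsigned int nb_lcores = 1;  /* forwarding cores */
	unsigned int nb_streams = nb_ports * nb_rxq;
	unsigned int sm_id;

	for (sm_id = 0; sm_id < nb_streams; sm_id++) {
		unsigned int port = sm_id / nb_rxq;
		unsigned int queue = sm_id % nb_rxq;
		/*
		 * testpmd's own distribution differs (each forwarding
		 * core gets a contiguous block of streams); plain
		 * round-robin is used here only to keep the sketch short.
		 */
		unsigned int lcore = sm_id % nb_lcores;

		printf("stream %u: port %u queue %u -> lcore %u\n",
		       sm_id, port, queue, lcore);
	}
	return 0;
}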
Comments
On Tue, Jun 14, 2016 at 07:08:05PM -0400, Zhihong Wang wrote:
> This patch removes the constraint in rxq handling when multiqueue is
> enabled, so that all rxqs are handled.
> [...]

Hi Zhihong,

It seems this commit introduces a bug in pkt_burst_transmit(); it only
occurs when the number of cores present in the coremask is greater than
the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.

Port 0 Link Up - speed 40000 Mbps - full-duplex
Port 1 Link Up - speed 40000 Mbps - full-duplex
Done
testpmd> start tx_first
  io packet forwarding - CRC stripping disabled - packets/burst=64
  nb forwarding cores=10 - nb forwarding ports=2
  RX queues=4 - RX desc=256 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX queues=4 - TX desc=256 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
Segmentation fault (core dumped)

If I start testpmd with a coremask with at most as many cores as queues,
everything works well (i.e. coremask=0xff0, or 0xf00).

Are you able to reproduce the same issue?
Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204).

Regards,
Nélio Laranjeiro
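For reference, the arithmetic behind this report can be sketched as follows. The concrete numbers come from the output above (coremask=0xffe, 10 forwarding cores — presumably one core stays as the main lcore — and 2 ports with 4 rxqs each, i.e. 8 streams); the even split of streams with the remainder going to the first cores is an assumption for illustration, not a diagnosis of the crash. With the clamp removed, some forwarding cores end up with no stream at all, which is exactly the situation the deleted lines used to prevent.

#include <stdio.h>

/*
 * Sketch of the reported configuration: 10 forwarding cores sharing
 * 2 ports * 4 rxqs = 8 streams. The distribution policy shown here
 * (even split, remainder to the first cores) is an assumption.
 */
int main(void)
{
	unsigned int nb_fwd_lcores = 10;
	unsigned int nb_fwd_streams = 2 * 4;
	unsigned int per_lcore = nb_fwd_streams / nb_fwd_lcores; /* 0 */
	unsigned int extra = nb_fwd_streams % nb_fwd_lcores;     /* 8 */
	unsigned int lc_id;

	for (lc_id = 0; lc_id < nb_fwd_lcores; lc_id++)
		printf("lcore %u: %u stream(s)\n",
		       lc_id, per_lcore + (lc_id < extra ? 1 : 0));
	return 0;
}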
Hi Nelio,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Nélio Laranjeiro
> Sent: Monday, June 27, 2016 3:24 PM
> Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
>
> It seems this commit introduces a bug in pkt_burst_transmit(); it only
> occurs when the number of cores present in the coremask is greater than
> the number of queues, i.e. coremask=0xffe --txq=4 --rxq=4.
> [...]
> If I start testpmd with a coremask with at most as many cores as queues,
> everything works well (i.e. coremask=0xff0, or 0xf00).
>
> Are you able to reproduce the same issue?
> Note: It only occurs on dpdk/master branch (commit f2bb7ae1d204).

Thanks for reporting this. I was able to reproduce this issue and
sent a patch that should fix it. Could you verify it?
http://dpdk.org/dev/patchwork/patch/14430/

Thanks
Pablo
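The referenced patch is not shown in this thread, so the following is only an assumption about its shape, not the verified content of patch 14430: a one-sided guard that caps the forwarding core count at the stream count would remove the lcore/stream mismatch reported above without reintroducing the old rxq limitation. The field and type names are the existing testpmd ones from the diff in this patch.

/* Assumption only -- not the verified content of patch 14430. */
if (cur_fwd_config.nb_fwd_streams < cur_fwd_config.nb_fwd_lcores)
	cur_fwd_config.nb_fwd_lcores =
		(lcoreid_t)cur_fwd_config.nb_fwd_streams;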
Hi Pablo,

On Mon, Jun 27, 2016 at 10:36:38PM +0000, De Lara Guarch, Pablo wrote:
> Thanks for reporting this. I was able to reproduce this issue and
> sent a patch that should fix it. Could you verify it?
> http://dpdk.org/dev/patchwork/patch/14430/

I have tested it, it works, I will add a test report on the
corresponding email.

Thanks

--
Nélio Laranjeiro
6WIND
Thanks Nelio and Pablo!

> -----Original Message-----
> From: Nélio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> Sent: Tuesday, June 28, 2016 4:34 PM
> Subject: Re: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup
>
> > Thanks for reporting this. I was able to reproduce this issue and
> > sent a patch that should fix it. Could you verify it?
> > http://dpdk.org/dev/patchwork/patch/14430/
>
> I have tested it, it works, I will add a test report on the
> corresponding email.
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ede7c78..4719a08 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
 		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
-	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
-		cur_fwd_config.nb_fwd_streams =
-			(streamid_t)cur_fwd_config.nb_fwd_lcores;
-	else
-		cur_fwd_config.nb_fwd_lcores =
-			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
 
 	/* reinitialize forwarding streams */
 	init_fwd_streams();
 
 	setup_fwd_config_of_each_lcore(&cur_fwd_config);
 	rxp = 0; rxq = 0;
-	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
+	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
 		struct fwd_stream *fs;
 
 		fs = fwd_streams[lc_id];