From patchwork Tue Jun 14 23:08:05 2016
X-Patchwork-Id: 13755
From: Zhihong Wang <zhihong.wang@intel.com>
To: dev@dpdk.org
Cc: konstantin.ananyev@intel.com, bruce.richardson@intel.com,
 pablo.de.lara.guarch@intel.com, thomas.monjalon@6wind.com,
 Zhihong Wang <zhihong.wang@intel.com>
Date: Tue, 14 Jun 2016 19:08:05 -0400
Message-Id: <1465945686-142094-5-git-send-email-zhihong.wang@intel.com>
In-Reply-To: <1465945686-142094-1-git-send-email-zhihong.wang@intel.com>
References: <1462488421-118990-1-git-send-email-zhihong.wang@intel.com>
 <1465945686-142094-1-git-send-email-zhihong.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/5] testpmd: handle all rxqs in rss setup

This patch removes constraints in rxq handling when multiqueue is
enabled, so that all the rxqs are handled.

testpmd currently forces a dedicated core for each rxq, so some rxqs
are ignored when there are fewer cores than rxqs, which causes
confusion and inconvenience.

One example: a Red Hat engineer was doing a multiqueue test with 2
ports in the guest, each with 4 queues, and testpmd as the forwarding
engine in the guest. As usual, he used 1 core for forwarding; as a
result, he saw traffic only from port 0 queue 0 to port 1 queue 0. A
lot of emails and quite some time were then spent root-causing it,
and of course the cause was this unreasonable testpmd behavior.

Moreover, even once this behavior is understood, testing the above
case still requires 8 cores for a single guest just to poll all the
rxqs, which is obviously too expensive.
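[Editor's illustration, not part of the patch: the arithmetic behind
the scenario above can be made concrete with a tiny standalone C
sketch. The names nb_ports, nb_q, and nb_lcores are local stand-ins
for the corresponding testpmd values, and the round-robin mapping is
an assumption for illustration, not the exact testpmd assignment.]

#include <stdio.h>

int main(void)
{
	/* The scenario from the commit message: 2 ports x 4 rxqs,
	 * but only 1 forwarding core available. */
	unsigned int nb_ports = 2, nb_q = 4, nb_lcores = 1;

	/* One stream per rxq; with the patch the stream count is
	 * no longer capped at the number of forwarding lcores. */
	unsigned int nb_streams = nb_ports * nb_q;
	unsigned int sm_id;

	/* Spread all streams over the available cores (round-robin
	 * here for illustration), so a single core polls all 8
	 * queues instead of just one. */
	for (sm_id = 0; sm_id < nb_streams; sm_id++)
		printf("stream %u: port %u rxq %u -> lcore %u\n",
		       sm_id, sm_id / nb_q, sm_id % nb_q,
		       sm_id % nb_lcores);
	return 0;
}

Compiled with any C compiler, this prints 8 streams all mapped to
lcore 0, which is exactly the situation the patched
rss_fwd_config_setup() lets a single forwarding core service.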
We have met quite a lot of similar cases; one recent example:
http://openvswitch.org/pipermail/dev/2016-June/072110.html

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
 app/test-pmd/config.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ede7c78..4719a08 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1199,19 +1199,13 @@ rss_fwd_config_setup(void)
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
 		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
-	if (cur_fwd_config.nb_fwd_streams > cur_fwd_config.nb_fwd_lcores)
-		cur_fwd_config.nb_fwd_streams =
-			(streamid_t)cur_fwd_config.nb_fwd_lcores;
-	else
-		cur_fwd_config.nb_fwd_lcores =
-			(lcoreid_t)cur_fwd_config.nb_fwd_streams;
 
 	/* reinitialize forwarding streams */
 	init_fwd_streams();
 
 	setup_fwd_config_of_each_lcore(&cur_fwd_config);
 	rxp = 0; rxq = 0;
-	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
+	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_streams; lc_id++) {
 		struct fwd_stream *fs;
 
 		fs = fwd_streams[lc_id];
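[Editor's usage note: with the cap removed, the scenario from the
commit message needs only one forwarding core. A hypothetical
invocation (the queue and core counts are assumptions matching the
example; EAL options and the number of bound ports depend on the
environment):

  ./testpmd -c 0x3 -n 4 -- -i --rxq=4 --txq=4 --nb-cores=1

With 2 ports bound, this should now create 8 forwarding streams
(2 ports x 4 rxqs), all polled by the single forwarding core, instead
of silently forwarding only queue 0 as before.]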