From patchwork Mon Apr 24 12:33:58 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Olivier Matz
X-Patchwork-Id: 23818
X-Original-To: patchwork@dpdk.org
Delivered-To: patchwork@dpdk.org
Received: from [92.243.14.124] (localhost [IPv6:::1])
        by dpdk.org (Postfix) with ESMTP id C94F85920;
        Mon, 24 Apr 2017 14:33:40 +0200 (CEST)
Received: from proxy.6wind.com (host.76.145.23.62.rev.coltfrance.com [62.23.145.76])
        by dpdk.org (Postfix) with ESMTP id D2F732B9B;
        Mon, 24 Apr 2017 14:33:37 +0200 (CEST)
Received: from glumotte.dev.6wind.com (unknown [10.16.0.195])
        by proxy.6wind.com (Postfix) with ESMTP id 099D6254C2;
        Mon, 24 Apr 2017 14:33:32 +0200 (CEST)
From: Olivier Matz
To: dev@dpdk.org, jingjing.wu@intel.com
Cc: bruce.richardson@intel.com, stable@dpdk.org
Date: Mon, 24 Apr 2017 14:33:58 +0200
Message-Id: <20170424123358.5959-2-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170424123358.5959-1-olivier.matz@6wind.com>
References: <20170424123358.5959-1-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH 2/2] app/testpmd: fix number of mbufs in pool
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

The number of mbufs in the pools is not consistent: it depends on the
options passed by the user and on the number of ports, especially in
numa mode when the number of mbufs is specified by the user.

When the user specifies the number of mbufs (per pool), it should
override the default value.

- before the patch

  ./build/app/testpmd -- -i
    : n=331456, size=2176, socket=0
    : n=331456, size=2176, socket=1

  ./build/app/testpmd -- --total-num-mbufs=8000 -i
    : n=256000, size=2176, socket=0
    : n=256000, size=2176, socket=1
    # BAD, should be n=8000 for each socket

  ./build/app/testpmd -- --no-numa -i
    : n=331456, size=2176, socket=0

  ./build/app/testpmd -- --no-numa --total-num-mbufs=8000 -i
    : n=8000, size=2176, socket=0

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- -i
    : n=331456, size=2176, socket=0
    : n=331456, size=2176, socket=1

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
      --total-num-mbufs=8000 -i
    : n=128000, size=2176, socket=0
    : n=128000, size=2176, socket=1
    # BAD, should be n=8000 for each socket

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- --no-numa -i
    : n=331456, size=2176, socket=0

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- --no-numa \
      --total-num-mbufs=8000 -i
    : n=8000, size=2176, socket=0

- after the patch

  ./build/app/testpmd -- -i
    : n=331456, size=2176, socket=0
    : n=331456, size=2176, socket=1

  ./build/app/testpmd -- --total-num-mbufs=8000 -i
    : n=8000, size=2176, socket=0
    : n=8000, size=2176, socket=1

  ./build/app/testpmd -- --no-numa -i
    : n=331456, size=2176, socket=0

  ./build/app/testpmd -- --no-numa --total-num-mbufs=8000 -i
    : n=8000, size=2176, socket=0

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- -i
    : n=331456, size=2176, socket=0
    : n=331456, size=2176, socket=1

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
      --total-num-mbufs=8000 -i
    : n=8000, size=2176, socket=0
    : n=8000, size=2176, socket=1

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- --no-numa -i
    : n=331456, size=2176, socket=0

  ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- --no-numa \
      --total-num-mbufs=8000 -i
    : n=8000, size=2176, socket=0
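For illustration only, a minimal standalone sketch of the arithmetic
behind the "BAD" figures above; it assumes the default build-time value
RTE_MAX_ETHPORTS == 32 and a single port (nb_ports == 1), neither of
which is stated in the log:

  #include <stdio.h>

  /* Hypothetical example, not part of the patch. */
  #define ASSUMED_RTE_MAX_ETHPORTS 32

  int main(void)
  {
          unsigned int param_total_num_mbufs = 8000; /* --total-num-mbufs=8000 */
          unsigned int nb_ports = 1;

          /* pre-patch numa path: the user value is divided by the number of
           * ports, then scaled by RTE_MAX_ETHPORTS when each per-socket pool
           * is created */
          unsigned int before = (param_total_num_mbufs / nb_ports) *
                  ASSUMED_RTE_MAX_ETHPORTS;

          /* post-patch: the user value is used directly as the per-pool size */
          unsigned int after = param_total_num_mbufs;

          printf("before: n=%u, after: n=%u\n", before, after); /* 256000 vs 8000 */
          return 0;
  }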
per device") CC: stable@dpdk.org Signed-off-by: Olivier Matz Acked-by: Jingjing Wu --- app/test-pmd/testpmd.c | 65 +++++++++++++++++++++----------------------------- 1 file changed, 27 insertions(+), 38 deletions(-) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index f61f31344..0c6a50ea3 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -543,34 +543,6 @@ init_config(void) fwd_lcores[lc_id]->cpuid_idx = lc_id; } - /* - * Create pools of mbuf. - * If NUMA support is disabled, create a single pool of mbuf in - * socket 0 memory by default. - * Otherwise, create a pool of mbuf in the memory of sockets 0 and 1. - * - * Use the maximum value of nb_rxd and nb_txd here, then nb_rxd and - * nb_txd can be configured at run time. - */ - if (param_total_num_mbufs) - nb_mbuf_per_pool = param_total_num_mbufs; - else { - nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX + (nb_lcores * mb_mempool_cache) - + RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST; - - if (!numa_support) - nb_mbuf_per_pool = - (nb_mbuf_per_pool * RTE_MAX_ETHPORTS); - } - - if (!numa_support) { - if (socket_num == UMA_NO_CONFIG) - mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, 0); - else - mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, - socket_num); - } - RTE_ETH_FOREACH_DEV(pid) { port = &ports[pid]; rte_eth_dev_info_get(pid, &port->dev_info); @@ -593,20 +565,37 @@ init_config(void) port->need_reconfig_queues = 1; } + /* + * Create pools of mbuf. + * If NUMA support is disabled, create a single pool of mbuf in + * socket 0 memory by default. + * Otherwise, create a pool of mbuf in the memory of sockets 0 and 1. + * + * Use the maximum value of nb_rxd and nb_txd here, then nb_rxd and + * nb_txd can be configured at run time. + */ + if (param_total_num_mbufs) + nb_mbuf_per_pool = param_total_num_mbufs; + else { + nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX + + (nb_lcores * mb_mempool_cache) + + RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST; + nb_mbuf_per_pool *= RTE_MAX_ETHPORTS; + } + if (numa_support) { uint8_t i; - unsigned int nb_mbuf; - - if (param_total_num_mbufs && nb_ports != 0) - nb_mbuf_per_pool = nb_mbuf_per_pool/nb_ports; - for (i = 0; i < max_socket; i++) { - nb_mbuf = (nb_mbuf_per_pool * RTE_MAX_ETHPORTS); - if (nb_mbuf) - mbuf_pool_create(mbuf_data_size, - nb_mbuf,i); - } + for (i = 0; i < max_socket; i++) + mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, i); + } else { + if (socket_num == UMA_NO_CONFIG) + mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, 0); + else + mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, + socket_num); } + init_port_config(); /*