From patchwork Thu Jan 3 11:30:02 2019
X-Patchwork-Submitter: Shreyansh Jain
X-Patchwork-Id: 49397
X-Patchwork-Delegate: thomas@monjalon.net
From: Shreyansh Jain
To: "dev@dpdk.org"
CC: Shreyansh Jain
Date: Thu, 3 Jan 2019 11:30:02 +0000
Message-ID: <20190103112932.4415-1-shreyansh.jain@nxp.com>
Subject: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port

Traditionally, the l3fwd application creates a single buffer pool that
all ports share (or one such pool per socket when NUMA is enabled). If
separate pools are created per port instead, performance may improve,
as packet alloc/dealloc requests are then isolated across ports (and
their corresponding lcores).

This patch adds a '--per-port-pool' argument to the l3fwd application.
By default, the old mode of a single shared pool (split on sockets)
remains active.

Signed-off-by: Shreyansh Jain
---
RFC: https://mails.dpdk.org/archives/dev/2018-November/120002.html

 examples/l3fwd/main.c | 74 +++++++++++++++++++++++++++++++------------
 1 file changed, 53 insertions(+), 21 deletions(-)
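As background, below is a minimal, self-contained sketch (not part of the
diff) of what '--per-port-pool' changes: one rte_pktmbuf_pool_create() call
per port instead of one shared pool per socket. For brevity the sketch
places each pool on the port's own NUMA socket and uses placeholder sizing
(NB_MBUF_PER_PORT, POOL_CACHE_SIZE); the patch itself keys pools by
(port, lcore socket) inside init_mem() and sizes them with its NB_MBUF()
macro.

/*
 * Illustrative sketch only (not part of the patch): create one mbuf pool
 * per detected port, on that port's NUMA socket. Pool name, size and
 * cache size are placeholders, not the values l3fwd computes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF_PER_PORT 8192	/* placeholder, not the patch's NB_MBUF() */
#define POOL_CACHE_SIZE  256

static struct rte_mempool *port_pool[RTE_MAX_ETHPORTS];

int
main(int argc, char **argv)
{
	uint16_t portid;
	int ret;

	ret = rte_eal_init(argc, argv);
	if (ret < 0)
		rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

	RTE_ETH_FOREACH_DEV(portid) {
		char name[RTE_MEMPOOL_NAMESIZE];
		int socketid = rte_eth_dev_socket_id(portid);

		if (socketid < 0)	/* socket unknown, e.g. virtual devices */
			socketid = (int)rte_socket_id();

		/* One pool per port, named after the port and its socket. */
		snprintf(name, sizeof(name), "mbuf_pool_p%u_s%d",
			 portid, socketid);
		port_pool[portid] = rte_pktmbuf_pool_create(name,
				NB_MBUF_PER_PORT, POOL_CACHE_SIZE, 0,
				RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
		if (port_pool[portid] == NULL)
			rte_exit(EXIT_FAILURE,
				 "Cannot create mbuf pool for port %u\n",
				 portid);

		printf("port %u: pool %s on socket %d\n",
		       portid, name, socketid);
	}

	return 0;
}

With the patch applied, the equivalent behaviour in l3fwd is selected at
run time by adding '--per-port-pool' after the application's '--'
separator.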
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index e4b99efe0..7b9683187 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -69,11 +69,13 @@ static int promiscuous_on;
 static int l3fwd_lpm_on;
 static int l3fwd_em_on;
 
+/* Global variables. */
+
 static int numa_on = 1; /**< NUMA is enabled by default. */
 static int parse_ptype; /**< Parse packet type using rx callback, and */
 			/**< disabled by default */
-
-/* Global variables. */
+static int per_port_pool; /**< Use separate buffer pools per port; disabled */
+			/**< by default */
 
 volatile bool force_quit;
 
@@ -133,7 +135,8 @@ static struct rte_eth_conf port_conf = {
 	},
 };
 
-static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
+static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
+static uint8_t lkp_per_socket[NB_SOCKETS];
 
 struct l3fwd_lkp_mode {
 	void  (*setup)(int);
@@ -285,7 +288,8 @@ print_usage(const char *prgname)
 		" [--no-numa]"
 		" [--hash-entry-num]"
 		" [--ipv6]"
-		" [--parse-ptype]\n\n"
+		" [--parse-ptype]"
+		" [--per-port-pool]\n\n"
 
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -299,7 +303,8 @@ print_usage(const char *prgname)
 		"  --no-numa: Disable numa awareness\n"
 		"  --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
 		"  --ipv6: Set if running ipv6 packets\n"
-		"  --parse-ptype: Set to use software to analyze packet type\n\n",
+		"  --parse-ptype: Set to use software to analyze packet type\n"
+		"  --per-port-pool: Use separate buffer pool per port\n\n",
 		prgname);
 }
 
@@ -452,6 +457,7 @@ static const char short_options[] =
 #define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
 #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
 #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
+#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
 
 enum {
 	/* long options mapped to a short option */
@@ -465,6 +471,7 @@ enum {
 	CMD_LINE_OPT_ENABLE_JUMBO_NUM,
 	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
 	CMD_LINE_OPT_PARSE_PTYPE_NUM,
+	CMD_LINE_OPT_PARSE_PER_PORT_POOL,
 };
 
 static const struct option lgopts[] = {
@@ -475,6 +482,7 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
 	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
 	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
+	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
 	{NULL, 0, 0, 0}
 };
 
@@ -485,10 +493,10 @@ static const struct option lgopts[] = {
  * RTE_MAX is used to ensure that NB_MBUF never goes below a minimum
  * value of 8192
  */
-#define NB_MBUF RTE_MAX(	\
-	(nb_ports*nb_rx_queue*nb_rxd +		\
-	nb_ports*nb_lcores*MAX_PKT_BURST +	\
-	nb_ports*n_tx_queue*nb_txd +		\
+#define NB_MBUF(nports) RTE_MAX(	\
+	(nports*nb_rx_queue*nb_rxd +		\
+	nports*nb_lcores*MAX_PKT_BURST +	\
+	nports*n_tx_queue*nb_txd +		\
 	nb_lcores*MEMPOOL_CACHE_SIZE),		\
 	(unsigned)8192)
 
@@ -594,6 +602,11 @@ parse_args(int argc, char **argv)
 			parse_ptype = 1;
 			break;
 
+		case CMD_LINE_OPT_PARSE_PER_PORT_POOL:
+			printf("per port buffer pool is enabled\n");
+			per_port_pool = 1;
+			break;
+
 		default:
 			print_usage(prgname);
 			return -1;
@@ -642,7 +655,7 @@ print_ethaddr(const char *name, const struct ether_addr *eth_addr)
 }
 
 static int
-init_mem(unsigned nb_mbuf)
+init_mem(uint16_t portid, unsigned int nb_mbuf)
 {
 	struct lcore_conf *qconf;
 	int socketid;
@@ -664,13 +677,14 @@ init_mem(unsigned nb_mbuf)
 				socketid, lcore_id, NB_SOCKETS);
 		}
 
-		if (pktmbuf_pool[socketid] == NULL) {
-			snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
-			pktmbuf_pool[socketid] =
+		if (pktmbuf_pool[portid][socketid] == NULL) {
+			snprintf(s, sizeof(s), "mbuf_pool_%d:%d",
+				 portid, socketid);
+			pktmbuf_pool[portid][socketid] =
 				rte_pktmbuf_pool_create(s, nb_mbuf,
 					MEMPOOL_CACHE_SIZE, 0,
 					RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
-			if (pktmbuf_pool[socketid] == NULL)
+			if (pktmbuf_pool[portid][socketid] == NULL)
 				rte_exit(EXIT_FAILURE,
 					"Cannot init mbuf pool on socket %d\n",
 					socketid);
@@ -678,8 +692,13 @@ init_mem(unsigned nb_mbuf)
 			printf("Allocated mbuf pool on socket %d\n",
 				socketid);
 
-			/* Setup either LPM or EM(f.e Hash). */
-			l3fwd_lkp.setup(socketid);
+			/* Setup either LPM or EM(f.e Hash). But, only once per
+			 * available socket.
+			 */
+			if (!lkp_per_socket[socketid]) {
+				l3fwd_lkp.setup(socketid);
+				lkp_per_socket[socketid] = 1;
+			}
 		}
 		qconf = &lcore_conf[lcore_id];
 		qconf->ipv4_lookup_struct =
@@ -899,7 +918,14 @@ main(int argc, char **argv)
 			(struct ether_addr *)(val_eth + portid) + 1);
 
 		/* init memory */
-		ret = init_mem(NB_MBUF);
+		if (!per_port_pool) {
+			/* portid = 0; this is *not* signifying the first port,
+			 * rather, it signifies that portid is ignored.
+			 */
+			ret = init_mem(0, NB_MBUF(nb_ports));
+		} else {
+			ret = init_mem(portid, NB_MBUF(1));
+		}
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE, "init_mem failed\n");
 
@@ -966,10 +992,16 @@ main(int argc, char **argv)
 		rte_eth_dev_info_get(portid, &dev_info);
 		rxq_conf = dev_info.default_rxconf;
 		rxq_conf.offloads = conf->rxmode.offloads;
-		ret = rte_eth_rx_queue_setup(portid, queueid, nb_rxd,
-				socketid,
-				&rxq_conf,
-				pktmbuf_pool[socketid]);
+		if (!per_port_pool)
+			ret = rte_eth_rx_queue_setup(portid, queueid,
+					nb_rxd, socketid,
+					&rxq_conf,
+					pktmbuf_pool[0][socketid]);
+		else
+			ret = rte_eth_rx_queue_setup(portid, queueid,
+					nb_rxd, socketid,
+					&rxq_conf,
+					pktmbuf_pool[portid][socketid]);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
 			"rte_eth_rx_queue_setup: err=%d, port=%d\n",