From patchwork Thu Apr 9 15:42:53 2020
X-Patchwork-Submitter: Wisam Jaddo
X-Patchwork-Id: 68057
X-Patchwork-Delegate: thomas@monjalon.net
From: Wisam Jaddo
To: dev@dpdk.org, jackmin@mellanox.com, jerinjacobk@gmail.com
Cc: thomas@monjalon.net
Date: Thu, 9 Apr 2020 15:42:53 +0000
Message-Id: <20200409154257.11539-1-wisamm@mellanox.com>
In-Reply-To: <1584452772-31147-1-git-send-email-wisamm@mellanox.com>
References: <1584452772-31147-1-git-send-email-wisamm@mellanox.com>
Subject: [dpdk-dev] [PATCH 1/5] app/test-flow-perf: add flow performance skeleton

Add flow performance application skeleton.
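The skeleton itself only initializes EAL, the ports and the hairpin queues; the rules whose insertion rate will be measured come in the following patches of this series. For readers unfamiliar with rte_flow, a rule is an attribute/pattern/actions triple handed to rte_flow_create(). The sketch below is illustrative only and not part of this patch; the helper name, the matched field and the queue index are arbitrary choices. It shows the kind of single rule the tool will later generate and time:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Illustrative only, not part of this patch: insert one rule on port_id
 * matching "eth / ipv4 src is <src_ip> / end" and steering hits to Rx
 * queue 0. Returns the rule handle, or NULL on failure.
 */
static struct rte_flow *
insert_one_rule(uint16_t port_id, rte_be32_t src_ip)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec = { .hdr.src_addr = src_ip };
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.src_addr = RTE_BE32(0xffffffff),
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}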
Signed-off-by: Wisam Jaddo Reviewed-by: Xiaoyu Min --- MAINTAINERS | 5 + app/Makefile | 1 + app/meson.build | 1 + app/test-flow-perf/Makefile | 26 +++ app/test-flow-perf/main.c | 246 +++++++++++++++++++++++++++ app/test-flow-perf/meson.build | 11 ++ app/test-flow-perf/user_parameters.h | 16 ++ config/common_base | 5 + doc/guides/tools/flow-perf.rst | 69 ++++++++ doc/guides/tools/index.rst | 1 + 10 files changed, 381 insertions(+) create mode 100644 app/test-flow-perf/Makefile create mode 100644 app/test-flow-perf/main.c create mode 100644 app/test-flow-perf/meson.build create mode 100644 app/test-flow-perf/user_parameters.h create mode 100644 doc/guides/tools/flow-perf.rst diff --git a/MAINTAINERS b/MAINTAINERS index 4800f6884a..a389ac127f 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1495,6 +1495,11 @@ T: git://dpdk.org/next/dpdk-next-net F: app/test-pmd/ F: doc/guides/testpmd_app_ug/ +Flow performance tool +M: Wisam Jaddo +F: app/test-flow-perf +F: doc/guides/flow-perf.rst + Compression performance test application T: git://dpdk.org/next/dpdk-next-crypto F: app/test-compress-perf/ diff --git a/app/Makefile b/app/Makefile index db9d2d5380..694df67358 100644 --- a/app/Makefile +++ b/app/Makefile @@ -9,6 +9,7 @@ DIRS-$(CONFIG_RTE_PROC_INFO) += proc-info DIRS-$(CONFIG_RTE_LIBRTE_PDUMP) += pdump DIRS-$(CONFIG_RTE_LIBRTE_ACL) += test-acl DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += test-cmdline +DIRS-$(CONFIG_RTE_TEST_FLOW_PERF) += test-flow-perf DIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += test-pipeline DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += test-sad diff --git a/app/meson.build b/app/meson.build index 71109cc422..20d77b0bd6 100644 --- a/app/meson.build +++ b/app/meson.build @@ -14,6 +14,7 @@ apps = [ 'test-compress-perf', 'test-crypto-perf', 'test-eventdev', + 'test-flow-perf', 'test-pipeline', 'test-pmd', 'test-sad'] diff --git a/app/test-flow-perf/Makefile b/app/test-flow-perf/Makefile new file mode 100644 index 0000000000..45b1fb1464 --- /dev/null +++ b/app/test-flow-perf/Makefile @@ -0,0 +1,26 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2020 Mellanox Technologies, Ltd + +include $(RTE_SDK)/mk/rte.vars.mk + +ifeq ($(CONFIG_RTE_TEST_FLOW_PERF),y) + +# +# library name +# +APP = flow_perf + +CFLAGS += -DALLOW_EXPERIMENTAL_API +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) +CFLAGS += -Wno-deprecated-declarations +CFLAGS += -Wno-unused-function + +# +# all source are stored in SRCS-y +# +SRCS-y += main.c + +include $(RTE_SDK)/mk/rte.app.mk + +endif diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c new file mode 100644 index 0000000000..156b9ef553 --- /dev/null +++ b/app/test-flow-perf/main.c @@ -0,0 +1,246 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * This file contain the application main file + * This application provides the user the ability to test the + * insertion rate for specific rte_flow rule under stress state ~4M rule/ + * + * Then it will also provide packet per second measurement after installing + * all rules, the user may send traffic to test the PPS that match the rules + * after all rules are installed, to check performance or functionality after + * the stress. + * + * The flows insertion will go for all ports first, then it will print the + * results, after that the application will go into forwarding packets mode + * it will start receiving traffic if any and then forwarding it back and + * gives packet per second measurement. 
+ * + * Copyright 2020 Mellanox Technologies, Ltd + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "user_parameters.h" + +static uint32_t nb_lcores; +static struct rte_mempool *mbuf_mp; + +static void usage(char *progname) +{ + printf("\nusage: %s", progname); +} + +static void +args_parse(int argc, char **argv) +{ + char **argvopt; + int opt; + int opt_idx; + static struct option lgopts[] = { + /* Control */ + { "help", 0, 0, 0 }, + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &opt_idx)) != EOF) { + switch (opt) { + case 0: + if (!strcmp(lgopts[opt_idx].name, "help")) { + usage(argv[0]); + rte_exit(EXIT_SUCCESS, "Displayed help\n"); + } + break; + default: + usage(argv[0]); + printf("Invalid option: %s\n", argv[optind]); + rte_exit(EXIT_SUCCESS, "Invalid option\n"); + break; + } + } +} + +static void +init_port(void) +{ + int ret; + uint16_t i, j; + uint16_t port_id; + uint16_t nr_ports = rte_eth_dev_count_avail(); + struct rte_eth_hairpin_conf hairpin_conf = { + .peer_count = 1, + }; + struct rte_eth_conf port_conf = { + .rxmode = { + .split_hdr_size = 0, + }, + .rx_adv_conf = { + .rss_conf.rss_hf = + ETH_RSS_IP | + ETH_RSS_UDP | + ETH_RSS_TCP, + } + }; + struct rte_eth_txconf txq_conf; + struct rte_eth_rxconf rxq_conf; + struct rte_eth_dev_info dev_info; + + if (nr_ports == 0) + rte_exit(EXIT_FAILURE, "Error: no port detected\n"); + mbuf_mp = rte_pktmbuf_pool_create("mbuf_pool", + TOTAL_MBUF_NUM, MBUF_CACHE_SIZE, + 0, MBUF_SIZE, + rte_socket_id()); + + if (mbuf_mp == NULL) + rte_exit(EXIT_FAILURE, "Error: can't init mbuf pool\n"); + + for (port_id = 0; port_id < nr_ports; port_id++) { + ret = rte_eth_dev_info_get(port_id, &dev_info); + if (ret != 0) + rte_exit(EXIT_FAILURE, + "Error during getting device (port %u) info: %s\n", + port_id, strerror(-ret)); + + port_conf.txmode.offloads &= dev_info.tx_offload_capa; + printf(":: initializing port: %d\n", port_id); + ret = rte_eth_dev_configure(port_id, RXQs + HAIRPIN_QUEUES, + TXQs + HAIRPIN_QUEUES, &port_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + ":: cannot configure device: err=%d, port=%u\n", + ret, port_id); + + rxq_conf = dev_info.default_rxconf; + rxq_conf.offloads = port_conf.rxmode.offloads; + for (i = 0; i < RXQs; i++) { + ret = rte_eth_rx_queue_setup(port_id, i, NR_RXD, + rte_eth_dev_socket_id(port_id), + &rxq_conf, + mbuf_mp); + if (ret < 0) + rte_exit(EXIT_FAILURE, + ":: Rx queue setup failed: err=%d, port=%u\n", + ret, port_id); + } + + txq_conf = dev_info.default_txconf; + txq_conf.offloads = port_conf.txmode.offloads; + + for (i = 0; i < TXQs; i++) { + ret = rte_eth_tx_queue_setup(port_id, i, NR_TXD, + rte_eth_dev_socket_id(port_id), + &txq_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + ":: Tx queue setup failed: err=%d, port=%u\n", + ret, port_id); + } + + ret = rte_eth_promiscuous_enable(port_id); + if (ret != 0) + rte_exit(EXIT_FAILURE, + ":: promiscuous mode enable failed: err=%s, port=%u\n", + rte_strerror(-ret), port_id); + + for (i = RXQs, j = 0; i < RXQs + HAIRPIN_QUEUES; i++, j++) { + hairpin_conf.peers[0].port = port_id; + hairpin_conf.peers[0].queue = j + TXQs; + ret = rte_eth_rx_hairpin_queue_setup(port_id, i, + NR_RXD, &hairpin_conf); + if (ret != 0) + rte_exit(EXIT_FAILURE, + ":: Hairpin rx queue 
setup failed: err=%d, port=%u\n", + ret, port_id); + } + + for (i = TXQs, j = 0; i < TXQs + HAIRPIN_QUEUES; i++, j++) { + hairpin_conf.peers[0].port = port_id; + hairpin_conf.peers[0].queue = j + RXQs; + ret = rte_eth_tx_hairpin_queue_setup(port_id, i, + NR_TXD, &hairpin_conf); + if (ret != 0) + rte_exit(EXIT_FAILURE, + ":: Hairpin tx queue setup failed: err=%d, port=%u\n", + ret, port_id); + } + + ret = rte_eth_dev_start(port_id); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_dev_start:err=%d, port=%u\n", + ret, port_id); + + printf(":: initializing port: %d done\n", port_id); + } +} + +int +main(int argc, char **argv) +{ + uint16_t lcore_id; + uint16_t port; + uint16_t nr_ports; + int ret; + struct rte_flow_error error; + + nr_ports = rte_eth_dev_count_avail(); + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "EAL init failed\n"); + + argc -= ret; + argv += ret; + + if (argc > 1) + args_parse(argc, argv); + + init_port(); + + nb_lcores = rte_lcore_count(); + + if (nb_lcores <= 1) + rte_exit(EXIT_FAILURE, "This app needs at least two cores\n"); + + RTE_LCORE_FOREACH_SLAVE(lcore_id) + + if (rte_eal_wait_lcore(lcore_id) < 0) + break; + + for (port = 0; port < nr_ports; port++) { + rte_flow_flush(port, &error); + rte_eth_dev_stop(port); + rte_eth_dev_close(port); + } + return 0; +} diff --git a/app/test-flow-perf/meson.build b/app/test-flow-perf/meson.build new file mode 100644 index 0000000000..ec9bb3b3aa --- /dev/null +++ b/app/test-flow-perf/meson.build @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2020 Mellanox Technologies, Ltd + +# meson file, for building this example as part of a main DPDK build. +# +# To build this example as a standalone application with an already-installed +# DPDK instance, use 'make' + +sources = files( + 'main.c', +) diff --git a/app/test-flow-perf/user_parameters.h b/app/test-flow-perf/user_parameters.h new file mode 100644 index 0000000000..56ec7f47b5 --- /dev/null +++ b/app/test-flow-perf/user_parameters.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Claus + * + * This file will hold the user parameters values + * + * Copyright 2020 Mellanox Technologies, Ltd + */ + +/** Configuration **/ +#define RXQs 4 +#define TXQs 4 +#define HAIRPIN_QUEUES 4 +#define TOTAL_MBUF_NUM 32000 +#define MBUF_SIZE 2048 +#define MBUF_CACHE_SIZE 512 +#define NR_RXD 256 +#define NR_TXD 256 diff --git a/config/common_base b/config/common_base index c31175f9d6..79455bf94a 100644 --- a/config/common_base +++ b/config/common_base @@ -1111,3 +1111,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y # Compile the eventdev application # CONFIG_RTE_APP_EVENTDEV=y + +# +# Compile the rte flow perf application +# +CONFIG_RTE_TEST_FLOW_PERF=y diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst new file mode 100644 index 0000000000..30ce1b6cc0 --- /dev/null +++ b/doc/guides/tools/flow-perf.rst @@ -0,0 +1,69 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2020 Mellanox Technologies, Ltd + +RTE Flow performance tool +========================= + +Application for rte_flow performance testing. + + +Compiling the Application +========================= +The ``test-flow-perf`` application is compiled as part of the main compilation +of the DPDK libraries and tools. + +Refer to the DPDK Getting Started Guides for details. +The basic compilation steps are: + +#. Set the required environmental variables and go to the source directory: + + .. code-block:: console + + export RTE_SDK=/path/to/rte_sdk + cd $RTE_SDK + +#. 
Set the compilation target. For example: + + .. code-block:: console + + export RTE_TARGET=x86_64-native-linux-gcc + +#. Build the application: + + .. code-block:: console + + make install T=$RTE_TARGET + +#. The compiled application will be located at: + + .. code-block:: console + + $RTE_SDK/$RTE_TARGET/app/flow-perf + + +Running the Application +======================= + +EAL Command-line Options +------------------------ + +Please refer to :doc:`EAL parameters (Linux) <../linux_gsg/linux_eal_parameters>` +or :doc:`EAL parameters (FreeBSD) <../freebsd_gsg/freebsd_eal_parameters>` for +a list of available EAL command-line options. + + +Flow performance Options +------------------------ + +The following are the command-line options for the flow performance application. +They must be separated from the EAL options, shown in the previous section, with +a ``--`` separator: + +.. code-block:: console + + sudo ./test-flow-perf -n 4 -w 08:00.0,dv_flow_en=1 -- + +The command line options are: + +* ``--help`` + Display a help message and quit. diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst index 782b30864e..7279daebc6 100644 --- a/doc/guides/tools/index.rst +++ b/doc/guides/tools/index.rst @@ -16,3 +16,4 @@ DPDK Tools User Guides cryptoperf comp_perf testeventdev + flow-perf
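The next patch in the series adds the actual insertion-rate measurement. The idea it implements is to time batches of rule insertions with clock() and report rules per CPU second, both per batch and overall. A simplified sketch of such a loop follows; it is illustrative only, and generate_rule() is a hypothetical stand-in for the flow generation that the next patch adds:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical helper standing in for the flow generation added later in
 * this series: inserts one rule on the port, varying the match (e.g. the
 * outer IPv4 source) with rule_index, and returns 0 on success.
 */
int generate_rule(uint16_t port_id, uint32_t rule_index);

static void
measure_insertion_rate(uint16_t port_id, uint32_t flows_count,
		       uint32_t batch_size)
{
	clock_t start = clock();
	double total_sec = 0;
	uint32_t i;

	for (i = 0; i < flows_count; i++) {
		if (generate_rule(port_id, i) != 0)
			break;
		if ((i + 1) % batch_size == 0) {
			/* Rate of this batch: rules divided by CPU seconds. */
			double sec = (double)(clock() - start) / CLOCKS_PER_SEC;

			printf("batch: %.2f K rules/sec\n",
			       batch_size / sec / 1000.0);
			total_sec += sec;
			start = clock();
		}
	}
	if (total_sec > 0)
		printf("overall: %.2f K rules/sec\n",
		       i / total_sec / 1000.0);
}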
From patchwork Thu Apr 9 15:42:54 2020
X-Patchwork-Submitter: Wisam Jaddo
X-Patchwork-Id: 68058
X-Patchwork-Delegate: thomas@monjalon.net
From: Wisam Jaddo
To: dev@dpdk.org, jackmin@mellanox.com, jerinjacobk@gmail.com
Cc: thomas@monjalon.net
Date: Thu, 9 Apr 2020 15:42:54 +0000
Message-Id: <20200409154257.11539-2-wisamm@mellanox.com>
In-Reply-To: <20200409154257.11539-1-wisamm@mellanox.com>
References: <1584452772-31147-1-git-send-email-wisamm@mellanox.com> <20200409154257.11539-1-wisamm@mellanox.com>
Subject: [dpdk-dev] [PATCH 2/5] app/test-flow-perf: add insertion rate calculation

Add an insertion rate calculation feature to the flow performance application.

The application can now measure the insertion rate of a specific rte_flow rule by stressing the NIC with it and computing how many rules are inserted per second. Command-line options select which rule to apply. The application then produces rules with the same pattern, incrementing the outer IP source address by 1 for each rule, so every flow is different while all other items keep open masks.

The current design measures single-core insertion rate only; multi-core insertion rate measurement may be added to the app in the future.

Signed-off-by: Wisam Jaddo Reviewed-by: Xiaoyu Min --- app/test-flow-perf/Makefile | 3 + app/test-flow-perf/actions_gen.c | 86 ++++++ app/test-flow-perf/actions_gen.h | 48 ++++ app/test-flow-perf/flow_gen.c | 179 ++++++++++++ app/test-flow-perf/flow_gen.h | 61 ++++ app/test-flow-perf/items_gen.c | 265 +++++++++++++++++ app/test-flow-perf/items_gen.h | 68 +++++ app/test-flow-perf/main.c | 415 +++++++++++++++++++++++++-- app/test-flow-perf/meson.build | 8 + app/test-flow-perf/user_parameters.h | 15 + doc/guides/tools/flow-perf.rst | 186 +++++++++++- 11 files changed, 1309 insertions(+), 25 deletions(-) create mode 100644 app/test-flow-perf/actions_gen.c create mode 100644 app/test-flow-perf/actions_gen.h create mode 100644 app/test-flow-perf/flow_gen.c create mode 100644 app/test-flow-perf/flow_gen.h create mode 100644 app/test-flow-perf/items_gen.c create mode 100644 app/test-flow-perf/items_gen.h diff --git a/app/test-flow-perf/Makefile b/app/test-flow-perf/Makefile index 45b1fb1464..968c7c60dd 100644 --- a/app/test-flow-perf/Makefile +++ b/app/test-flow-perf/Makefile @@ -19,6 +19,9 @@ CFLAGS += -Wno-unused-function # # all source are stored in SRCS-y # +SRCS-y += actions_gen.c +SRCS-y += flow_gen.c +SRCS-y += items_gen.c SRCS-y += main.c include $(RTE_SDK)/mk/rte.app.mk diff --git a/app/test-flow-perf/actions_gen.c b/app/test-flow-perf/actions_gen.c new file mode 100644 index 0000000000..564ed820e4 --- /dev/null +++ b/app/test-flow-perf/actions_gen.c @@ -0,0 +1,86 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * The file contains the implementations of actions generators. + * Each generator is responsible for preparing its action instance + * and initializing it with needed data.
+ * + * Copyright 2020 Mellanox Technologies, Ltd + **/ + +#include +#include +#include +#include + +#include "actions_gen.h" +#include "user_parameters.h" + +void +gen_mark(void) +{ + mark_action.id = MARK_ID; +} + +void +gen_queue(uint16_t queue) +{ + queue_action.index = queue; +} + +void +gen_jump(uint16_t next_table) +{ + jump_action.group = next_table; +} + +void +gen_rss(uint16_t *queues, uint16_t queues_number) +{ + uint16_t queue; + struct action_rss_data *rss_data; + rss_data = rte_malloc("rss_data", + sizeof(struct action_rss_data), 0); + + if (rss_data == NULL) + rte_exit(EXIT_FAILURE, "No Memory available!"); + + *rss_data = (struct action_rss_data){ + .conf = (struct rte_flow_action_rss){ + .func = RTE_ETH_HASH_FUNCTION_DEFAULT, + .level = 0, + .types = ETH_RSS_IP, + .key_len = 0, + .queue_num = queues_number, + .key = 0, + .queue = rss_data->queue, + }, + .key = { 0 }, + .queue = { 0 }, + }; + + for (queue = 0; queue < queues_number; queue++) + rss_data->queue[queue] = queues[queue]; + + rss_action = &rss_data->conf; +} + +void +gen_set_meta(void) +{ + meta_action.data = RTE_BE32(META_DATA); + meta_action.mask = RTE_BE32(0xffffffff); +} + +void +gen_set_tag(void) +{ + tag_action.data = RTE_BE32(META_DATA); + tag_action.mask = RTE_BE32(0xffffffff); + tag_action.index = TAG_INDEX; +} + +void +gen_port_id(void) +{ + port_id.id = PORT_ID_DST; +} diff --git a/app/test-flow-perf/actions_gen.h b/app/test-flow-perf/actions_gen.h new file mode 100644 index 0000000000..556d48b871 --- /dev/null +++ b/app/test-flow-perf/actions_gen.h @@ -0,0 +1,48 @@ +/** SPDX-License-Identifier: BSD-3-Clause + * + * This file contains the functions definitions to + * generate each supported action. + * + * Copyright 2020 Mellanox Technologies, Ltd + **/ + +#ifndef _ACTION_GEN_ +#define _ACTION_GEN_ + +struct rte_flow_action_mark mark_action; +struct rte_flow_action_queue queue_action; +struct rte_flow_action_jump jump_action; +struct rte_flow_action_rss *rss_action; +struct rte_flow_action_set_meta meta_action; +struct rte_flow_action_set_tag tag_action; +struct rte_flow_action_port_id port_id; + +/* Storage for struct rte_flow_action_rss including external data. */ +struct action_rss_data { + struct rte_flow_action_rss conf; + uint8_t key[64]; + uint16_t queue[128]; +} action_rss_data; + +void +gen_mark(void); + +void +gen_queue(uint16_t queue); + +void +gen_jump(uint16_t next_table); + +void +gen_rss(uint16_t *queues, uint16_t queues_number); + +void +gen_set_meta(void); + +void +gen_set_tag(void); + +void +gen_port_id(void); + +#endif diff --git a/app/test-flow-perf/flow_gen.c b/app/test-flow-perf/flow_gen.c new file mode 100644 index 0000000000..20187e4ed4 --- /dev/null +++ b/app/test-flow-perf/flow_gen.c @@ -0,0 +1,179 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * The file contains the implementations of the method to + * fill items, actions & attributes in their corresponding + * arrays, and then generate rte_flow rule. + * + * After the generation. The rule goes to validation then + * creation state and then return the results. 
+ * + * Copyright 2020 Mellanox Technologies, Ltd + */ + +#include + +#include "flow_gen.h" +#include "items_gen.h" +#include "actions_gen.h" +#include "user_parameters.h" + + +static void +fill_attributes(struct rte_flow_attr *attr, + uint8_t flow_attrs, uint16_t group) +{ + if (flow_attrs & INGRESS) + attr->ingress = 1; + if (flow_attrs & EGRESS) + attr->egress = 1; + if (flow_attrs & TRANSFER) + attr->transfer = 1; + attr->group = group; +} + +static void +fill_items(struct rte_flow_item items[MAX_ITEMS_NUM], + uint16_t flow_items, uint32_t outer_ip_src) +{ + uint8_t items_counter = 0; + + if (flow_items & META_ITEM) + add_meta_data(items, items_counter++); + if (flow_items & TAG_ITEM) + add_meta_tag(items, items_counter++); + if (flow_items & ETH_ITEM) + add_ether(items, items_counter++); + if (flow_items & VLAN_ITEM) + add_vlan(items, items_counter++); + if (flow_items & IPV4_ITEM) + add_ipv4(items, items_counter++, outer_ip_src); + if (flow_items & IPV6_ITEM) + add_ipv6(items, items_counter++, outer_ip_src); + if (flow_items & TCP_ITEM) + add_tcp(items, items_counter++); + if (flow_items & UDP_ITEM) + add_udp(items, items_counter++); + if (flow_items & VXLAN_ITEM) + add_vxlan(items, items_counter++); + if (flow_items & VXLAN_GPE_ITEM) + add_vxlan_gpe(items, items_counter++); + if (flow_items & GRE_ITEM) + add_gre(items, items_counter++); + if (flow_items & GENEVE_ITEM) + add_geneve(items, items_counter++); + if (flow_items & GTP_ITEM) + add_gtp(items, items_counter++); + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_END; +} + +static void +fill_actions(struct rte_flow_action actions[MAX_ACTIONS_NUM], + uint16_t flow_actions, uint32_t counter, uint16_t next_table) +{ + uint8_t actions_counter = 0; + uint16_t queues[RXQs]; + uint16_t hairpin_queues[HAIRPIN_QUEUES]; + uint16_t i; + struct rte_flow_action_count count_action; + + /* None-fate actions */ + if (flow_actions & MARK_ACTION) { + if (!counter) + gen_mark(); + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_MARK; + actions[actions_counter++].conf = &mark_action; + } + if (flow_actions & COUNT_ACTION) { + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_COUNT; + actions[actions_counter++].conf = &count_action; + } + if (flow_actions & META_ACTION) { + if (!counter) + gen_set_meta(); + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_SET_META; + actions[actions_counter++].conf = &meta_action; + } + if (flow_actions & TAG_ACTION) { + if (!counter) + gen_set_tag(); + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_SET_TAG; + actions[actions_counter++].conf = &tag_action; + } + + /* Fate actions */ + if (flow_actions & QUEUE_ACTION) { + gen_queue(counter % RXQs); + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_QUEUE; + actions[actions_counter++].conf = &queue_action; + } + if (flow_actions & RSS_ACTION) { + if (!counter) { + for (i = 0; i < RXQs; i++) + queues[i] = i; + gen_rss(queues, RXQs); + } + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_RSS; + actions[actions_counter++].conf = rss_action; + } + if (flow_actions & JUMP_ACTION) { + if (!counter) + gen_jump(next_table); + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_JUMP; + actions[actions_counter++].conf = &jump_action; + } + if (flow_actions & PORT_ID_ACTION) { + if (!counter) + gen_port_id(); + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_PORT_ID; + actions[actions_counter++].conf = &port_id; + } + if (flow_actions & DROP_ACTION) + actions[actions_counter++].type = RTE_FLOW_ACTION_TYPE_DROP; + if (flow_actions & 
HAIRPIN_QUEUE_ACTION) { + gen_queue((counter % HAIRPIN_QUEUES) + RXQs); + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_QUEUE; + actions[actions_counter++].conf = &queue_action; + } + if (flow_actions & HAIRPIN_RSS_ACTION) { + if (!counter) { + for (i = 0; i < RXQs; i++) + hairpin_queues[i] = i + RXQs; + gen_rss(hairpin_queues, HAIRPIN_QUEUES); + } + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_RSS; + actions[actions_counter++].conf = rss_action; + } + + actions[actions_counter].type = RTE_FLOW_ACTION_TYPE_END; +} + +struct rte_flow * +generate_flow(uint16_t port_id, + uint16_t group, + uint8_t flow_attrs, + uint16_t flow_items, + uint16_t flow_actions, + uint16_t next_table, + uint32_t outer_ip_src, + struct rte_flow_error *error) +{ + struct rte_flow_attr attr; + struct rte_flow_item items[MAX_ITEMS_NUM]; + struct rte_flow_action actions[MAX_ACTIONS_NUM]; + struct rte_flow *flow = NULL; + + memset(items, 0, sizeof(items)); + memset(actions, 0, sizeof(actions)); + memset(&attr, 0, sizeof(struct rte_flow_attr)); + + fill_attributes(&attr, flow_attrs, group); + + fill_actions(actions, flow_actions, + outer_ip_src, next_table); + + fill_items(items, flow_items, outer_ip_src); + + flow = rte_flow_create(port_id, &attr, items, actions, error); + return flow; +} diff --git a/app/test-flow-perf/flow_gen.h b/app/test-flow-perf/flow_gen.h new file mode 100644 index 0000000000..99cb9e3791 --- /dev/null +++ b/app/test-flow-perf/flow_gen.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * This file contains the items, actions and attributes + * definition. And the methods to prepare and fill items, + * actions and attributes to generate rte_flow rule. + * + * Copyright 2020 Mellanox Technologies, Ltd + */ + +#ifndef _FLOW_GEN_ +#define _FLOW_GEN_ + +#include +#include + +#include "user_parameters.h" + +/* Items */ +#define ETH_ITEM 0x0001 +#define IPV4_ITEM 0x0002 +#define IPV6_ITEM 0x0004 +#define VLAN_ITEM 0x0008 +#define TCP_ITEM 0x0010 +#define UDP_ITEM 0x0020 +#define VXLAN_ITEM 0x0040 +#define VXLAN_GPE_ITEM 0x0080 +#define GRE_ITEM 0x0100 +#define GENEVE_ITEM 0x0200 +#define GTP_ITEM 0x0400 +#define META_ITEM 0x0800 +#define TAG_ITEM 0x1000 + +/* Actions */ +#define QUEUE_ACTION 0x0001 +#define MARK_ACTION 0x0002 +#define JUMP_ACTION 0x0004 +#define RSS_ACTION 0x0008 +#define COUNT_ACTION 0x0010 +#define META_ACTION 0x0020 +#define TAG_ACTION 0x0040 +#define DROP_ACTION 0x0080 +#define PORT_ID_ACTION 0x0100 +#define HAIRPIN_QUEUE_ACTION 0x0200 +#define HAIRPIN_RSS_ACTION 0x0400 + +/* Attributes */ +#define INGRESS 0x0001 +#define EGRESS 0x0002 +#define TRANSFER 0x0004 + +struct rte_flow * +generate_flow(uint16_t port_id, + uint16_t group, + uint8_t flow_attrs, + uint16_t flow_items, + uint16_t flow_actions, + uint16_t next_table, + uint32_t outer_ip_src, + struct rte_flow_error *error); + +#endif diff --git a/app/test-flow-perf/items_gen.c b/app/test-flow-perf/items_gen.c new file mode 100644 index 0000000000..fb9733d4e7 --- /dev/null +++ b/app/test-flow-perf/items_gen.c @@ -0,0 +1,265 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * This file contain the implementations of the items + * related methods. Each Item have a method to prepare + * the item and add it into items array in given index. 
+ * + * Copyright 2020 Mellanox Technologies, Ltd + */ + +#include +#include + +#include "items_gen.h" +#include "user_parameters.h" + +static struct rte_flow_item_eth eth_spec; +static struct rte_flow_item_eth eth_mask; +static struct rte_flow_item_vlan vlan_spec; +static struct rte_flow_item_vlan vlan_mask; +static struct rte_flow_item_ipv4 ipv4_spec; +static struct rte_flow_item_ipv4 ipv4_mask; +static struct rte_flow_item_ipv6 ipv6_spec; +static struct rte_flow_item_ipv6 ipv6_mask; +static struct rte_flow_item_udp udp_spec; +static struct rte_flow_item_udp udp_mask; +static struct rte_flow_item_tcp tcp_spec; +static struct rte_flow_item_tcp tcp_mask; +static struct rte_flow_item_vxlan vxlan_spec; +static struct rte_flow_item_vxlan vxlan_mask; +static struct rte_flow_item_vxlan_gpe vxlan_gpe_spec; +static struct rte_flow_item_vxlan_gpe vxlan_gpe_mask; +static struct rte_flow_item_gre gre_spec; +static struct rte_flow_item_gre gre_mask; +static struct rte_flow_item_geneve geneve_spec; +static struct rte_flow_item_geneve geneve_mask; +static struct rte_flow_item_gtp gtp_spec; +static struct rte_flow_item_gtp gtp_mask; +static struct rte_flow_item_meta meta_spec; +static struct rte_flow_item_meta meta_mask; +static struct rte_flow_item_tag tag_spec; +static struct rte_flow_item_tag tag_mask; + + +void +add_ether(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + memset(ð_spec, 0, sizeof(struct rte_flow_item_eth)); + memset(ð_mask, 0, sizeof(struct rte_flow_item_eth)); + eth_spec.type = 0; + eth_mask.type = 0; + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_ETH; + items[items_counter].spec = ð_spec; + items[items_counter].mask = ð_mask; +} + +void +add_vlan(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + uint16_t vlan_value = VLAN_VALUE; + memset(&vlan_spec, 0, sizeof(struct rte_flow_item_vlan)); + memset(&vlan_mask, 0, sizeof(struct rte_flow_item_vlan)); + + vlan_spec.tci = RTE_BE16(vlan_value); + vlan_mask.tci = RTE_BE16(0xffff); + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_VLAN; + items[items_counter].spec = &vlan_spec; + items[items_counter].mask = &vlan_mask; +} + +void +add_ipv4(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter, uint32_t src_ipv4) +{ + memset(&ipv4_spec, 0, sizeof(struct rte_flow_item_ipv4)); + memset(&ipv4_mask, 0, sizeof(struct rte_flow_item_ipv4)); + + ipv4_spec.hdr.src_addr = src_ipv4; + ipv4_mask.hdr.src_addr = 0xffffffff; + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV4; + items[items_counter].spec = &ipv4_spec; + items[items_counter].mask = &ipv4_mask; +} + + +void +add_ipv6(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter, int src_ipv6) +{ + memset(&ipv6_spec, 0, sizeof(struct rte_flow_item_ipv6)); + memset(&ipv6_mask, 0, sizeof(struct rte_flow_item_ipv6)); + + /** Set ipv6 src **/ + memset(&ipv6_spec.hdr.src_addr, src_ipv6, + sizeof(ipv6_spec.hdr.src_addr) / 2); + + /** Full mask **/ + memset(&ipv6_mask.hdr.src_addr, 1, + sizeof(ipv6_spec.hdr.src_addr)); + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_IPV6; + items[items_counter].spec = &ipv6_spec; + items[items_counter].mask = &ipv6_mask; +} + +void +add_tcp(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + memset(&tcp_spec, 0, sizeof(struct rte_flow_item_tcp)); + memset(&tcp_mask, 0, sizeof(struct rte_flow_item_tcp)); + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_TCP; + items[items_counter].spec = &tcp_spec; + items[items_counter].mask = &tcp_mask; +} + +void +add_udp(struct 
rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + memset(&udp_spec, 0, sizeof(struct rte_flow_item_udp)); + memset(&udp_mask, 0, sizeof(struct rte_flow_item_udp)); + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_UDP; + items[items_counter].spec = &udp_spec; + items[items_counter].mask = &udp_mask; +} + +void +add_vxlan(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + uint32_t vni_value = VNI_VALUE; + uint8_t i; + memset(&vxlan_spec, 0, sizeof(struct rte_flow_item_vxlan)); + memset(&vxlan_mask, 0, sizeof(struct rte_flow_item_vxlan)); + + /* Set standard vxlan vni */ + for (i = 0; i < 3; i++) { + vxlan_spec.vni[2 - i] = vni_value >> (i * 8); + vxlan_mask.vni[2 - i] = 0xff; + } + + /* Standard vxlan flags **/ + vxlan_spec.flags = 0x8; + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN; + items[items_counter].spec = &vxlan_spec; + items[items_counter].mask = &vxlan_mask; +} + +void +add_vxlan_gpe(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + uint32_t vni_value = VNI_VALUE; + uint8_t i; + memset(&vxlan_gpe_spec, 0, sizeof(struct rte_flow_item_vxlan_gpe)); + memset(&vxlan_gpe_mask, 0, sizeof(struct rte_flow_item_vxlan_gpe)); + + /* Set vxlan-gpe vni */ + for (i = 0; i < 3; i++) { + vxlan_gpe_spec.vni[2 - i] = vni_value >> (i * 8); + vxlan_gpe_mask.vni[2 - i] = 0xff; + } + + /* vxlan-gpe flags */ + vxlan_gpe_spec.flags = 0x0c; + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE; + items[items_counter].spec = &vxlan_gpe_spec; + items[items_counter].mask = &vxlan_gpe_mask; +} + +void +add_gre(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + uint16_t proto = GRE_PROTO; + memset(&gre_spec, 0, sizeof(struct rte_flow_item_gre)); + memset(&gre_mask, 0, sizeof(struct rte_flow_item_gre)); + + gre_spec.protocol = RTE_BE16(proto); + gre_mask.protocol = 0xffff; + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_GRE; + items[items_counter].spec = &gre_spec; + items[items_counter].mask = &gre_mask; +} + +void +add_geneve(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + uint32_t vni_value = VNI_VALUE; + uint8_t i; + memset(&geneve_spec, 0, sizeof(struct rte_flow_item_geneve)); + memset(&geneve_mask, 0, sizeof(struct rte_flow_item_geneve)); + + for (i = 0; i < 3; i++) { + geneve_spec.vni[2 - i] = vni_value >> (i * 8); + geneve_mask.vni[2 - i] = 0xff; + } + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_GENEVE; + items[items_counter].spec = &geneve_spec; + items[items_counter].mask = &geneve_mask; +} + +void +add_gtp(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + uint32_t teid_value = TEID_VALUE; + memset(>p_spec, 0, sizeof(struct rte_flow_item_gtp)); + memset(>p_mask, 0, sizeof(struct rte_flow_item_gtp)); + + gtp_spec.teid = RTE_BE32(teid_value); + gtp_mask.teid = RTE_BE32(0xffffffff); + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_GTP; + items[items_counter].spec = >p_spec; + items[items_counter].mask = >p_mask; +} + +void +add_meta_data(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter) +{ + uint32_t data = META_DATA; + memset(&meta_spec, 0, sizeof(struct rte_flow_item_meta)); + memset(&meta_mask, 0, sizeof(struct rte_flow_item_meta)); + + meta_spec.data = RTE_BE32(data); + meta_mask.data = RTE_BE32(0xffffffff); + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_META; + items[items_counter].spec = &meta_spec; + items[items_counter].mask = &meta_mask; +} + + +void +add_meta_tag(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t 
items_counter) +{ + uint32_t data = META_DATA; + uint8_t index = TAG_INDEX; + memset(&tag_spec, 0, sizeof(struct rte_flow_item_tag)); + memset(&tag_mask, 0, sizeof(struct rte_flow_item_tag)); + + tag_spec.data = RTE_BE32(data); + tag_mask.data = RTE_BE32(0xffffffff); + tag_spec.index = index; + tag_mask.index = 0xff; + + items[items_counter].type = RTE_FLOW_ITEM_TYPE_TAG; + items[items_counter].spec = &tag_spec; + items[items_counter].mask = &tag_mask; +} diff --git a/app/test-flow-perf/items_gen.h b/app/test-flow-perf/items_gen.h new file mode 100644 index 0000000000..0b01385951 --- /dev/null +++ b/app/test-flow-perf/items_gen.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * This file contains the items related methods + * + * Copyright 2020 Mellanox Technologies, Ltd + */ + +#ifndef _ITEMS_GEN_ +#define _ITEMS_GEN_ + +#include +#include + +#include "user_parameters.h" + +void +add_ether(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_vlan(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_ipv4(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter, uint32_t src_ipv4); + +void +add_ipv6(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter, int src_ipv6); + +void +add_udp(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_tcp(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_vxlan(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_vxlan_gpe(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_gre(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_geneve(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_gtp(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_meta_data(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +void +add_meta_tag(struct rte_flow_item items[MAX_ITEMS_NUM], + uint8_t items_counter); + +#endif diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c index 156b9ef553..59dc5ae0f4 100644 --- a/app/test-flow-perf/main.c +++ b/app/test-flow-perf/main.c @@ -49,29 +49,119 @@ #include #include +#include "flow_gen.h" #include "user_parameters.h" -static uint32_t nb_lcores; +#define MAX_ITERATIONS 100 + +struct rte_flow *flow; +static uint8_t flow_group; + +static uint16_t flow_items; +static uint16_t flow_actions; +static uint8_t flow_attrs; +static volatile bool force_quit; +static volatile bool dump_iterations; static struct rte_mempool *mbuf_mp; +static uint32_t nb_lcores; +static uint32_t flows_count; +static uint32_t iterations_number; static void usage(char *progname) { printf("\nusage: %s", progname); + printf("\nControl configurations:\n"); + printf(" --flows-count=N: to set the number of needed" + " flows to insert, default is 4,000,000\n"); + printf(" --dump-iterations: To print rates for each" + " iteration\n"); + + printf("To set flow attributes:\n"); + printf(" --ingress: set ingress attribute in flows\n"); + printf(" --egress: set egress attribute in flows\n"); + printf(" --transfer: set transfer attribute in flows\n"); + printf(" --group=N: set group for all flows," + " default is 0\n"); + + printf("To set flow items:\n"); + printf(" --ether: add ether layer in flow items\n"); + printf(" --vlan: add vlan layer in flow items\n"); + printf(" --ipv4: add ipv4 layer in flow items\n"); + printf(" --ipv6: add ipv6 
layer in flow items\n"); + printf(" --tcp: add tcp layer in flow items\n"); + printf(" --udp: add udp layer in flow items\n"); + printf(" --vxlan: add vxlan layer in flow items\n"); + printf(" --vxlan-gpe: add vxlan-gpe layer in flow items\n"); + printf(" --gre: add gre layer in flow items\n"); + printf(" --geneve: add geneve layer in flow items\n"); + printf(" --gtp: add gtp layer in flow items\n"); + printf(" --meta: add meta layer in flow items\n"); + printf(" --tag: add tag layer in flow items\n"); + + printf("To set flow actions:\n"); + printf(" --port-id: add port-id action in flow actions\n"); + printf(" --rss: add rss action in flow actions\n"); + printf(" --queue: add queue action in flow actions\n"); + printf(" --jump: add jump action in flow actions\n"); + printf(" --mark: add mark action in flow actions\n"); + printf(" --count: add count action in flow actions\n"); + printf(" --set-meta: add set meta action in flow actions\n"); + printf(" --set-tag: add set tag action in flow actions\n"); + printf(" --drop: add drop action in flow actions\n"); + printf(" --hairpin-queue: add hairpin-queue action in flow actions\n"); + printf(" --hairpin-rss: add hairping-rss action in flow actions\n"); } static void args_parse(int argc, char **argv) { char **argvopt; - int opt; + int n, opt; int opt_idx; static struct option lgopts[] = { /* Control */ { "help", 0, 0, 0 }, + { "flows-count", 1, 0, 0 }, + { "dump-iterations", 0, 0, 0 }, + /* Attributes */ + { "ingress", 0, 0, 0 }, + { "egress", 0, 0, 0 }, + { "transfer", 0, 0, 0 }, + { "group", 1, 0, 0 }, + /* Items */ + { "ether", 0, 0, 0 }, + { "vlan", 0, 0, 0 }, + { "ipv4", 0, 0, 0 }, + { "ipv6", 0, 0, 0 }, + { "tcp", 0, 0, 0 }, + { "udp", 0, 0, 0 }, + { "vxlan", 0, 0, 0 }, + { "vxlan-gpe", 0, 0, 0 }, + { "gre", 0, 0, 0 }, + { "geneve", 0, 0, 0 }, + { "gtp", 0, 0, 0 }, + { "meta", 0, 0, 0 }, + { "tag", 0, 0, 0 }, + /* Actions */ + { "port-id", 0, 0, 0 }, + { "rss", 0, 0, 0 }, + { "queue", 0, 0, 0 }, + { "jump", 0, 0, 0 }, + { "mark", 0, 0, 0 }, + { "count", 0, 0, 0 }, + { "set-meta", 0, 0, 0 }, + { "set-tag", 0, 0, 0 }, + { "drop", 0, 0, 0 }, + { "hairpin-queue", 0, 0, 0 }, + { "hairpin-rss", 0, 0, 0 }, }; + flow_items = 0; + flow_actions = 0; + flow_attrs = 0; argvopt = argv; + printf(":: Flow -> "); while ((opt = getopt_long(argc, argvopt, "", lgopts, &opt_idx)) != EOF) { switch (opt) { @@ -80,6 +170,140 @@ args_parse(int argc, char **argv) usage(argv[0]); rte_exit(EXIT_SUCCESS, "Displayed help\n"); } + /* Attributes */ + if (!strcmp(lgopts[opt_idx].name, "ingress")) { + flow_attrs |= INGRESS; + printf("ingress "); + } + if (!strcmp(lgopts[opt_idx].name, "egress")) { + flow_attrs |= EGRESS; + printf("egress "); + } + if (!strcmp(lgopts[opt_idx].name, "transfer")) { + flow_attrs |= TRANSFER; + printf("transfer "); + } + if (!strcmp(lgopts[opt_idx].name, "group")) { + n = atoi(optarg); + if (n >= 0) + flow_group = n; + else + rte_exit(EXIT_SUCCESS, + "flow group should be >= 0"); + printf("group %d ", flow_group); + } + /* Items */ + if (!strcmp(lgopts[opt_idx].name, "ether")) { + flow_items |= ETH_ITEM; + printf("ether / "); + } + if (!strcmp(lgopts[opt_idx].name, "ipv4")) { + flow_items |= IPV4_ITEM; + printf("ipv4 / "); + } + if (!strcmp(lgopts[opt_idx].name, "vlan")) { + flow_items |= VLAN_ITEM; + printf("vlan / "); + } + if (!strcmp(lgopts[opt_idx].name, "ipv6")) { + flow_items |= IPV6_ITEM; + printf("ipv6 / "); + } + if (!strcmp(lgopts[opt_idx].name, "tcp")) { + flow_items |= TCP_ITEM; + printf("tcp / "); + } + if 
(!strcmp(lgopts[opt_idx].name, "udp")) { + flow_items |= UDP_ITEM; + printf("udp / "); + } + if (!strcmp(lgopts[opt_idx].name, "vxlan")) { + flow_items |= VXLAN_ITEM; + printf("vxlan / "); + } + if (!strcmp(lgopts[opt_idx].name, "vxlan-gpe")) { + flow_items |= VXLAN_GPE_ITEM; + printf("vxlan-gpe / "); + } + if (!strcmp(lgopts[opt_idx].name, "gre")) { + flow_items |= GRE_ITEM; + printf("gre / "); + } + if (!strcmp(lgopts[opt_idx].name, "geneve")) { + flow_items |= GENEVE_ITEM; + printf("geneve / "); + } + if (!strcmp(lgopts[opt_idx].name, "gtp")) { + flow_items |= GTP_ITEM; + printf("gtp / "); + } + if (!strcmp(lgopts[opt_idx].name, "meta")) { + flow_items |= META_ITEM; + printf("meta / "); + } + if (!strcmp(lgopts[opt_idx].name, "tag")) { + flow_items |= TAG_ITEM; + printf("tag / "); + } + /* Actions */ + if (!strcmp(lgopts[opt_idx].name, "port-id")) { + flow_actions |= PORT_ID_ACTION; + printf("port-id / "); + } + if (!strcmp(lgopts[opt_idx].name, "rss")) { + flow_actions |= RSS_ACTION; + printf("rss / "); + } + if (!strcmp(lgopts[opt_idx].name, "hairpin-rss")) { + flow_actions |= HAIRPIN_RSS_ACTION; + printf("hairpin-rss / "); + } + if (!strcmp(lgopts[opt_idx].name, "queue")) { + flow_actions |= QUEUE_ACTION; + printf("queue / "); + } + if (!strcmp(lgopts[opt_idx].name, "hairpin-queue")) { + flow_actions |= HAIRPIN_QUEUE_ACTION; + printf("hairpin-queue / "); + } + if (!strcmp(lgopts[opt_idx].name, "jump")) { + flow_actions |= JUMP_ACTION; + printf("jump / "); + } + if (!strcmp(lgopts[opt_idx].name, "mark")) { + flow_actions |= MARK_ACTION; + printf("mark / "); + } + if (!strcmp(lgopts[opt_idx].name, "count")) { + flow_actions |= COUNT_ACTION; + printf("count / "); + } + if (!strcmp(lgopts[opt_idx].name, "set-meta")) { + flow_actions |= META_ACTION; + printf("set-meta / "); + } + if (!strcmp(lgopts[opt_idx].name, "set-tag")) { + flow_actions |= TAG_ACTION; + printf("set-tag / "); + } + if (!strcmp(lgopts[opt_idx].name, "drop")) { + flow_actions |= DROP_ACTION; + printf("drop / "); + } + + /* Control */ + if (!strcmp(lgopts[opt_idx].name, "flows-count")) { + n = atoi(optarg); + if (n > (int) iterations_number) + flows_count = n; + else { + printf("\n\nflows_count should be > %d", + iterations_number); + rte_exit(EXIT_SUCCESS, " "); + } + } + if (!strcmp(lgopts[opt_idx].name, "dump-iterations")) + dump_iterations = true; break; default: usage(argv[0]); @@ -88,6 +312,127 @@ args_parse(int argc, char **argv) break; } } + printf("end_flow\n"); +} + +static void +print_flow_error(struct rte_flow_error error) +{ + printf("Flow can't be created %d message: %s\n", + error.type, + error.message ? 
error.message : "(no stated reason)"); +} + +static inline void +flows_handler(void) +{ + struct rte_flow_error error; + clock_t start_iter, end_iter; + double cpu_time_used = 0; + double flows_rate; + double cpu_time_per_iter[MAX_ITERATIONS]; + double delta; + uint16_t nr_ports; + uint32_t i; + int port_id; + int iter_id; + uint32_t eagain_counter = 0; + + nr_ports = rte_eth_dev_count_avail(); + + for (i = 0; i < MAX_ITERATIONS; i++) + cpu_time_per_iter[i] = -1; + + if (iterations_number > flows_count) + iterations_number = flows_count; + + printf(":: Flows Count per port: %d\n", flows_count); + + for (port_id = 0; port_id < nr_ports; port_id++) { + if (flow_group > 0) { + /* + * Create global rule to jumo into flow_group + * This way the app will avoid the default rules + * + * Golbal rule: + * group 0 eth / end actions jump group + * + */ + flow = generate_flow(port_id, 0, flow_attrs, ETH_ITEM, + JUMP_ACTION, flow_group, 0, &error); + + if (!flow) { + print_flow_error(error); + rte_exit(EXIT_FAILURE, "error in creating flow"); + } + } + + /* Insertion Rate */ + printf("Flows insertion on port = %d\n", port_id); + start_iter = clock(); + for (i = 0; i < flows_count; i++) { + do { + rte_errno = 0; + flow = generate_flow(port_id, flow_group, + flow_attrs, flow_items, flow_actions, + JUMP_ACTION_TABLE, i, &error); + if (!flow) + eagain_counter++; + } while (rte_errno == EAGAIN); + + if (force_quit) + i = flows_count; + + if (!flow) { + print_flow_error(error); + rte_exit(EXIT_FAILURE, "error in creating flow"); + } + + if (i && !((i + 1) % iterations_number)) { + /* Save the insertion rate of each iter */ + end_iter = clock(); + delta = (double) (end_iter - start_iter); + iter_id = ((i + 1) / iterations_number) - 1; + cpu_time_per_iter[iter_id] = + delta / CLOCKS_PER_SEC; + cpu_time_used += cpu_time_per_iter[iter_id]; + start_iter = clock(); + } + } + + /* Iteration rate per iteration */ + if (dump_iterations) + for (i = 0; i < MAX_ITERATIONS; i++) { + if (cpu_time_per_iter[i] == -1) + continue; + delta = (double)(iterations_number / + cpu_time_per_iter[i]); + flows_rate = delta / 1000; + printf(":: Iteration #%d: %d flows " + "in %f sec[ Rate = %f K/Sec ]\n", + i, iterations_number, + cpu_time_per_iter[i], flows_rate); + } + + /* Insertion rate for all flows */ + flows_rate = ((double) (flows_count / cpu_time_used) / 1000); + printf("\n:: Total flow insertion rate -> %f K/Sec\n", + flows_rate); + printf(":: The time for creating %d in flows %f seconds\n", + flows_count, cpu_time_used); + printf(":: EAGAIN counter = %d\n", eagain_counter); + } +} + +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + printf("Error: Stats are wrong due to sudden signal!\n\n"); + force_quit = true; + } } static void @@ -96,6 +441,8 @@ init_port(void) int ret; uint16_t i, j; uint16_t port_id; + uint16_t nr_queues; + bool hairpin_flag = false; uint16_t nr_ports = rte_eth_dev_count_avail(); struct rte_eth_hairpin_conf hairpin_conf = { .peer_count = 1, @@ -115,6 +462,13 @@ init_port(void) struct rte_eth_rxconf rxq_conf; struct rte_eth_dev_info dev_info; + nr_queues = RXQs; + if (flow_actions & HAIRPIN_QUEUE_ACTION || + flow_actions & HAIRPIN_RSS_ACTION) { + nr_queues = RXQs + HAIRPIN_QUEUES; + hairpin_flag = true; + } + if (nr_ports == 0) rte_exit(EXIT_FAILURE, "Error: no port detected\n"); mbuf_mp = rte_pktmbuf_pool_create("mbuf_pool", @@ -134,8 +488,8 @@ init_port(void) port_conf.txmode.offloads &= 
dev_info.tx_offload_capa; printf(":: initializing port: %d\n", port_id); - ret = rte_eth_dev_configure(port_id, RXQs + HAIRPIN_QUEUES, - TXQs + HAIRPIN_QUEUES, &port_conf); + ret = rte_eth_dev_configure(port_id, nr_queues, + nr_queues, &port_conf); if (ret < 0) rte_exit(EXIT_FAILURE, ":: cannot configure device: err=%d, port=%u\n", @@ -173,26 +527,30 @@ init_port(void) ":: promiscuous mode enable failed: err=%s, port=%u\n", rte_strerror(-ret), port_id); - for (i = RXQs, j = 0; i < RXQs + HAIRPIN_QUEUES; i++, j++) { - hairpin_conf.peers[0].port = port_id; - hairpin_conf.peers[0].queue = j + TXQs; - ret = rte_eth_rx_hairpin_queue_setup(port_id, i, - NR_RXD, &hairpin_conf); - if (ret != 0) - rte_exit(EXIT_FAILURE, - ":: Hairpin rx queue setup failed: err=%d, port=%u\n", - ret, port_id); - } + if (hairpin_flag) { + for (i = RXQs, j = 0; + i < RXQs + HAIRPIN_QUEUES; i++, j++) { + hairpin_conf.peers[0].port = port_id; + hairpin_conf.peers[0].queue = j + TXQs; + ret = rte_eth_rx_hairpin_queue_setup(port_id, i, + NR_RXD, &hairpin_conf); + if (ret != 0) + rte_exit(EXIT_FAILURE, + ":: Hairpin rx queue setup failed: err=%d, port=%u\n", + ret, port_id); + } - for (i = TXQs, j = 0; i < TXQs + HAIRPIN_QUEUES; i++, j++) { - hairpin_conf.peers[0].port = port_id; - hairpin_conf.peers[0].queue = j + RXQs; - ret = rte_eth_tx_hairpin_queue_setup(port_id, i, - NR_TXD, &hairpin_conf); - if (ret != 0) - rte_exit(EXIT_FAILURE, - ":: Hairpin tx queue setup failed: err=%d, port=%u\n", - ret, port_id); + for (i = TXQs, j = 0; + i < TXQs + HAIRPIN_QUEUES; i++, j++) { + hairpin_conf.peers[0].port = port_id; + hairpin_conf.peers[0].queue = j + RXQs; + ret = rte_eth_tx_hairpin_queue_setup(port_id, i, + NR_TXD, &hairpin_conf); + if (ret != 0) + rte_exit(EXIT_FAILURE, + ":: Hairpin tx queue setup failed: err=%d, port=%u\n", + ret, port_id); + } } ret = rte_eth_dev_start(port_id); @@ -219,6 +577,15 @@ main(int argc, char **argv) if (ret < 0) rte_exit(EXIT_FAILURE, "EAL init failed\n"); + force_quit = false; + dump_iterations = false; + flows_count = 4000000; + iterations_number = 100000; + flow_group = 0; + + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + argc -= ret; argv += ret; @@ -232,6 +599,8 @@ main(int argc, char **argv) if (nb_lcores <= 1) rte_exit(EXIT_FAILURE, "This app needs at least two cores\n"); + flows_handler(); + RTE_LCORE_FOREACH_SLAVE(lcore_id) if (rte_eal_wait_lcore(lcore_id) < 0) diff --git a/app/test-flow-perf/meson.build b/app/test-flow-perf/meson.build index ec9bb3b3aa..b3941f5c2d 100644 --- a/app/test-flow-perf/meson.build +++ b/app/test-flow-perf/meson.build @@ -5,7 +5,15 @@ # # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' +name = 'flow_perf' +allow_experimental_apis = true +cflags += '-Wno-deprecated-declarations' +cflags += '-Wunused-function' sources = files( + 'actions_gen.c', + 'flow_gen.c', + 'items_gen.c', 'main.c', ) +deps += ['ethdev'] diff --git a/app/test-flow-perf/user_parameters.h b/app/test-flow-perf/user_parameters.h index 56ec7f47b5..1d157430b6 100644 --- a/app/test-flow-perf/user_parameters.h +++ b/app/test-flow-perf/user_parameters.h @@ -14,3 +14,18 @@ #define MBUF_CACHE_SIZE 512 #define NR_RXD 256 #define NR_TXD 256 + +/** Items/Actions parameters **/ +#define JUMP_ACTION_TABLE 2 +#define VLAN_VALUE 1 +#define VNI_VALUE 1 +#define GRE_PROTO 0x6558 +#define META_DATA 1 +#define TAG_INDEX 0 +#define PORT_ID_DST 1 +#define MARK_ID 1 +#define TEID_VALUE 1 + +/** Flow items/acctions max size **/ +#define 
MAX_ITEMS_NUM 20 +#define MAX_ACTIONS_NUM 20 diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst index 30ce1b6cc0..62e038c430 100644 --- a/doc/guides/tools/flow-perf.rst +++ b/doc/guides/tools/flow-perf.rst @@ -4,7 +4,19 @@ RTE Flow performance tool ========================= -Application for rte_flow performance testing. +Application for rte_flow performance testing. The application provide the +ability to test insertion rate of specific rte_flow rule, by stressing it +to the NIC, and calculate the insertion rate. + +The application offers some options in the command line, to configure +which rule to apply. + +After that the application will start producing rules with same pattern +but increasing the outer IP source address by 1 each time, thus it will +give different flow each time, and all other items will have open masks. + +The current design have single core insertion rate. In the future we may +have a multi core insertion rate measurement support in the app. Compiling the Application @@ -61,9 +73,179 @@ a ``--`` separator: .. code-block:: console - sudo ./test-flow-perf -n 4 -w 08:00.0,dv_flow_en=1 -- + sudo ./flow_perf -n 4 -w 08:00.0,dv_flow_en=1 -- --ingress --ether --ipv4 --queue --flows-count=1000000 The command line options are: * ``--help`` Display a help message and quit. + +* ``--flows-count=N`` + Set the number of needed flows to insert, + where 1 <= N <= "number of flows". + The default value is 4,000,000. + +* ``--dump-iterations`` + Print rates for each iteration of flows. + Default iteration is 1,00,000. + + +Attributes: + +* ``--ingress`` + Set Ingress attribute to all flows attributes. + +* ``--egress`` + Set Egress attribute to all flows attributes. + +* ``--transfer`` + Set Transfer attribute to all flows attributes. + +* ``--group=N`` + Set group for all flows, where N >= 0. + Default group is 0. + +Items: + +* ``--ether`` + Add Ether item to all flows items, This item have open mask. + +* ``--vlan`` + Add VLAN item to all flows items, + This item have VLAN value defined in user_parameters.h + under ``VNI_VALUE`` with full mask, default value = 1. + Other fields are open mask. + +* ``--ipv4`` + Add IPv4 item to all flows items, + This item have incremental source IP, with full mask. + Other fields are open mask. + +* ``--ipv6`` + Add IPv6 item to all flows item, + This item have incremental source IP, with full mask. + Other fields are open mask. + +* ``--tcp`` + Add TCP item to all flows items, This item have open mask. + +* ``--udp`` + Add UDP item to all flows items, This item have open mask. + +* ``--vxlan`` + Add VXLAN item to all flows items, + This item have VNI value defined in user_parameters.h + under ``VNI_VALUE`` with full mask, default value = 1. + Other fields are open mask. + +* ``--vxlan-gpe`` + Add VXLAN-GPE item to all flows items, + This item have VNI value defined in user_parameters.h + under ``VNI_VALUE`` with full mask, default value = 1. + Other fields are open mask. + +* ``--gre`` + Add GRE item to all flows items, + This item have protocol value defined in user_parameters.h + under ``GRE_PROTO`` with full mask, default protocol = 0x6558 "Ether" + Other fields are open mask. + +* ``--geneve`` + Add GENEVE item to all flows items, + This item have VNI value defined in user_parameters.h + under ``VNI_VALUE`` with full mask, default value = 1. + Other fields are open mask. 
+ +* ``--gtp`` + Add GTP item to all flows items, + This item have TEID value defined in user_parameters.h + under ``TEID_VALUE`` with full mask, default value = 1. + Other fields are open mask. + +* ``--meta`` + Add Meta item to all flows items, + This item have data value defined in user_parameters.h + under ``META_DATA`` with full mask, default value = 1. + Other fields are open mask. + +* ``--tag`` + Add Tag item to all flows items, + This item have data value defined in user_parameters.h + under ``META_DATA`` with full mask, default value = 1. + + Also it have tag value defined in user_parameters.h + under ``TAG_INDEX`` with full mask, default value = 0. + Other fields are open mask. + + +Actions: + +* ``--port-id`` + Add port redirection action to all flows actions. + Port redirection destination is defined in user_parameters.h + under PORT_ID_DST, default value = 1. + +* ``--rss`` + Add RSS action to all flows actions, + The queues in RSS action will be all queues configured + in the app. + +* ``--queue`` + Add queue action to all flows items, + The queue will change in round robin state for each flow. + + For example: + The app running with 4 RX queues + Flow #0: queue index 0 + Flow #1: queue index 1 + Flow #2: queue index 2 + Flow #3: queue index 3 + Flow #4: queue index 0 + ... + +* ``--jump`` + Add jump action to all flows actions. + Jump action destination is defined in user_parameters.h + under ``JUMP_ACTION_TABLE``, default value = 2. + +* ``--mark`` + Add mark action to all flows actions. + Mark action id is defined in user_parameters.h + under ``MARK_ID``, default value = 1. + +* ``--count`` + Add count action to all flows actions. + +* ``--set-meta`` + Add set-meta action to all flows actions. + Meta data is defined in user_parameters.h under ``META_DATA`` + with full mask, default value = 1. + +* ``--set-tag`` + Add set-tag action to all flows actions. + Meta data is defined in user_parameters.h under ``META_DATA`` + with full mask, default value = 1. + + Tag index is defined in user_parameters.h under ``TAG_INDEX`` + with full mask, default value = 0. + +* ``--drop`` + Add drop action to all flows actions. + +* ``--hairpin-queue`` + Add hairpin queue action to all flows actions. + The queue will change in round robin state for each flow. + + For example: + The app running with 4 RX hairpin queues and 4 normal RX queues + Flow #0: queue index 4 + Flow #1: queue index 5 + Flow #2: queue index 6 + Flow #3: queue index 7 + Flow #4: queue index 4 + ... + +* ``--hairpin-rss`` + Add hairpin RSS action to all flows actions. + The queues in RSS action will be all hairpin queues configured + in the app. 
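
To get a concrete sense of how the items and actions above combine, here is one possible invocation (illustrative only: the PCI address and flows count are placeholders, and the binary name follows the ``flow_perf`` example earlier in this guide):

.. code-block:: console

    sudo ./flow_perf -n 4 -w 08:00.0,dv_flow_en=1 -- \
        --ingress --ether --ipv4 --udp --vxlan \
        --rss --mark --count \
        --flows-count=1000000 --dump-iterations
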
From patchwork Thu Apr 9 15:42:55 2020
X-Patchwork-Submitter: Wisam Jaddo
X-Patchwork-Id: 68059
X-Patchwork-Delegate: thomas@monjalon.net
From: Wisam Jaddo
To: dev@dpdk.org, jackmin@mellanox.com, jerinjacobk@gmail.com
Cc: thomas@monjalon.net
Date: Thu, 9 Apr 2020 15:42:55 +0000
Message-Id: <20200409154257.11539-3-wisamm@mellanox.com>
In-Reply-To: <20200409154257.11539-1-wisamm@mellanox.com>
References: <1584452772-31147-1-git-send-email-wisamm@mellanox.com> <20200409154257.11539-1-wisamm@mellanox.com>
Subject: [dpdk-dev] [PATCH 3/5] app/test-flow-perf: add deletion rate calculation

Add the ability to test the deletion rate in the flow performance
application. This feature is disabled by default, and can be enabled by
adding "--deletion-rate" to the application command line options.
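
For example, a deletion rate measurement could be combined with an
insertion run as follows (hypothetical command line; the device address
and flows count are placeholders):

    sudo ./flow_perf -n 4 -w 08:00.0 -- \
        --ingress --ether --ipv4 --queue \
        --flows-count=1000000 --deletion-rate
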
Signed-off-by: Wisam Jaddo Reviewed-by: Xiaoyu Min --- app/test-flow-perf/main.c | 87 ++++++++++++++++++++++++++++++++++ doc/guides/tools/flow-perf.rst | 4 ++ 2 files changed, 91 insertions(+) diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c index 59dc5ae0f4..84f2c0c39b 100644 --- a/app/test-flow-perf/main.c +++ b/app/test-flow-perf/main.c @@ -62,6 +62,7 @@ static uint16_t flow_actions; static uint8_t flow_attrs; static volatile bool force_quit; static volatile bool dump_iterations; +static volatile bool delete_flag; static struct rte_mempool *mbuf_mp; static uint32_t nb_lcores; static uint32_t flows_count; @@ -75,6 +76,8 @@ static void usage(char *progname) " flows to insert, default is 4,000,000\n"); printf(" --dump-iterations: To print rates for each" " iteration\n"); + printf(" --deletion-rate: Enable deletion rate" + " calculations\n"); printf("To set flow attributes:\n"); printf(" --ingress: set ingress attribute in flows\n"); @@ -123,6 +126,7 @@ args_parse(int argc, char **argv) { "help", 0, 0, 0 }, { "flows-count", 1, 0, 0 }, { "dump-iterations", 0, 0, 0 }, + { "deletion-rate", 0, 0, 0 }, /* Attributes */ { "ingress", 0, 0, 0 }, { "egress", 0, 0, 0 }, @@ -304,6 +308,8 @@ args_parse(int argc, char **argv) } if (!strcmp(lgopts[opt_idx].name, "dump-iterations")) dump_iterations = true; + if (!strcmp(lgopts[opt_idx].name, "deletion-rate")) + delete_flag = true; break; default: usage(argv[0]); @@ -323,9 +329,75 @@ print_flow_error(struct rte_flow_error error) error.message ? error.message : "(no stated reason)"); } +static inline void +destroy_flows(int port_id, struct rte_flow **flow_list) +{ + struct rte_flow_error error; + clock_t start_iter, end_iter; + double cpu_time_used = 0; + double flows_rate; + double cpu_time_per_iter[MAX_ITERATIONS]; + double delta; + uint32_t i; + int iter_id; + + for (i = 0; i < MAX_ITERATIONS; i++) + cpu_time_per_iter[i] = -1; + + if (iterations_number > flows_count) + iterations_number = flows_count; + + /* Deletion Rate */ + printf("Flows Deletion on port = %d\n", port_id); + start_iter = clock(); + for (i = 0; i < flows_count; i++) { + if (!flow_list[i]) + break; + + memset(&error, 0x33, sizeof(error)); + if (rte_flow_destroy(port_id, flow_list[i], &error)) { + print_flow_error(error); + rte_exit(EXIT_FAILURE, "Error in deleting flow"); + } + + if (i && !((i + 1) % iterations_number)) { + /* Save the deletion rate of each iter */ + end_iter = clock(); + delta = (double) (end_iter - start_iter); + iter_id = ((i + 1) / iterations_number) - 1; + cpu_time_per_iter[iter_id] = + delta / CLOCKS_PER_SEC; + cpu_time_used += cpu_time_per_iter[iter_id]; + start_iter = clock(); + } + } + + /* Deletion rate per iteration */ + if (dump_iterations) + for (i = 0; i < MAX_ITERATIONS; i++) { + if (cpu_time_per_iter[i] == -1) + continue; + delta = (double)(iterations_number / + cpu_time_per_iter[i]); + flows_rate = delta / 1000; + printf(":: Iteration #%d: %d flows " + "in %f sec[ Rate = %f K/Sec ]\n", + i, iterations_number, + cpu_time_per_iter[i], flows_rate); + } + + /* Deletion rate for all flows */ + flows_rate = ((double) (flows_count / cpu_time_used) / 1000); + printf("\n:: Total flow deletion rate -> %f K/Sec\n", + flows_rate); + printf(":: The time for deleting %d in flows %f seconds\n", + flows_count, cpu_time_used); +} + static inline void flows_handler(void) { + struct rte_flow **flow_list; struct rte_flow_error error; clock_t start_iter, end_iter; double cpu_time_used = 0; @@ -337,6 +409,7 @@ flows_handler(void) int port_id; int iter_id; 
uint32_t eagain_counter = 0; + uint32_t flow_index; nr_ports = rte_eth_dev_count_avail(); @@ -348,7 +421,14 @@ flows_handler(void) printf(":: Flows Count per port: %d\n", flows_count); + flow_list = rte_zmalloc("flow_list", + (sizeof(struct rte_flow *) * flows_count) + 1, 0); + if (flow_list == NULL) + rte_exit(EXIT_FAILURE, "No Memory available!"); + for (port_id = 0; port_id < nr_ports; port_id++) { + flow_index = 0; + if (flow_group > 0) { /* * Create global rule to jumo into flow_group @@ -365,6 +445,7 @@ flows_handler(void) print_flow_error(error); rte_exit(EXIT_FAILURE, "error in creating flow"); } + flow_list[flow_index++] = flow; } /* Insertion Rate */ @@ -388,6 +469,8 @@ flows_handler(void) rte_exit(EXIT_FAILURE, "error in creating flow"); } + flow_list[flow_index++] = flow; + if (i && !((i + 1) % iterations_number)) { /* Save the insertion rate of each iter */ end_iter = clock(); @@ -421,6 +504,9 @@ flows_handler(void) printf(":: The time for creating %d in flows %f seconds\n", flows_count, cpu_time_used); printf(":: EAGAIN counter = %d\n", eagain_counter); + + if (delete_flag) + destroy_flows(port_id, flow_list); } } @@ -579,6 +665,7 @@ main(int argc, char **argv) force_quit = false; dump_iterations = false; + delete_flag = false; flows_count = 4000000; iterations_number = 100000; flow_group = 0; diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst index 62e038c430..e07e659df5 100644 --- a/doc/guides/tools/flow-perf.rst +++ b/doc/guides/tools/flow-perf.rst @@ -18,6 +18,8 @@ give different flow each time, and all other items will have open masks. The current design have single core insertion rate. In the future we may have a multi core insertion rate measurement support in the app. +The application also provide the ability to measure rte flow deletion rate. + Compiling the Application ========================= @@ -89,6 +91,8 @@ The command line options are: Print rates for each iteration of flows. Default iteration is 1,00,000. +* ``--deletion-rate`` + Enable deletion rate calculations. 
Attributes:

From patchwork Thu Apr 9 15:42:56 2020
X-Patchwork-Submitter: Wisam Jaddo
X-Patchwork-Id: 68060
X-Patchwork-Delegate: thomas@monjalon.net
From: Wisam Jaddo
To: dev@dpdk.org, jackmin@mellanox.com, jerinjacobk@gmail.com
Cc: thomas@monjalon.net, Suanming Mou
Date: Thu, 9 Apr 2020 15:42:56 +0000
Message-Id: <20200409154257.11539-4-wisamm@mellanox.com>
In-Reply-To: <20200409154257.11539-1-wisamm@mellanox.com>
References: <1584452772-31147-1-git-send-email-wisamm@mellanox.com> <20200409154257.11539-1-wisamm@mellanox.com>
Subject: [dpdk-dev] [PATCH 4/5] app/test-flow-perf: add memory dump to app

Introduce a new feature that dumps the memory statistics of each socket,
and a total over all sockets, before and after the flow creation. This
gives two main advantages:
1- Check the memory consumption for a large number of flows
   ("insertion rate" scenario alone).
2- Check that there is no memory leakage after doing insertion and then
   deletion.
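
For example, the memory dump can be combined with insertion and deletion
measurements as follows (hypothetical command line; the device address
and flows count are placeholders):

    sudo ./flow_perf -n 4 -w 08:00.0 -- \
        --ingress --ether --ipv4 --queue \
        --flows-count=1000000 --deletion-rate --dump-socket-mem

When both options are given, the per-socket statistics are printed before
and after the flow handling, so the reported memory allocation change is
expected to stay close to zero if deletion releases everything that
insertion allocated.
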
Signed-off-by: Suanming Mou Signed-off-by: Wisam Jaddo Reviewed-by: Xiaoyu Min --- app/test-flow-perf/main.c | 69 ++++++++++++++++++++++++++++++++++ doc/guides/tools/flow-perf.rst | 6 ++- 2 files changed, 74 insertions(+), 1 deletion(-) diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c index 84f2c0c39b..438fbf850a 100644 --- a/app/test-flow-perf/main.c +++ b/app/test-flow-perf/main.c @@ -62,6 +62,7 @@ static uint16_t flow_actions; static uint8_t flow_attrs; static volatile bool force_quit; static volatile bool dump_iterations; +static volatile bool dump_socket_mem_flag; static volatile bool delete_flag; static struct rte_mempool *mbuf_mp; static uint32_t nb_lcores; @@ -78,6 +79,7 @@ static void usage(char *progname) " iteration\n"); printf(" --deletion-rate: Enable deletion rate" " calculations\n"); + printf(" --dump-socket-mem: to dump all socket memory\n"); printf("To set flow attributes:\n"); printf(" --ingress: set ingress attribute in flows\n"); @@ -127,6 +129,7 @@ args_parse(int argc, char **argv) { "flows-count", 1, 0, 0 }, { "dump-iterations", 0, 0, 0 }, { "deletion-rate", 0, 0, 0 }, + { "dump-socket-mem", 0, 0, 0 }, /* Attributes */ { "ingress", 0, 0, 0 }, { "egress", 0, 0, 0 }, @@ -310,6 +313,8 @@ args_parse(int argc, char **argv) dump_iterations = true; if (!strcmp(lgopts[opt_idx].name, "deletion-rate")) delete_flag = true; + if (!strcmp(lgopts[opt_idx].name, "dump-socket-mem")) + dump_socket_mem_flag = true; break; default: usage(argv[0]); @@ -321,6 +326,62 @@ args_parse(int argc, char **argv) printf("end_flow\n"); } +/* Dump the socket memory statistics on console */ +static size_t +dump_socket_mem(FILE *f) +{ + struct rte_malloc_socket_stats socket_stats; + unsigned int i = 0; + size_t total = 0; + size_t alloc = 0; + size_t free = 0; + unsigned int n_alloc = 0; + unsigned int n_free = 0; + bool active_nodes = false; + + + for (i = 0; i < RTE_MAX_NUMA_NODES; i++) { + if (rte_malloc_get_socket_stats(i, &socket_stats) || + !socket_stats.heap_totalsz_bytes) + continue; + active_nodes = true; + total += socket_stats.heap_totalsz_bytes; + alloc += socket_stats.heap_allocsz_bytes; + free += socket_stats.heap_freesz_bytes; + n_alloc += socket_stats.alloc_count; + n_free += socket_stats.free_count; + if (dump_socket_mem_flag) { + fprintf(f, "::::::::::::::::::::::::::::::::::::::::"); + fprintf(f, + "\nSocket %u:\nsize(M) total: %.6lf\nalloc:" + " %.6lf(%.3lf%%)\nfree: %.6lf" + "\nmax: %.6lf" + "\ncount alloc: %u\nfree: %u\n", + i, + socket_stats.heap_totalsz_bytes / 1.0e6, + socket_stats.heap_allocsz_bytes / 1.0e6, + (double)socket_stats.heap_allocsz_bytes * 100 / + (double)socket_stats.heap_totalsz_bytes, + socket_stats.heap_freesz_bytes / 1.0e6, + socket_stats.greatest_free_size / 1.0e6, + socket_stats.alloc_count, + socket_stats.free_count); + fprintf(f, "::::::::::::::::::::::::::::::::::::::::"); + } + } + if (dump_socket_mem_flag && active_nodes) { + fprintf(f, + "\nTotal: size(M)\ntotal: %.6lf" + "\nalloc: %.6lf(%.3lf%%)\nfree: %.6lf" + "\ncount alloc: %u\nfree: %u\n", + total / 1.0e6, alloc / 1.0e6, + (double)alloc * 100 / (double)total, free / 1.0e6, + n_alloc, n_free); + fprintf(f, "::::::::::::::::::::::::::::::::::::::::\n"); + } + return alloc; +} + static void print_flow_error(struct rte_flow_error error) { @@ -657,6 +718,7 @@ main(int argc, char **argv) uint16_t nr_ports; int ret; struct rte_flow_error error; + int64_t alloc, last_alloc; nr_ports = rte_eth_dev_count_avail(); ret = rte_eal_init(argc, argv); @@ -666,6 +728,7 @@ main(int argc, char 
**argv) force_quit = false; dump_iterations = false; delete_flag = false; + dump_socket_mem_flag = false; flows_count = 4000000; iterations_number = 100000; flow_group = 0; @@ -686,7 +749,13 @@ main(int argc, char **argv) if (nb_lcores <= 1) rte_exit(EXIT_FAILURE, "This app needs at least two cores\n"); + last_alloc = (int64_t)dump_socket_mem(stdout); flows_handler(); + alloc = (int64_t)dump_socket_mem(stdout); + + if (last_alloc) + fprintf(stdout, ":: Memory allocation change(M): %.6lf\n", + (alloc - last_alloc) / 1.0e6); RTE_LCORE_FOREACH_SLAVE(lcore_id) diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst index e07e659df5..28d452fd06 100644 --- a/doc/guides/tools/flow-perf.rst +++ b/doc/guides/tools/flow-perf.rst @@ -18,7 +18,8 @@ give different flow each time, and all other items will have open masks. The current design have single core insertion rate. In the future we may have a multi core insertion rate measurement support in the app. -The application also provide the ability to measure rte flow deletion rate. +The application also provide the ability to measure rte flow deletion rate, +in addition to memory consumption before and after the flows creation. Compiling the Application @@ -94,6 +95,9 @@ The command line options are: * ``--deletion-rate`` Enable deletion rate calculations. +* ``--dump-socket-mem`` + Dump the memory stats for each socket before the insertion and after. + Attributes: * ``--ingress`` From patchwork Thu Apr 9 15:42:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Wisam Jaddo X-Patchwork-Id: 68061 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 185E8A0597; Thu, 9 Apr 2020 17:44:15 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8AA881D168; Thu, 9 Apr 2020 17:43:25 +0200 (CEST) Received: from EUR02-AM5-obe.outbound.protection.outlook.com (mail-eopbgr00075.outbound.protection.outlook.com [40.107.0.75]) by dpdk.org (Postfix) with ESMTP id 6625F1D153 for ; Thu, 9 Apr 2020 17:43:21 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=afc2YZ3i87LbHP1IXYgYvbTNJRy75my1ebV7nW4JqGKfptdIzaJGjiZzXHwV5EaN8HJcBV3vnFmGzmMZc9X8rfn1ohexWQgSC/r6smqKc4b2oNvzzQ7kWe4kzsrLzMIuNq7fGuBlcUyEiqoaJCkiiXRJN1aiBFOsHOFkxcfCoioamNoukAhpWdIAwi6v0IDZ/pe6i8jBFsGcXLwm/4zaQZ0po1iiMyMStxOCX1zznqpn3oQa6Fuzxmt34vocREFHi+ZNhUC8NUpczoA6ohElOoqO9S2hAlXtq6Xl6Emrdk1OSWlNKrhA3NLOcQzhBK7sPH7VtQzL/WcCIe/rfOcuGA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=T5cochS7Nc3TBYA6BO28EcNEJO4WjHilypIV/GJmNy8=; b=AyILMIvSzx/KFJIC9hdEf6oVoWmWEEjQszaolvFaXKuEnzIstB3DNLr7mbLVhKs5ODr8/douF2WXxQdhC/GeGVfUl0P6fHihUtbxeHCw0VPHpf4FXJoW24OMQJtkDTENO+5ZZOfrCkZcn1HY4WW4QuEPk/vzFKvLM+Mi/k2dE8qi/FC+eT0hjg6DZuS+ObU8hG4vfeMschMtK2NP7YBRa0PMt10apnYvtDEk4d7pVfhZbDnVbGIkoKoWXdvTHDR767kmeINhz/KPTusLIMmsT5pS9HE9wfL4suvtdPvKkBnWXkehdJFDi66b04UtZ/zc30n5HK0jtnTLnTnXBP5hUw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=mellanox.com; dmarc=pass action=none header.from=mellanox.com; dkim=pass header.d=mellanox.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Mellanox.com; s=selector1; 
From: Wisam Jaddo
To: dev@dpdk.org, jackmin@mellanox.com, jerinjacobk@gmail.com
Cc: thomas@monjalon.net
Date: Thu, 9 Apr 2020 15:42:57 +0000
Message-Id: <20200409154257.11539-5-wisamm@mellanox.com>
In-Reply-To: <20200409154257.11539-1-wisamm@mellanox.com>
References: <1584452772-31147-1-git-send-email-wisamm@mellanox.com> <20200409154257.11539-1-wisamm@mellanox.com>
Subject: [dpdk-dev] [PATCH 5/5] app/test-flow-perf: add packet forwarding support

Introduce packet forwarding support in the app for additional performance
measurements. The measurements are reported in terms of packets per
second. Forwarding starts only after the insertion/deletion operations
have finished. Both single-core and multi-core measurements are
supported.

Signed-off-by: Wisam Jaddo Reviewed-by: Xiaoyu Min --- app/test-flow-perf/main.c | 300 +++++++++++++++++++++++++++++++++ doc/guides/tools/flow-perf.rst | 6 + 2 files changed, 306 insertions(+) diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c index 438fbf850a..96d9a71086 100644 --- a/app/test-flow-perf/main.c +++ b/app/test-flow-perf/main.c @@ -60,14 +60,45 @@ static uint8_t flow_group; static uint16_t flow_items; static uint16_t flow_actions; static uint8_t flow_attrs; + static volatile bool force_quit; static volatile bool dump_iterations; static volatile bool dump_socket_mem_flag; static volatile bool delete_flag; +static volatile bool enable_fwd; + static struct rte_mempool *mbuf_mp; static uint32_t nb_lcores; static uint32_t flows_count; static uint32_t iterations_number; +static uint32_t nb_lcores; + +#define MAX_PKT_BURST 32 +#define LCORE_MODE_PKT 1 +#define LCORE_MODE_STATS 2 +#define MAX_STREAMS 64 +#define MAX_LCORES 64 + +struct stream { + int tx_port; + int tx_queue; + int rx_port; + int rx_queue; +}; + +struct lcore_info { + int mode; + int streams_nb; + struct stream streams[MAX_STREAMS]; + /* stats */ + uint64_t tx_pkts; + uint64_t tx_drops; + uint64_t rx_pkts; + struct rte_mbuf *pkts[MAX_PKT_BURST]; +} __attribute__((__aligned__(64))); /* let it be cacheline aligned */ + + +static struct lcore_info lcore_infos[MAX_LCORES]; static void usage(char *progname) { @@ -80,6 +111,8 @@ static void usage(char *progname) printf(" --deletion-rate: Enable deletion rate" " calculations\n"); printf(" --dump-socket-mem: to dump all socket memory\n"); + printf(" --enable-fwd: to enable packets forwarding" + " after insertion\n"); printf("To set flow attributes:\n"); printf(" --ingress: set ingress attribute in flows\n"); @@ -130,6 +163,7 @@ args_parse(int argc, char **argv) { "dump-iterations", 0, 0, 0 }, { "deletion-rate", 0, 0, 0 }, { "dump-socket-mem", 0, 0, 0 }, + { "enable-fwd", 0, 0, 0 }, /* Attributes */ { "ingress", 0, 0, 0 }, { "egress", 0, 0, 0 }, @@ -315,6 +349,8 @@ args_parse(int argc, char **argv) delete_flag = true; if (!strcmp(lgopts[opt_idx].name, "dump-socket-mem")) dump_socket_mem_flag = true; + if (!strcmp(lgopts[opt_idx].name, "enable-fwd")) + enable_fwd = true; break; default:
usage(argv[0]); @@ -582,6 +618,265 @@ signal_handler(int signum) } } +static inline uint16_t +do_rx(struct lcore_info *li, uint16_t rx_port, uint16_t rx_queue) +{ + uint16_t cnt = 0; + cnt = rte_eth_rx_burst(rx_port, rx_queue, li->pkts, MAX_PKT_BURST); + li->rx_pkts += cnt; + return cnt; +} + +static inline void +do_tx(struct lcore_info *li, uint16_t cnt, uint16_t tx_port, + uint16_t tx_queue) +{ + uint16_t nr_tx = 0; + uint16_t i; + + nr_tx = rte_eth_tx_burst(tx_port, tx_queue, li->pkts, cnt); + li->tx_pkts += nr_tx; + li->tx_drops += cnt - nr_tx; + + for (i = nr_tx; i < cnt; i++) + rte_pktmbuf_free(li->pkts[i]); +} + +/* + * Method to convert numbers into pretty numbers that easy + * to read. The design here is to add comma after each three + * digits and set all of this inside buffer. + * + * For example if n = 1799321, the output will be + * 1,799,321 after this method which is easier to read. + */ +static char * +pretty_number(uint64_t n, char *buf) +{ + char p[6][4]; + int i = 0; + int off = 0; + + while (n > 1000) { + sprintf(p[i], "%03d", (int)(n % 1000)); + n /= 1000; + i += 1; + } + + sprintf(p[i++], "%d", (int)n); + + while (i--) + off += sprintf(buf + off, "%s,", p[i]); + buf[strlen(buf) - 1] = '\0'; + + return buf; +} + +static void +packet_per_second_stats(void) +{ + struct lcore_info *old; + struct lcore_info *li, *oli; + int nr_lines = 0; + int i; + + old = rte_zmalloc("old", + sizeof(struct lcore_info) * MAX_LCORES, 0); + if (old == NULL) + rte_exit(EXIT_FAILURE, "No Memory available!"); + + memcpy(old, lcore_infos, + sizeof(struct lcore_info) * MAX_LCORES); + + while (!force_quit) { + uint64_t total_tx_pkts = 0; + uint64_t total_rx_pkts = 0; + uint64_t total_tx_drops = 0; + uint64_t tx_delta, rx_delta, drops_delta; + char buf[3][32]; + int nr_valid_core = 0; + + sleep(1); + + if (nr_lines) { + char go_up_nr_lines[16]; + + sprintf(go_up_nr_lines, "%c[%dA\r", 27, nr_lines); + printf("%s\r", go_up_nr_lines); + } + + printf("\n%6s %16s %16s %16s\n", "core", "tx", "tx drops", "rx"); + printf("%6s %16s %16s %16s\n", "------", "----------------", + "----------------", "----------------"); + nr_lines = 3; + for (i = 0; i < MAX_LCORES; i++) { + li = &lcore_infos[i]; + oli = &old[i]; + if (li->mode != LCORE_MODE_PKT) + continue; + + tx_delta = li->tx_pkts - oli->tx_pkts; + rx_delta = li->rx_pkts - oli->rx_pkts; + drops_delta = li->tx_drops - oli->tx_drops; + printf("%6d %16s %16s %16s\n", i, + pretty_number(tx_delta, buf[0]), + pretty_number(drops_delta, buf[1]), + pretty_number(rx_delta, buf[2])); + + total_tx_pkts += tx_delta; + total_rx_pkts += rx_delta; + total_tx_drops += drops_delta; + + nr_valid_core++; + nr_lines += 1; + } + + if (nr_valid_core > 1) { + printf("%6s %16s %16s %16s\n", "total", + pretty_number(total_tx_pkts, buf[0]), + pretty_number(total_tx_drops, buf[1]), + pretty_number(total_rx_pkts, buf[2])); + nr_lines += 1; + } + + memcpy(old, lcore_infos, + sizeof(struct lcore_info) * MAX_LCORES); + } +} + +static int +start_forwarding(void *data __rte_unused) +{ + int lcore = rte_lcore_id(); + int stream_id; + uint16_t cnt; + struct lcore_info *li = &lcore_infos[lcore]; + + if (!li->mode) + return 0; + + if (li->mode == LCORE_MODE_STATS) { + printf(":: started stats on lcore %u\n", lcore); + packet_per_second_stats(); + return 0; + } + + while (!force_quit) + for (stream_id = 0; stream_id < MAX_STREAMS; stream_id++) { + if (li->streams[stream_id].rx_port == -1) + continue; + + cnt = do_rx(li, + li->streams[stream_id].rx_port, + li->streams[stream_id].rx_queue); + if 
(cnt) + do_tx(li, cnt, + li->streams[stream_id].tx_port, + li->streams[stream_id].tx_queue); + } + return 0; +} + +static void +init_lcore_info(void) +{ + int i, j; + unsigned int lcore; + uint16_t nr_port; + uint16_t queue; + int port; + int stream_id = 0; + int streams_per_core; + int unassigned_streams; + int nb_fwd_streams; + nr_port = rte_eth_dev_count_avail(); + + /* First logical core is reserved for stats printing */ + lcore = rte_get_next_lcore(-1, 0, 0); + lcore_infos[lcore].mode = LCORE_MODE_STATS; + + /* + * Initialize all cores + * All cores at first must have -1 value in all streams + * This means that this stream is not used, or not set + * yet. + */ + for (i = 0; i < MAX_LCORES; i++) + for (j = 0; j < MAX_STREAMS; j++) { + lcore_infos[i].streams[j].tx_port = -1; + lcore_infos[i].streams[j].rx_port = -1; + lcore_infos[i].streams[j].tx_queue = -1; + lcore_infos[i].streams[j].rx_queue = -1; + lcore_infos[i].streams_nb = 0; + } + + /* + * Calculate the total streams count. + * Also distribute those streams count between the available + * logical cores except first core, since it's reserved for + * stats prints. + */ + nb_fwd_streams = nr_port * RXQs; + if ((int)(nb_lcores - 1) >= nb_fwd_streams) + for (i = 0; i < (int)(nb_lcores - 1); i++) { + lcore = rte_get_next_lcore(lcore, 0, 0); + lcore_infos[lcore].streams_nb = 1; + } + else { + streams_per_core = nb_fwd_streams / (nb_lcores - 1); + unassigned_streams = nb_fwd_streams % (nb_lcores - 1); + for (i = 0; i < (int)(nb_lcores - 1); i++) { + lcore = rte_get_next_lcore(lcore, 0, 0); + lcore_infos[lcore].streams_nb = streams_per_core; + if (unassigned_streams) { + lcore_infos[lcore].streams_nb++; + unassigned_streams--; + } + } + } + + /* + * Set the streams for the cores according to each logical + * core stream count. + * The streams is built on the design of what received should + * forward as well, this means that if you received packets on + * port 0 queue 0 then the same queue should forward the + * packets, using the same logical core. 
+ */ + lcore = rte_get_next_lcore(-1, 0, 0); + for (port = 0; port < nr_port; port++) { + /** Create FWD stream **/ + for (queue = 0; queue < RXQs; queue++) { + if (!lcore_infos[lcore].streams_nb || + !(stream_id % lcore_infos[lcore].streams_nb)) { + lcore = rte_get_next_lcore(lcore, 0, 0); + lcore_infos[lcore].mode = LCORE_MODE_PKT; + stream_id = 0; + } + lcore_infos[lcore].streams[stream_id].rx_queue = queue; + lcore_infos[lcore].streams[stream_id].tx_queue = queue; + lcore_infos[lcore].streams[stream_id].rx_port = port; + lcore_infos[lcore].streams[stream_id].tx_port = port; + stream_id++; + } + } + + /** Print all streams **/ + printf(":: Stream -> core id[N]: (rx_port, rx_queue)->(tx_port, tx_queue)\n"); + for (i = 0; i < MAX_LCORES; i++) + for (j = 0; j < MAX_STREAMS; j++) { + /** No streams for this core **/ + if (lcore_infos[i].streams[j].tx_port == -1) + break; + printf("Stream -> core id[%d]: (%d,%d)->(%d,%d)\n", + i, + lcore_infos[i].streams[j].rx_port, + lcore_infos[i].streams[j].rx_queue, + lcore_infos[i].streams[j].tx_port, + lcore_infos[i].streams[j].tx_queue); + } +} + static void init_port(void) { @@ -757,6 +1052,11 @@ main(int argc, char **argv) fprintf(stdout, ":: Memory allocation change(M): %.6lf\n", (alloc - last_alloc) / 1.0e6); + if (enable_fwd) { + init_lcore_info(); + rte_eal_mp_remote_launch(start_forwarding, NULL, CALL_MASTER); + } + RTE_LCORE_FOREACH_SLAVE(lcore_id) if (rte_eal_wait_lcore(lcore_id) < 0) diff --git a/doc/guides/tools/flow-perf.rst b/doc/guides/tools/flow-perf.rst index 28d452fd06..ecd760de81 100644 --- a/doc/guides/tools/flow-perf.rst +++ b/doc/guides/tools/flow-perf.rst @@ -21,6 +21,8 @@ have a multi core insertion rate measurement support in the app. The application also provide the ability to measure rte flow deletion rate, in addition to memory consumption before and after the flows creation. +The app supports single and multi core performance measurements. + Compiling the Application ========================= @@ -98,6 +100,10 @@ The command line options are: * ``--dump-socket-mem`` Dump the memory stats for each socket before the insertion and after. +* ``enable-fwd`` + Enable packets forwarding after insertion/deletion operations. + + Attributes: * ``--ingress``
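
As an illustration of the forwarding mode added by this last patch
(hypothetical command line; the device address and flows count are
placeholders), forwarding can be enabled after the insertion
measurements:

    sudo ./flow_perf -n 4 -w 08:00.0 -- \
        --ingress --ether --ipv4 --queue \
        --flows-count=1000000 --enable-fwd

Since the first available lcore is reserved for statistics printing, the
remaining lcores share the forwarding streams. With 2 ports, 4 RX queues
per port and 4 lcores (assumed values, not defaults taken from the code),
the resulting 8 streams would be spread over the 3 forwarding lcores as
3, 3 and 2 streams, which is what the remainder handling in
init_lcore_info() produces.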