From patchwork Fri Feb 3 13:33:38 2023
X-Patchwork-Submitter: Jiawei Wang
X-Patchwork-Id: 123037
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawei Wang
To: Aman Singh, Yuying Zhang, Ferruh Yigit
Subject: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue API
Date: Fri, 3 Feb 2023 15:33:38 +0200
Message-ID: <20230203133339.30271-2-jiaweiw@nvidia.com>
In-Reply-To: <20230203133339.30271-1-jiaweiw@nvidia.com>
References: <20230203050717.46914-1-jiaweiw@nvidia.com> <20230203133339.30271-1-jiaweiw@nvidia.com>
When multiple physical ports are connected to a single DPDK port
(for example: kernel bonding, DPDK bonding, failsafe, etc.),
we want to know which physical port is used for Rx and Tx.

This patch maps a DPDK Tx queue to a physical port by adding a
tx_phy_affinity setting in the Tx queue configuration.
The affinity value is the ID of the physical port where packets
will be sent.
Value 0 means no affinity: traffic can be routed to any connected
physical port. This is the current default behavior.

The number of physical ports is reported by rte_eth_dev_info_get().

The new tx_phy_affinity field is added in a padding hole of the
rte_eth_txconf structure, so the size of rte_eth_txconf stays the
same. An ABI check rule is added to suppress the resulting false
warning.

Add the testpmd command line:

testpmd> port config (port_id) txq (queue_id) phy_affinity (value)

For example, with two physical ports connected to a single DPDK port
(port id 0), phy_affinity 1 stands for the first physical port and
phy_affinity 2 for the second physical port.
Use the commands below to configure the Tx physical affinity per
Tx queue:

 port config 0 txq 0 phy_affinity 1
 port config 0 txq 1 phy_affinity 1
 port config 0 txq 2 phy_affinity 2
 port config 0 txq 3 phy_affinity 2

These commands configure Tx queues 0 and 1 with phy affinity 1, so
packets sent on Tx queue 0 or Tx queue 1 leave through the first
physical port; similarly, packets sent on Tx queue 2 or Tx queue 3
leave through the second physical port.
Signed-off-by: Jiawei Wang
Acked-by: Ori Kam
---
 app/test-pmd/cmdline.c                      | 100 ++++++++++++++++++++
 app/test-pmd/config.c                       |   1 +
 devtools/libabigail.abignore                |   5 +
 doc/guides/rel_notes/release_23_03.rst      |   4 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
 lib/ethdev/rte_ethdev.h                     |  10 ++
 6 files changed, 133 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index cb8c174020..f771fcf8ac 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -776,6 +776,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
 			"    Cleanup txq mbufs for a specific Tx queue\n\n"
+
+			"port config (port_id) txq (queue_id) phy_affinity (value)\n"
+			"    Set the physical affinity value "
+			"on a specific Tx queue\n\n"
 		);
 	}
@@ -12633,6 +12637,101 @@ static cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
 	}
 };
 
+/* *** configure port txq phy_affinity value *** */
+struct cmd_config_tx_phy_affinity {
+	cmdline_fixed_string_t port;
+	cmdline_fixed_string_t config;
+	portid_t portid;
+	cmdline_fixed_string_t txq;
+	uint16_t qid;
+	cmdline_fixed_string_t phy_affinity;
+	uint8_t value;
+};
+
+static void
+cmd_config_tx_phy_affinity_parsed(void *parsed_result,
+				  __rte_unused struct cmdline *cl,
+				  __rte_unused void *data)
+{
+	struct cmd_config_tx_phy_affinity *res = parsed_result;
+	struct rte_eth_dev_info dev_info;
+	struct rte_port *port;
+	int ret;
+
+	if (port_id_is_invalid(res->portid, ENABLED_WARN))
+		return;
+
+	if (res->portid == (portid_t)RTE_PORT_ALL) {
+		printf("Invalid port id\n");
+		return;
+	}
+
+	port = &ports[res->portid];
+
+	if (strcmp(res->txq, "txq")) {
+		printf("Unknown parameter\n");
+		return;
+	}
+	if (tx_queue_id_is_invalid(res->qid))
+		return;
+
+	ret = eth_dev_info_get_print_err(res->portid, &dev_info);
+	if (ret != 0)
+		return;
+
+	if (dev_info.nb_phy_ports == 0) {
+		printf("Number of physical ports is 0 which is invalid for PHY Affinity\n");
+		return;
+	}
+	printf("The number of physical ports is %u\n", dev_info.nb_phy_ports);
+	if (dev_info.nb_phy_ports < res->value) {
+		printf("The PHY affinity value %u is invalid, exceeds the "
+		       "number of physical ports\n", res->value);
+		return;
+	}
+	port->txq[res->qid].conf.tx_phy_affinity = res->value;
+
+	cmd_reconfig_device_queue(res->portid, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 port, "port");
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_config =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 config, "config");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_portid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      portid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_txq =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 txq, "txq");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_qid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      qid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_hwport =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 phy_affinity, "phy_affinity");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      value, RTE_UINT8);
+
+static cmdline_parse_inst_t cmd_config_tx_phy_affinity = {
+	.f = cmd_config_tx_phy_affinity_parsed,
+	.data = (void *)0,
+	.help_str = "port config <port_id> txq <queue_id> phy_affinity <value>",
+	.tokens = {
+		(void *)&cmd_config_tx_phy_affinity_port,
+		(void *)&cmd_config_tx_phy_affinity_config,
+		(void *)&cmd_config_tx_phy_affinity_portid,
+		(void *)&cmd_config_tx_phy_affinity_txq,
+		(void *)&cmd_config_tx_phy_affinity_qid,
+		(void *)&cmd_config_tx_phy_affinity_hwport,
+		(void *)&cmd_config_tx_phy_affinity_value,
+		NULL,
+	},
+};
+
 /* ******************************************************************************** */
 
 /* list of instructions */
@@ -12866,6 +12965,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_show_port_cman_capa,
 	(cmdline_parse_inst_t *)&cmd_show_port_cman_config,
 	(cmdline_parse_inst_t *)&cmd_set_port_cman_config,
+	(cmdline_parse_inst_t *)&cmd_config_tx_phy_affinity,
 	NULL,
 };
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index acccb6b035..b83fb17cfa 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -936,6 +936,7 @@ port_infos_display(portid_t port_id)
 		printf("unknown\n");
 		break;
 	}
+	printf("Current number of physical ports: %u\n", dev_info.nb_phy_ports);
 }
 
 void
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..ac7d3fb2da 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -34,3 +34,8 @@
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Temporary exceptions till next major ABI version ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; Ignore fields inserted in padding hole of rte_eth_txconf
+[suppress_type]
+	name = rte_eth_txconf
+	has_data_member_inserted_between = {offset_of(tx_deferred_start), offset_of(offloads)}
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 73f5d94e14..e99bd2dcb6 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -55,6 +55,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added affinity for multiple physical ports connected to a single DPDK port.**
+
+  * Added Tx affinity in queue setup to map a physical port.
+
 * **Updated AMD axgbe driver.**
 
   * Added multi-process support.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 79a1fa9cb7..5c716f7679 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1605,6 +1605,19 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+config per queue Tx physical affinity
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure a per queue physical affinity value only on a specific Tx queue::
+
+   testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
+
+* ``phy_affinity``: physical port to use for sending,
+  when multiple physical ports are connected to
+  a single DPDK port.
+
+This command should be run when the port is stopped, otherwise it fails.
+
 Config VXLAN Encap outer layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c129ca1eaf..2fd971b7b5 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1138,6 +1138,14 @@ struct rte_eth_txconf {
 				      less free descriptors than this value. */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Affinity with one of the multiple physical ports connected to the DPDK port.
+	 * Value 0 means no affinity and traffic could be routed to any connected
+	 * physical port.
+	 * The first physical port is number 1 and so on.
+	 * Number of physical ports is reported by nb_phy_ports in rte_eth_dev_info.
+	 */
+	uint8_t tx_phy_affinity;
 	/**
 	 * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
@@ -1744,6 +1752,8 @@ struct rte_eth_dev_info {
 	/** Device redirection table size, the total number of entries. */
 	uint16_t reta_size;
 	uint8_t hash_key_size; /**< Hash key size in bytes */
+	/** Number of physical ports connected with DPDK port.
+	 */
+	uint8_t nb_phy_ports;
 	/** Bit mask of RSS offloads, the bit offset also means flow type */
 	uint64_t flow_type_rss_offloads;
 	struct rte_eth_rxconf default_rxconf; /**< Default Rx configuration */

From patchwork Fri Feb 3 13:33:39 2023
X-Patchwork-Submitter: Jiawei Wang
X-Patchwork-Id: 123038
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawei Wang
To: Aman Singh, Yuying Zhang, Ferruh Yigit
Subject: [PATCH v4 2/2] ethdev: add PHY affinity match item
Date: Fri, 3 Feb 2023 15:33:39 +0200
Message-ID: <20230203133339.30271-3-jiaweiw@nvidia.com>
In-Reply-To: <20230203133339.30271-1-jiaweiw@nvidia.com>
References: <20230203050717.46914-1-jiaweiw@nvidia.com> <20230203133339.30271-1-jiaweiw@nvidia.com>
When multiple physical ports are connected to a single DPDK port
(for example: kernel bonding, DPDK bonding, failsafe, etc.),
we want to know which physical port is used for Rx and Tx.

This patch allows mapping an Rx queue to a physical port by using a
flow rule. The new item is called RTE_FLOW_ITEM_TYPE_PHY_AFFINITY.

By using the phy affinity as a matching item in a flow rule, and
setting the same phy_affinity value on a Tx queue, a reply packet can
be sent from the same physical port as the one it was received on.

The physical affinity numbering starts from 1, so trying to match on
phy_affinity 0 results in an error.

Add the testpmd command line to match the new item:

 flow create 0 ingress group 0 pattern phy_affinity affinity is 1 /
 end actions queue index 0 / end

The above command creates a flow rule on a single DPDK port (port 0)
that matches packets received from the first physical port and
redirects them to Rx queue 0.

Signed-off-by: Jiawei Wang
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 28 +++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst          |  8 +++++
 doc/guides/rel_notes/release_23_03.rst      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 +++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 35 +++++++++++++++++++++
 6 files changed, 77 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0..5e9e617674 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -465,6 +465,8 @@ enum index {
 	ITEM_METER,
 	ITEM_METER_COLOR,
 	ITEM_METER_COLOR_NAME,
+	ITEM_PHY_AFFINITY,
+	ITEM_PHY_AFFINITY_VALUE,
 
 	/* Validate/create actions.
 	 */
 	ACTIONS,
@@ -1355,6 +1357,7 @@ static const enum index next_item[] = {
 	ITEM_L2TPV2,
 	ITEM_PPP,
 	ITEM_METER,
+	ITEM_PHY_AFFINITY,
 	END_SET,
 	ZERO,
 };
@@ -1821,6 +1824,12 @@ static const enum index item_meter[] = {
 	ZERO,
 };
 
+static const enum index item_phy_affinity[] = {
+	ITEM_PHY_AFFINITY_VALUE,
+	ITEM_NEXT,
+	ZERO,
+};
+
 static const enum index next_action[] = {
 	ACTION_END,
 	ACTION_VOID,
@@ -6443,6 +6452,22 @@ static const struct token token_list[] = {
 			ARGS_ENTRY(struct buffer, port)),
 		.call = parse_mp,
 	},
+	[ITEM_PHY_AFFINITY] = {
+		.name = "phy_affinity",
+		.help = "match on the physical port receiving the packets",
+		.priv = PRIV_ITEM(PHY_AFFINITY,
+				  sizeof(struct rte_flow_item_phy_affinity)),
+		.next = NEXT(item_phy_affinity),
+		.call = parse_vc,
+	},
+	[ITEM_PHY_AFFINITY_VALUE] = {
+		.name = "affinity",
+		.help = "physical affinity value",
+		.next = NEXT(item_phy_affinity, NEXT_ENTRY(COMMON_UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_phy_affinity,
+					affinity)),
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -10981,6 +11006,9 @@ flow_item_default_mask(const struct rte_flow_item *item)
 	case RTE_FLOW_ITEM_TYPE_METER_COLOR:
 		mask = &rte_flow_item_meter_color_mask;
 		break;
+	case RTE_FLOW_ITEM_TYPE_PHY_AFFINITY:
+		mask = &rte_flow_item_phy_affinity_mask;
+		break;
 	default:
 		break;
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803d..fa43b9bb66 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1544,6 +1544,14 @@ Matches Color Marker set by a Meter.
 
 - ``color``: Metering color marker.
 
+Item: ``PHY_AFFINITY``
+^^^^^^^^^^^^^^^^^^^^^^
+
+Matches on the physical port of the received packet.
+In case of multiple physical ports, the affinity numbering starts from 1.
+
+- ``affinity``: Physical affinity.
+
 Actions
 ~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index e99bd2dcb6..320b7f2efc 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -58,6 +58,7 @@ New Features
 * **Added affinity for multiple physical ports connected to a single DPDK port.**
 
   * Added Tx affinity in queue setup to map a physical port.
+  * Added Rx affinity flow matching of a physical port.
 
 * **Updated AMD axgbe driver.**
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 5c716f7679..6079ca1ed6 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3754,6 +3754,10 @@ This section lists supported pattern items and their attributes, if any.
 
   - ``color {value}``: meter color value (green/yellow/red).
 
+- ``phy_affinity``: match physical port.
+
+  - ``affinity {value}``: physical port (starts from 1).
+
 - ``send_to_kernel``: send packets to kernel.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..0c2d3b679b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -157,6 +157,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(L2TPV2, sizeof(struct rte_flow_item_l2tpv2)),
 	MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
 	MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)),
+	MK_FLOW_ITEM(PHY_AFFINITY, sizeof(struct rte_flow_item_phy_affinity)),
 };
 
 /** Generate flow_action[] entry. */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..da32f9e383 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -624,6 +624,15 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_meter_color.
 	 */
 	RTE_FLOW_ITEM_TYPE_METER_COLOR,
+
+	/**
+	 * Matches on the physical port of the received packet.
+	 * Used in case multiple physical ports are connected to the DPDK port.
+	 * First port is number 1.
+	 *
+	 * @see struct rte_flow_item_phy_affinity.
+	 */
+	RTE_FLOW_ITEM_TYPE_PHY_AFFINITY,
 };
 
 /**
@@ -2103,6 +2112,32 @@ static const struct rte_flow_item_meter_color rte_flow_item_meter_color_mask = {
 };
 #endif
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_PHY_AFFINITY
+ *
+ * For multiple physical ports connected to a single DPDK port,
+ * match the physical port receiving the packets.
+ */
+struct rte_flow_item_phy_affinity {
+	/**
+	 * Physical port receiving the packets.
+	 * Numbering starts from 1.
+	 * Number of physical ports is reported by nb_phy_ports in rte_eth_dev_info.
+	 */
+	uint8_t affinity;
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_PHY_AFFINITY. */
+#ifndef __cplusplus
+static const struct rte_flow_item_phy_affinity
+rte_flow_item_phy_affinity_mask = {
+	.affinity = 0xff,
+};
+#endif
+
 /**
  * Action types.
 *