From patchwork Tue Sep 19 10:10:06 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 131619
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
CC: "Aman Singh", Yuying Zhang
Subject: [PATCH] testpmd: add hairpin-map parameter
Date: Tue, 19 Sep 2023 13:10:06 +0300
Message-ID: <20230919101006.19936-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions

The testpmd hairpin implementation always selects the next valid port to
complete a hairpin binding, which limits the possible hairpin
configurations. The new parameter allows explicit selection of the Rx
and Tx ports and queues in a hairpin configuration.

The new `hairpin-map` parameter takes 5 values, separated by `:`:

`--hairpin-map=Rx port id:Rx queue:Tx port id:Tx queue:queues number`

The testpmd operator can provide several `hairpin-map` parameters to
define several hairpin maps. Example:

dpdk-testpmd -- \
    --rxq=2 --txq=2 --hairpinq=2 --hairpin-mode=0x12 \
    --hairpin-map=0:2:1:2:1 \ # [1]
    --hairpin-map=0:3:2:2:3   # [2]

Hairpin map [1] binds Rx port 0, queue 2 with Tx port 1, queue 2.
Hairpin map [2] binds
Rx port 0, queue 3 with Tx port 2, queue 2,
Rx port 0, queue 4 with Tx port 2, queue 3,
Rx port 0, queue 5 with Tx port 2, queue 4.

The new `hairpin-map` parameter is optional. If it is omitted, testpmd
creates "default" hairpin maps.
Signed-off-by: Gregory Etelson
---
 app/test-pmd/parameters.c             |  63 ++++++++
 app/test-pmd/testpmd.c                | 212 ++++++++++++++++++--------
 app/test-pmd/testpmd.h                |  18 +++
 doc/guides/testpmd_app_ug/run_app.rst |   3 +
 4 files changed, 230 insertions(+), 66 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index a9ca58339d..675de81861 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -206,6 +206,12 @@ usage(char* progname)
 	printf("  --hairpin-mode=0xXX: bitmask set the hairpin port mode.\n"
 	       "    0x10 - explicit Tx rule, 0x02 - hairpin ports paired\n"
 	       "    0x01 - hairpin ports loop, 0x00 - hairpin port self\n");
+	printf("  --hairpin-map=rxpi:rxq:txpi:txq:n: hairpin map.\n"
+	       "    rxpi - Rx port index.\n"
+	       "    rxq  - Rx queue.\n"
+	       "    txpi - Tx port index.\n"
+	       "    txq  - Tx queue.\n"
+	       "    n    - hairpin queues number.\n");
 }

 #ifdef RTE_LIB_CMDLINE
@@ -588,6 +594,55 @@ parse_link_speed(int n)
 	return speed;
 }

+static __rte_always_inline
+char *parse_hairpin_map_entry(char *input, char **next)
+{
+	char *tail = strchr(input, ':');
+
+	if (!tail)
+		return NULL;
+	tail[0] = '\0';
+	*next = tail + 1;
+	return input;
+}
+
+static int
+parse_hairpin_map(char *hpmap)
+{
+	/*
+	 * Testpmd hairpin map format:
+	 *
+	 */
+	char *head, *next = hpmap;
+	struct hairpin_map *map = calloc(1, sizeof(*map));
+
+	if (!map)
+		return -ENOMEM;
+
+	head = parse_hairpin_map_entry(next, &next);
+	if (!head)
+		goto err;
+	map->rx_port = atoi(head);
+	head = parse_hairpin_map_entry(next, &next);
+	if (!head)
+		goto err;
+	map->rxq_head = atoi(head);
+	head = parse_hairpin_map_entry(next, &next);
+	if (!head)
+		goto err;
+	map->tx_port = atoi(head);
+	head = parse_hairpin_map_entry(next, &next);
+	if (!head)
+		goto err;
+	map->txq_head = atoi(head);
+	map->qnum = atoi(next);
+	hairpin_add_multiport_map(map);
+	return 0;
+err:
+	free(map);
+	return -EINVAL;
+}
+
 void
 launch_args_parse(int argc, char** argv)
 {
@@ -663,6 +718,7 @@ launch_args_parse(int argc, char** argv)
 		{ "txd",			1, 0, 0 },
 		{ "hairpinq",			1, 0, 0 },
 		{ "hairpin-mode",		1, 0, 0 },
+		{ "hairpin-map",		1, 0, 0 },
 		{ "burst",			1, 0, 0 },
 		{ "flowgen-clones",		1, 0, 0 },
 		{ "flowgen-flows",		1, 0, 0 },
@@ -1111,6 +1167,13 @@ launch_args_parse(int argc, char** argv)
 				else
 					hairpin_mode = (uint32_t)n;
 			}
+			if (!strcmp(lgopts[opt_idx].name, "hairpin-map")) {
+				hairpin_multiport_mode = true;
+				ret = parse_hairpin_map(optarg);
+				if (ret)
+					rte_exit(EXIT_FAILURE, "invalid hairpin map\n");
+
+			}
 			if (!strcmp(lgopts[opt_idx].name, "burst")) {
 				n = atoi(optarg);
 				if (n == 0) {
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 938ca035d4..2ba1727c51 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -434,6 +434,16 @@ uint8_t clear_ptypes = true;
 /* Hairpin ports configuration mode. */
 uint32_t hairpin_mode;

+bool hairpin_multiport_mode = false;
+
+static LIST_HEAD(, hairpin_map) hairpin_map_head = LIST_HEAD_INITIALIZER();
+
+void
+hairpin_add_multiport_map(struct hairpin_map *map)
+{
+	LIST_INSERT_HEAD(&hairpin_map_head, map, entry);
+}
+
 /* Pretty printing of ethdev events */
 static const char * const eth_event_desc[] = {
 	[RTE_ETH_EVENT_UNKNOWN] = "unknown",
@@ -2677,28 +2687,107 @@ port_is_started(portid_t port_id)
 #define HAIRPIN_MODE_TX_LOCKED_MEMORY RTE_BIT32(16)
 #define HAIRPIN_MODE_TX_RTE_MEMORY RTE_BIT32(17)
-
-/* Configure the Rx and Tx hairpin queues for the selected port. */
 static int
-setup_hairpin_queues(portid_t pi, portid_t p_pi, uint16_t cnt_pi)
+port_config_hairpin_rxq(portid_t pi, uint16_t peer_tx_port,
+			queueid_t rxq_head, queueid_t txq_head,
+			uint16_t qcount, uint32_t manual_bind)
 {
-	queueid_t qi;
+	int diag;
+	queueid_t i, qi;
+	uint32_t tx_explicit = !!(hairpin_mode & 0x10);
+	uint32_t force_mem = !!(hairpin_mode & HAIRPIN_MODE_RX_FORCE_MEMORY);
+	uint32_t locked_mem = !!(hairpin_mode & HAIRPIN_MODE_RX_LOCKED_MEMORY);
+	uint32_t rte_mem = !!(hairpin_mode & HAIRPIN_MODE_RX_RTE_MEMORY);
+	struct rte_port *port = &ports[pi];
 	struct rte_eth_hairpin_conf hairpin_conf = {
 		.peer_count = 1,
 	};
-	int i;
+
+	for (qi = rxq_head, i = 0; qi < rxq_head + qcount; qi++) {
+		hairpin_conf.peers[0].port = peer_tx_port;
+		hairpin_conf.peers[0].queue = i + txq_head;
+		hairpin_conf.manual_bind = manual_bind;
+		hairpin_conf.tx_explicit = tx_explicit;
+		hairpin_conf.force_memory = force_mem;
+		hairpin_conf.use_locked_device_memory = locked_mem;
+		hairpin_conf.use_rte_memory = rte_mem;
+		diag = rte_eth_rx_hairpin_queue_setup
+			(pi, qi, nb_rxd, &hairpin_conf);
+		i++;
+		if (diag == 0)
+			continue;
+
+		/* Failed to set up Rx queue, return */
+		if (port->port_status == RTE_PORT_HANDLING)
+			port->port_status = RTE_PORT_STOPPED;
+		else
+			fprintf(stderr,
+				"Port %d can not be set back to stopped\n", pi);
+		fprintf(stderr,
+			"Port %d failed to configure hairpin on rxq %u.\n"
+			"Peer port: %u peer txq: %u\n",
+			pi, qi, peer_tx_port, i);
+		/* try to reconfigure queues next time */
+		port->need_reconfig_queues = 1;
+		return -1;
+	}
+	return 0;
+}
+
+static int
+port_config_hairpin_txq(portid_t pi, uint16_t peer_rx_port,
+			queueid_t rxq_head, queueid_t txq_head,
+			uint16_t qcount, uint32_t manual_bind)
+{
 	int diag;
+	queueid_t i, qi;
+	uint32_t tx_explicit = !!(hairpin_mode & 0x10);
+	uint32_t force_mem = !!(hairpin_mode & HAIRPIN_MODE_TX_FORCE_MEMORY);
+	uint32_t locked_mem = !!(hairpin_mode & HAIRPIN_MODE_TX_LOCKED_MEMORY);
+	uint32_t rte_mem = !!(hairpin_mode & HAIRPIN_MODE_TX_RTE_MEMORY);
 	struct rte_port *port = &ports[pi];
+	struct rte_eth_hairpin_conf hairpin_conf = {
+		.peer_count = 1,
+	};
+
+	for (qi = txq_head, i = 0; qi < txq_head + qcount; qi++) {
+		hairpin_conf.peers[0].port = peer_rx_port;
+		hairpin_conf.peers[0].queue = i + rxq_head;
+		hairpin_conf.manual_bind = manual_bind;
+		hairpin_conf.tx_explicit = tx_explicit;
+		hairpin_conf.force_memory = force_mem;
+		hairpin_conf.use_locked_device_memory = locked_mem;
+		hairpin_conf.use_rte_memory = rte_mem;
+		diag = rte_eth_tx_hairpin_queue_setup
+			(pi, qi, nb_txd, &hairpin_conf);
+		i++;
+		if (diag == 0)
+			continue;
+
+		/* Failed to set up Tx queue, return */
+		if (port->port_status == RTE_PORT_HANDLING)
+			port->port_status = RTE_PORT_STOPPED;
+		else
+			fprintf(stderr,
+				"Port %d can not be set back to stopped\n", pi);
+		fprintf(stderr,
+			"Port %d failed to configure hairpin on txq %u.\n"
+			"Peer port: %u peer rxq: %u\n",
+			pi, qi, peer_rx_port, i);
+		/* try to reconfigure queues next time */
+		port->need_reconfig_queues = 1;
+		return -1;
+	}
+	return 0;
+}
+
+static int
+setup_legacy_hairpin_queues(portid_t pi, portid_t p_pi, uint16_t cnt_pi)
+{
+	int diag;
 	uint16_t peer_rx_port = pi;
 	uint16_t peer_tx_port = pi;
 	uint32_t manual = 1;
-	uint32_t tx_exp = hairpin_mode & 0x10;
-	uint32_t rx_force_memory = hairpin_mode & HAIRPIN_MODE_RX_FORCE_MEMORY;
-	uint32_t rx_locked_memory = hairpin_mode & HAIRPIN_MODE_RX_LOCKED_MEMORY;
-	uint32_t rx_rte_memory = hairpin_mode & HAIRPIN_MODE_RX_RTE_MEMORY;
-	uint32_t tx_force_memory = hairpin_mode & HAIRPIN_MODE_TX_FORCE_MEMORY;
-	uint32_t tx_locked_memory = hairpin_mode & HAIRPIN_MODE_TX_LOCKED_MEMORY;
-	uint32_t tx_rte_memory = hairpin_mode & HAIRPIN_MODE_TX_RTE_MEMORY;

 	if (!(hairpin_mode & 0xf)) {
 		peer_rx_port = pi;
@@ -2706,10 +2795,10 @@ setup_hairpin_queues(portid_t pi, portid_t p_pi, uint16_t cnt_pi)
 		manual = 0;
 	} else if (hairpin_mode & 0x1) {
 		peer_tx_port = rte_eth_find_next_owned_by(pi + 1,
-							  RTE_ETH_DEV_NO_OWNER);
+				RTE_ETH_DEV_NO_OWNER);
 		if (peer_tx_port >= RTE_MAX_ETHPORTS)
 			peer_tx_port = rte_eth_find_next_owned_by(0,
-								  RTE_ETH_DEV_NO_OWNER);
+					RTE_ETH_DEV_NO_OWNER);
 		if (p_pi != RTE_MAX_ETHPORTS) {
 			peer_rx_port = p_pi;
 		} else {
@@ -2725,69 +2814,60 @@ setup_hairpin_queues(portid_t pi, portid_t p_pi, uint16_t cnt_pi)
 			peer_rx_port = p_pi;
 		} else {
 			peer_rx_port = rte_eth_find_next_owned_by(pi + 1,
-								  RTE_ETH_DEV_NO_OWNER);
+					RTE_ETH_DEV_NO_OWNER);
 			if (peer_rx_port >= RTE_MAX_ETHPORTS)
 				peer_rx_port = pi;
 		}
 		peer_tx_port = peer_rx_port;
 		manual = 1;
 	}
+	diag = port_config_hairpin_txq(pi, peer_rx_port, nb_rxq, nb_txq,
+				       nb_hairpinq, manual);
+	if (diag)
+		return diag;
+	diag = port_config_hairpin_rxq(pi, peer_tx_port, nb_rxq, nb_txq,
+				       nb_hairpinq, manual);
+	if (diag)
+		return diag;
+	return 0;
+}

-	for (qi = nb_txq, i = 0; qi < nb_hairpinq + nb_txq; qi++) {
-		hairpin_conf.peers[0].port = peer_rx_port;
-		hairpin_conf.peers[0].queue = i + nb_rxq;
-		hairpin_conf.manual_bind = !!manual;
-		hairpin_conf.tx_explicit = !!tx_exp;
-		hairpin_conf.force_memory = !!tx_force_memory;
-		hairpin_conf.use_locked_device_memory = !!tx_locked_memory;
-		hairpin_conf.use_rte_memory = !!tx_rte_memory;
-		diag = rte_eth_tx_hairpin_queue_setup
-			(pi, qi, nb_txd, &hairpin_conf);
-		i++;
-		if (diag == 0)
-			continue;
-
-		/* Fail to setup rx queue, return */
-		if (port->port_status == RTE_PORT_HANDLING)
-			port->port_status = RTE_PORT_STOPPED;
-		else
-			fprintf(stderr,
-				"Port %d can not be set back to stopped\n", pi);
-		fprintf(stderr, "Fail to configure port %d hairpin queues\n",
-			pi);
-		/* try to reconfigure queues next time */
-		port->need_reconfig_queues = 1;
-		return -1;
-	}
-	for (qi = nb_rxq, i = 0; qi < nb_hairpinq + nb_rxq; qi++) {
-		hairpin_conf.peers[0].port = peer_tx_port;
-		hairpin_conf.peers[0].queue = i + nb_txq;
-		hairpin_conf.manual_bind = !!manual;
-		hairpin_conf.tx_explicit = !!tx_exp;
-		hairpin_conf.force_memory = !!rx_force_memory;
-		hairpin_conf.use_locked_device_memory = !!rx_locked_memory;
-		hairpin_conf.use_rte_memory = !!rx_rte_memory;
-		diag = rte_eth_rx_hairpin_queue_setup
-			(pi, qi, nb_rxd, &hairpin_conf);
-		i++;
-		if (diag == 0)
-			continue;
-
-		/* Fail to setup rx queue, return */
-		if (port->port_status == RTE_PORT_HANDLING)
-			port->port_status = RTE_PORT_STOPPED;
-		else
-			fprintf(stderr,
-				"Port %d can not be set back to stopped\n", pi);
-		fprintf(stderr, "Fail to configure port %d hairpin queues\n",
-			pi);
-		/* try to reconfigure queues next time */
-		port->need_reconfig_queues = 1;
-		return -1;
+static int
+setup_mapped_hairpin_queues(portid_t pi)
+{
+	int ret = 0;
+	struct hairpin_map *map;
+
+	LIST_FOREACH(map, &hairpin_map_head, entry) {
+		if (map->rx_port == pi) {
+			ret = port_config_hairpin_rxq(pi, map->tx_port,
+						      map->rxq_head,
+						      map->txq_head,
+						      map->qnum, true);
+			if (ret)
+				return ret;
+		}
+		if (map->tx_port == pi) {
+			ret = port_config_hairpin_txq(pi, map->rx_port,
+						      map->rxq_head,
+						      map->txq_head,
+						      map->qnum, true);
+			if (ret)
+				return ret;
+		}
 	}
 	return 0;
 }

+/* Configure the Rx and Tx hairpin queues for the selected port. */
+static int
+setup_hairpin_queues(portid_t pi, portid_t p_pi, uint16_t cnt_pi)
+{
+	return !hairpin_multiport_mode ?
+	       setup_legacy_hairpin_queues(pi, p_pi, cnt_pi) :
+	       setup_mapped_hairpin_queues(pi);
+}
+
 /* Configure the Rx with optional split. */
 int
 rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f1df6a8faf..208e8e9514 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -125,6 +125,24 @@ enum noisy_fwd_mode {
 	NOISY_FWD_MODE_MAX,
 };

+struct hairpin_map {
+	LIST_ENTRY(hairpin_map) entry; /**< List entry. */
+	portid_t rx_port; /**< Hairpin Rx port ID. */
+	portid_t tx_port; /**< Hairpin Tx port ID. */
+	uint16_t rxq_head; /**< Hairpin Rx queue head. */
+	uint16_t txq_head; /**< Hairpin Tx queue head. */
+	uint16_t qnum; /**< Hairpin queues number. */
+};
+
+/**
+ * The command line argument parser sets `hairpin_multiport_mode` to true
+ * when the explicit hairpin map configuration mode is used.
+ */
+extern bool hairpin_multiport_mode;
+
+/** Add a map to the hairpin maps list. */
+extern void hairpin_add_multiport_map(struct hairpin_map *map);
+
 /**
  * The data structure associated with RX and TX packet burst statistics
  * that are recorded for each forwarding stream.
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 6e9c552e76..a202c98b4c 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -566,6 +566,9 @@ The command line options are:
     The default value is 0. Hairpin will use single port mode and implicit Tx flow mode.

+*   ``--hairpin-map=Rx port id:Rx queue:Tx port id:Tx queue:queues number``
+
+    Set explicit hairpin configuration.

 Testpmd Multi-Process Command-line Options
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~