From patchwork Mon Jul 12 10:24:40 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 95702
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Viacheslav Ovsiienko
Date: Mon, 12 Jul 2021 13:24:40 +0300
Message-ID: <20210712102440.12491-1-viacheslavo@nvidia.com>
In-Reply-To: <20210619154012.27295-1-viacheslavo@nvidia.com>
References: <20210619154012.27295-1-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2] app/testpmd: fix offloads for the newly attached port
List-Id: DPDK patches and discussions

For newly attached ports (attached with the "port attach" command), the
default offload settings configured on the application command line were
not applied, causing a port start failure after the attach. For example,
if the scatter offload was enabled on the command line and rxpkts was
configured for multiple segments, starting the newly attached port failed
because the scatter offload was not enabled in the new port's settings.

The missing code to apply the offloads to the new device and its queues
is added.
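The failing scenario can be sketched as a testpmd session (illustrative only: the PCI addresses are hypothetical, and the `--rx-offloads=0x2000` mask assumes DEV_RX_OFFLOAD_SCATTER is bit 13 on this DPDK version):

```shell
# Start testpmd with scatter Rx offload and multi-segment Rx packets
dpdk-testpmd -l 0-1 -a 0000:03:00.0 -- -i --rx-offloads=0x2000 --rxpkts=1024,1024

testpmd> port attach 0000:03:00.1
testpmd> port start 1   # failed before this fix: the scatter offload from
                        # the command line was not propagated to the new port
```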
The new local routine init_config_port_offloads() is introduced,
factoring out the shared part of the port offload initialization code.

Cc: stable@dpdk.org
Fixes: c9cce42876f5 ("ethdev: remove deprecated attach/detach functions")

Signed-off-by: Viacheslav Ovsiienko
---
v1: http://patches.dpdk.org/project/dpdk/patch/20210619154012.27295-1-viacheslavo@nvidia.com/
v2: comments addressed - common code is presented as dedicated routine

 app/test-pmd/testpmd.c | 142 +++++++++++++++++++----------------------
 1 file changed, 65 insertions(+), 77 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1cdd3cdd12..55aa8e504b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1417,21 +1417,74 @@ check_nb_hairpinq(queueid_t hairpinq)
 	return 0;
 }
 
+static void
+init_config_port_offloads(portid_t pid, uint32_t socket_id)
+{
+	struct rte_port *port = &ports[pid];
+	uint16_t data_size;
+	int ret;
+	int i;
+
+	port->dev_conf.txmode = tx_mode;
+	port->dev_conf.rxmode = rx_mode;
+
+	ret = eth_dev_info_get_print_err(pid, &port->dev_info);
+	if (ret != 0)
+		rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
+
+	ret = update_jumbo_frame_offload(pid);
+	if (ret != 0)
+		printf("Updating jumbo frame offload failed for port %u\n",
+		       pid);
+
+	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+		port->dev_conf.txmode.offloads &=
+			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	/* Apply Rx offloads configuration */
+	for (i = 0; i < port->dev_info.max_rx_queues; i++)
+		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
+	/* Apply Tx offloads configuration */
+	for (i = 0; i < port->dev_info.max_tx_queues; i++)
+		port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+
+	if (eth_link_speed)
+		port->dev_conf.link_speeds = eth_link_speed;
+
+	/* set flag to initialize port/queue */
+	port->need_reconfig = 1;
+	port->need_reconfig_queues = 1;
+	port->socket_id = socket_id;
+	port->tx_metadata = 0;
+
+	/*
+	 * Check for maximum number of segments per MTU.
+	 * Accordingly update the mbuf data size.
+	 */
+	if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
+	    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
+		data_size = rx_mode.max_rx_pkt_len /
+			port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+
+		if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
+			mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
+			TESTPMD_LOG(WARNING,
+				    "Configured mbuf size of the first segment %hu\n",
+				    mbuf_data_size[0]);
+		}
+	}
+}
+
 static void
 init_config(void)
 {
 	portid_t pid;
-	struct rte_port *port;
 	struct rte_mempool *mbp;
 	unsigned int nb_mbuf_per_pool;
 	lcoreid_t lc_id;
 	uint8_t port_per_socket[RTE_MAX_NUMA_NODES];
 	struct rte_gro_param gro_param;
 	uint32_t gso_types;
-	uint16_t data_size;
-	bool warning = 0;
-	int k;
-	int ret;
 
 	memset(port_per_socket,0,RTE_MAX_NUMA_NODES);
 
@@ -1455,30 +1508,14 @@ init_config(void)
 	}
 
 	RTE_ETH_FOREACH_DEV(pid) {
-		port = &ports[pid];
-		/* Apply default TxRx configuration for all ports */
-		port->dev_conf.txmode = tx_mode;
-		port->dev_conf.rxmode = rx_mode;
-
-		ret = eth_dev_info_get_print_err(pid, &port->dev_info);
-		if (ret != 0)
-			rte_exit(EXIT_FAILURE,
-				 "rte_eth_dev_info_get() failed\n");
-
-		ret = update_jumbo_frame_offload(pid);
-		if (ret != 0)
-			printf("Updating jumbo frame offload failed for port %u\n",
-			       pid);
+		uint32_t socket_id;
 
-		if (!(port->dev_info.tx_offload_capa &
-		      DEV_TX_OFFLOAD_MBUF_FAST_FREE))
-			port->dev_conf.txmode.offloads &=
-				~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 		if (numa_support) {
-			if (port_numa[pid] != NUMA_NO_CONFIG)
+			socket_id = port_numa[pid];
+			if (socket_id != NUMA_NO_CONFIG)
 				port_per_socket[port_numa[pid]]++;
 			else {
-				uint32_t socket_id = rte_eth_dev_socket_id(pid);
+				socket_id = rte_eth_dev_socket_id(pid);
 
 				/*
 				 * if socket_id is invalid,
@@ -1489,45 +1526,9 @@ init_config(void)
 					port_per_socket[socket_id]++;
 			}
 		}
-
-		/* Apply Rx offloads configuration */
-		for (k = 0; k < port->dev_info.max_rx_queues; k++)
-			port->rx_conf[k].offloads =
-				port->dev_conf.rxmode.offloads;
-		/* Apply Tx offloads configuration */
-		for (k = 0; k < port->dev_info.max_tx_queues; k++)
-			port->tx_conf[k].offloads =
-				port->dev_conf.txmode.offloads;
-
-		if (eth_link_speed)
-			port->dev_conf.link_speeds = eth_link_speed;
-
-		/* set flag to initialize port/queue */
-		port->need_reconfig = 1;
-		port->need_reconfig_queues = 1;
-		port->tx_metadata = 0;
-
-		/* Check for maximum number of segments per MTU. Accordingly
-		 * update the mbuf data size.
-		 */
-		if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
-		    port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
-			data_size = rx_mode.max_rx_pkt_len /
-				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
-			if ((data_size + RTE_PKTMBUF_HEADROOM) >
-			    mbuf_data_size[0]) {
-				mbuf_data_size[0] = data_size +
-					RTE_PKTMBUF_HEADROOM;
-				warning = 1;
-			}
-		}
+		/* Apply default TxRx configuration for all ports */
+		init_config_port_offloads(pid, socket_id);
 	}
-
-	if (warning)
-		TESTPMD_LOG(WARNING,
-			    "Configured mbuf size of the first segment %hu\n",
-			    mbuf_data_size[0]);
 
 	/*
 	 * Create pools of mbuf.
 	 * If NUMA support is disabled, create a single pool of mbuf in
@@ -1610,21 +1611,8 @@ init_config(void)
 void
 reconfig(portid_t new_port_id, unsigned socket_id)
 {
-	struct rte_port *port;
-	int ret;
-
 	/* Reconfiguration of Ethernet ports. */
-	port = &ports[new_port_id];
-
-	ret = eth_dev_info_get_print_err(new_port_id, &port->dev_info);
-	if (ret != 0)
-		return;
-
-	/* set flag to initialize port/queue */
-	port->need_reconfig = 1;
-	port->need_reconfig_queues = 1;
-	port->socket_id = socket_id;
-
+	init_config_port_offloads(new_port_id, socket_id);
 	init_port_config();
 }