From patchwork Fri Sep 17 08:01:18 2021
X-Patchwork-Submitter: "Xueming(Steven) Li"
X-Patchwork-Id: 99073
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
CC: Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:18 +0800
Message-ID: <20210917080121.329373-6-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 5/8] app/testpmd: force shared Rx queue polled on same core
Shared rxqs share one set of Rx queues in group 0. A shared Rx queue
must be polled from a single core. Check and stop forwarding if a
shared rxq is scheduled on multiple cores.

Signed-off-by: Xueming Li
---
 app/test-pmd/config.c  | 96 ++++++++++++++++++++++++++++++++++++++++++
 app/test-pmd/testpmd.c |  4 +-
 app/test-pmd/testpmd.h |  2 +
 3 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 8ec5f87ef3..035247c33f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2883,6 +2883,102 @@ port_rss_hash_key_update(portid_t port_id, char rss_type[], uint8_t *hash_key,
 	}
 }
 
+/*
+ * Check whether a shared rxq is scheduled on other lcores.
+ */
+static bool
+fwd_stream_on_other_lcores(uint16_t domain_id, portid_t src_port,
+			   queueid_t src_rxq, lcoreid_t src_lc,
+			   uint32_t shared_group)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	for (lc_id = src_lc + 1; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not a shared rxq. */
+				continue;
+			if (domain_id != port->dev_info.switch_info.domain_id)
+				continue;
+			if (fs->rx_queue != src_rxq)
+				continue;
+			if (rxq_conf->shared_group != shared_group)
+				continue;
+			printf("Shared RX queue group %u can't be scheduled on different cores:\n",
+			       shared_group);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       src_lc, src_port, src_rxq);
+			printf("  lcore %hhu Port %hu queue %hu\n",
+			       lc_id, fs->rx_port, fs->rx_queue);
+			printf("  please use --nb-cores=%hu to limit forwarding cores\n",
+			       nb_rxq);
+			return true;
+		}
+	}
+	return false;
+}
+
+/*
+ * Check the shared rxq configuration.
+ *
+ * A shared group must not be scheduled on different cores.
+ */
+bool
+pkt_fwd_shared_rxq_check(void)
+{
+	streamid_t sm_id;
+	streamid_t nb_fs_per_lcore;
+	lcoreid_t nb_fc;
+	lcoreid_t lc_id;
+	struct fwd_stream *fs;
+	uint16_t domain_id;
+	struct rte_port *port;
+	struct rte_eth_rxconf *rxq_conf;
+
+	nb_fc = cur_fwd_config.nb_fwd_lcores;
+	/*
+	 * Check streams on each core, make sure the same switch domain +
+	 * group + queue doesn't get scheduled on other cores.
+	 */
+	for (lc_id = 0; lc_id < nb_fc; lc_id++) {
+		sm_id = fwd_lcores[lc_id]->stream_idx;
+		nb_fs_per_lcore = fwd_lcores[lc_id]->stream_nb;
+		for (; sm_id < fwd_lcores[lc_id]->stream_idx + nb_fs_per_lcore;
+		     sm_id++) {
+			fs = fwd_streams[sm_id];
+			/* Update lcore info of the stream being scheduled. */
+			fs->lcore = fwd_lcores[lc_id];
+			port = &ports[fs->rx_port];
+			rxq_conf = &port->rx_conf[fs->rx_queue];
+			if ((rxq_conf->offloads & RTE_ETH_RX_OFFLOAD_SHARED_RXQ)
+			    == 0)
+				/* Not a shared rxq. */
+				continue;
+			/* Check shared rxq not scheduled on remaining cores. */
+			domain_id = port->dev_info.switch_info.domain_id;
+			if (fwd_stream_on_other_lcores(domain_id, fs->rx_port,
+						       fs->rx_queue, lc_id,
+						       rxq_conf->shared_group))
+				return false;
+		}
+	}
+	return true;
+}
+
 /*
  * Setup forwarding configuration for each logical core.
  */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 417e92ade1..cab4b36b04 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2241,10 +2241,12 @@ start_packet_forwarding(int with_tx_first)
 
 	fwd_config_setup();
 
+	pkt_fwd_config_display(&cur_fwd_config);
+	if (!pkt_fwd_shared_rxq_check())
+		return;
 	if (!no_flush_rx)
 		flush_fwd_rx_queues();
 
-	pkt_fwd_config_display(&cur_fwd_config);
 	rxtx_config_display();
 	fwd_stats_reset();
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 3dfaaad94c..f121a2da90 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -144,6 +144,7 @@ struct fwd_stream {
 	uint64_t core_cycles; /**< used for RX and TX processing */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
+	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
 };
 
 /**
@@ -795,6 +796,7 @@ void port_summary_header_display(void);
 void rx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void tx_queue_infos_display(portid_t port_idi, uint16_t queue_id);
 void fwd_lcores_config_display(void);
+bool pkt_fwd_shared_rxq_check(void);
 void pkt_fwd_config_display(struct fwd_config *cfg);
 void rxtx_config_display(void);
 void fwd_config_setup(void);