From patchwork Mon Mar 7 12:53:50 2022
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 108574
X-Patchwork-Delegate: thomas@monjalon.net
From: Dmitry Kozlyuk
CC: Matan Azrad, Xiaoyun Li, Aman Singh, Yuying Zhang
Subject: [PATCH v3 1/2] app/testpmd: do not poll stopped queues
Date: Mon, 7 Mar 2022 14:53:50 +0200
Message-ID: <20220307125351.697936-2-dkozlyuk@nvidia.com>
In-Reply-To: <20220307125351.697936-1-dkozlyuk@nvidia.com>
References: <20220306232310.613552-1-dkozlyuk@nvidia.com>
 <20220307125351.697936-1-dkozlyuk@nvidia.com>
X-BeenThere: dev@dpdk.org
Precedence:
list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Calling Rx/Tx functions on a stopped queue is not supported.
Do not run packet forwarding for streams that use stopped queues.

Each stream has a read-only "disabled" field,
so that the lcore function can skip such streams.
Forwarding engines can set this field using a new "stream_init"
callback function by checking the relevant queue states,
which are stored along with the queue configurations
(not all PMDs implement rte_eth_rx/tx_queue_info_get()
to query the state from there).

Fixes: 5f4ec54f1d16 ("testpmd: queue start and stop")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 app/test-pmd/5tswap.c         | 13 ++++++
 app/test-pmd/cmdline.c        | 45 ++++++++++--------
 app/test-pmd/config.c         |  8 ++--
 app/test-pmd/csumonly.c       | 13 ++++++
 app/test-pmd/flowgen.c        | 13 ++++++
 app/test-pmd/icmpecho.c       | 13 ++++++
 app/test-pmd/ieee1588fwd.c    | 13 ++++++
 app/test-pmd/iofwd.c          | 13 ++++++
 app/test-pmd/macfwd.c         | 13 ++++++
 app/test-pmd/macswap.c        | 13 ++++++
 app/test-pmd/noisy_vnf.c      | 13 ++++++
 app/test-pmd/rxonly.c         |  8 ++++
 app/test-pmd/shared_rxq_fwd.c |  8 ++++
 app/test-pmd/testpmd.c        | 87 ++++++++++++++++++++++-------------
 app/test-pmd/testpmd.h        | 19 +++++++-
 app/test-pmd/txonly.c         |  8 ++++
 16 files changed, 244 insertions(+), 56 deletions(-)

diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index 629d3e0d31..f041a5e1d5 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -185,9 +185,22 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+stream_init_5tuple_swap(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine
five_tuple_swap_fwd_engine = {
	.fwd_mode_name = "5tswap",
	.port_fwd_begin = NULL,
	.port_fwd_end = NULL,
+	.stream_init = stream_init_5tuple_swap,
 	.packet_fwd = pkt_burst_5tuple_swap,
 };

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 7ab0575e64..2ca935f086 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2658,8 +2658,10 @@ cmd_config_rxtx_queue_parsed(void *parsed_result,
 			__rte_unused void *data)
 {
 	struct cmd_config_rxtx_queue *res = parsed_result;
+	struct rte_port *port;
 	uint8_t isrx;
 	uint8_t isstart;
+	uint8_t *state;
 	int ret = 0;

 	if (test_done == 0) {
@@ -2707,8 +2709,15 @@
 	else
 		ret = rte_eth_dev_tx_queue_stop(res->portid, res->qid);

-	if (ret == -ENOTSUP)
+	if (ret == -ENOTSUP) {
 		fprintf(stderr, "Function not supported in PMD\n");
+		return;
+	}
+
+	port = &ports[res->portid];
+	state = isrx ? &port->rxq[res->qid].state : &port->txq[res->qid].state;
+	*state = isstart ? RTE_ETH_QUEUE_STATE_STARTED :
+			RTE_ETH_QUEUE_STATE_STOPPED;
 }

 cmdline_parse_token_string_t cmd_config_rxtx_queue_port =
@@ -2777,11 +2786,11 @@ cmd_config_deferred_start_rxtx_queue_parsed(void *parsed_result,

 	ison = !strcmp(res->state, "on");

-	if (isrx && port->rx_conf[res->qid].rx_deferred_start != ison) {
-		port->rx_conf[res->qid].rx_deferred_start = ison;
+	if (isrx && port->rxq[res->qid].conf.rx_deferred_start != ison) {
+		port->rxq[res->qid].conf.rx_deferred_start = ison;
 		needreconfig = 1;
-	} else if (!isrx && port->tx_conf[res->qid].tx_deferred_start != ison) {
-		port->tx_conf[res->qid].tx_deferred_start = ison;
+	} else if (!isrx && port->txq[res->qid].conf.tx_deferred_start != ison) {
+		port->txq[res->qid].conf.tx_deferred_start = ison;
 		needreconfig = 1;
 	}

@@ -2899,7 +2908,7 @@ cmd_setup_rxtx_queue_parsed(
 				res->qid,
 				port->nb_rx_desc[res->qid],
 				socket_id,
-				&port->rx_conf[res->qid],
+				&port->rxq[res->qid].conf,
 				mp);
 		if (ret)
 			fprintf(stderr, "Failed to setup RX queue\n");
@@ -2917,7 +2926,7 @@
cmd_setup_rxtx_queue_parsed(
 				res->qid,
 				port->nb_tx_desc[res->qid],
 				socket_id,
-				&port->tx_conf[res->qid]);
+				&port->txq[res->qid].conf);
 		if (ret)
 			fprintf(stderr, "Failed to setup TX queue\n");
 	}
@@ -4693,7 +4702,7 @@ cmd_config_queue_tx_offloads(struct rte_port *port)

 	/* Apply queue tx offloads configuration */
 	for (k = 0; k < port->dev_info.max_tx_queues; k++)
-		port->tx_conf[k].offloads =
+		port->txq[k].conf.offloads =
 			port->dev_conf.txmode.offloads;
 }

@@ -16204,7 +16213,7 @@ cmd_rx_offload_get_configuration_parsed(
 	nb_rx_queues = dev_info.nb_rx_queues;
 	for (q = 0; q < nb_rx_queues; q++) {
-		queue_offloads = port->rx_conf[q].offloads;
+		queue_offloads = port->rxq[q].conf.offloads;
 		printf("  Queue[%2d] :", q);
 		print_rx_offloads(queue_offloads);
 		printf("\n");
@@ -16324,11 +16333,11 @@ cmd_config_per_port_rx_offload_parsed(void *parsed_result,
 	if (!strcmp(res->on_off, "on")) {
 		port->dev_conf.rxmode.offloads |= single_offload;
 		for (q = 0; q < nb_rx_queues; q++)
-			port->rx_conf[q].offloads |= single_offload;
+			port->rxq[q].conf.offloads |= single_offload;
 	} else {
 		port->dev_conf.rxmode.offloads &= ~single_offload;
 		for (q = 0; q < nb_rx_queues; q++)
-			port->rx_conf[q].offloads &= ~single_offload;
+			port->rxq[q].conf.offloads &= ~single_offload;
 	}

 	cmd_reconfig_device_queue(port_id, 1, 1);
@@ -16434,9 +16443,9 @@ cmd_config_per_queue_rx_offload_parsed(void *parsed_result,
 	}

 	if (!strcmp(res->on_off, "on"))
-		port->rx_conf[queue_id].offloads |= single_offload;
+		port->rxq[queue_id].conf.offloads |= single_offload;
 	else
-		port->rx_conf[queue_id].offloads &= ~single_offload;
+		port->rxq[queue_id].conf.offloads &= ~single_offload;

 	cmd_reconfig_device_queue(port_id, 1, 1);
 }
@@ -16623,7 +16632,7 @@ cmd_tx_offload_get_configuration_parsed(
 	nb_tx_queues = dev_info.nb_tx_queues;
 	for (q = 0; q < nb_tx_queues; q++) {
-		queue_offloads = port->tx_conf[q].offloads;
+		queue_offloads = port->txq[q].conf.offloads;
 		printf("  Queue[%2d] :", q);
 		print_tx_offloads(queue_offloads);
 		printf("\n");
@@ -16747,11 +16756,11 @@ cmd_config_per_port_tx_offload_parsed(void *parsed_result,
 	if (!strcmp(res->on_off, "on")) {
 		port->dev_conf.txmode.offloads |= single_offload;
 		for (q = 0; q < nb_tx_queues; q++)
-			port->tx_conf[q].offloads |= single_offload;
+			port->txq[q].conf.offloads |= single_offload;
 	} else {
 		port->dev_conf.txmode.offloads &= ~single_offload;
 		for (q = 0; q < nb_tx_queues; q++)
-			port->tx_conf[q].offloads &= ~single_offload;
+			port->txq[q].conf.offloads &= ~single_offload;
 	}

 	cmd_reconfig_device_queue(port_id, 1, 1);
@@ -16860,9 +16869,9 @@ cmd_config_per_queue_tx_offload_parsed(void *parsed_result,
 	}

 	if (!strcmp(res->on_off, "on"))
-		port->tx_conf[queue_id].offloads |= single_offload;
+		port->txq[queue_id].conf.offloads |= single_offload;
 	else
-		port->tx_conf[queue_id].offloads &= ~single_offload;
+		port->txq[queue_id].conf.offloads &= ~single_offload;

 	cmd_reconfig_device_queue(port_id, 1, 1);
 }
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cc8e7aa138..c4ab3f8cae 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -3551,8 +3551,8 @@ rxtx_config_display(void)
 		nb_fwd_lcores, nb_fwd_ports);

 	RTE_ETH_FOREACH_DEV(pid) {
-		struct rte_eth_rxconf *rx_conf = &ports[pid].rx_conf[0];
-		struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf[0];
+		struct rte_eth_rxconf *rx_conf = &ports[pid].rxq[0].conf;
+		struct rte_eth_txconf *tx_conf = &ports[pid].txq[0].conf;
 		uint16_t *nb_rx_desc = &ports[pid].nb_rx_desc[0];
 		uint16_t *nb_tx_desc = &ports[pid].nb_tx_desc[0];
 		struct rte_eth_rxq_info rx_qinfo;
@@ -3810,7 +3810,7 @@ fwd_stream_on_other_lcores(uint16_t domain_id, lcoreid_t src_lc,
 		fs = fwd_streams[sm_id];
 		port = &ports[fs->rx_port];
 		dev_info = &port->dev_info;
-		rxq_conf = &port->rx_conf[fs->rx_queue];
+		rxq_conf = &port->rxq[fs->rx_queue].conf;
 		if ((dev_info->dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0 ||
 		    rxq_conf->share_group == 0)
 			/* Not shared rxq.
*/
@@ -3870,7 +3870,7 @@ pkt_fwd_shared_rxq_check(void)
 			fs->lcore = fwd_lcores[lc_id];
 			port = &ports[fs->rx_port];
 			dev_info = &port->dev_info;
-			rxq_conf = &port->rx_conf[fs->rx_queue];
+			rxq_conf = &port->rxq[fs->rx_queue].conf;
 			if ((dev_info->dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0 ||
 			    rxq_conf->share_group == 0)
 				/* Not shared rxq. */
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 5274d498ee..eb58ca1906 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -1178,9 +1178,22 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+stream_init_checksum_forward(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine csum_fwd_engine = {
 	.fwd_mode_name = "csum",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_checksum_forward,
 	.packet_fwd = pkt_burst_checksum_forward,
 };
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 9ceef3b54a..1e01120ae9 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -207,9 +207,22 @@ flowgen_begin(portid_t pi)
 	return 0;
 }

+static void
+flowgen_stream_init(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine flow_gen_engine = {
 	.fwd_mode_name = "flowgen",
 	.port_fwd_begin = flowgen_begin,
 	.port_fwd_end = NULL,
+	.stream_init = flowgen_stream_init,
 	.packet_fwd = pkt_burst_flow_gen,
 };
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 99c94cb282..066f2a3ab7 100644
---
a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -512,9 +512,22 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+icmpecho_stream_init(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine icmp_echo_engine = {
 	.fwd_mode_name = "icmpecho",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = icmpecho_stream_init,
 	.packet_fwd = reply_to_icmp_echo_rqsts,
 };
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 9ff817aa68..fc4e2d014c 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -211,9 +211,22 @@ port_ieee1588_fwd_end(portid_t pi)
 	rte_eth_timesync_disable(pi);
 }

+static void
+port_ieee1588_stream_init(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine ieee1588_fwd_engine = {
 	.fwd_mode_name = "ieee1588",
 	.port_fwd_begin = port_ieee1588_fwd_begin,
 	.port_fwd_end = port_ieee1588_fwd_end,
+	.stream_init = port_ieee1588_stream_init,
 	.packet_fwd = ieee1588_packet_fwd,
 };
diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c
index 19cd920f70..71849aaf96 100644
--- a/app/test-pmd/iofwd.c
+++ b/app/test-pmd/iofwd.c
@@ -88,9 +88,22 @@ pkt_burst_io_forward(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+stream_init_forward(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine io_fwd_engine = {
 	.fwd_mode_name = "io",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_forward,
 	.packet_fwd = pkt_burst_io_forward,
 };
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 812a0c721f..79c9241d00 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -119,9 +119,22 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+stream_init_mac_forward(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine mac_fwd_engine = {
 	.fwd_mode_name = "mac",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_mac_forward,
 	.packet_fwd = pkt_burst_mac_forward,
 };
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 4627ff83e9..acb0fd7fb4 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -97,9 +97,22 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+stream_init_mac_swap(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine mac_swap_engine = {
 	.fwd_mode_name = "macswap",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_mac_swap,
 	.packet_fwd = pkt_burst_mac_swap,
 };
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index e4434bea95..a92e810190 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -277,9 +277,22 @@ noisy_fwd_begin(portid_t
pi)
 	return 0;
 }

+static void
+stream_init_noisy_vnf(struct fwd_stream *fs)
+{
+	bool rx_stopped, tx_stopped;
+
+	rx_stopped = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	tx_stopped = ports[fs->tx_port].txq[fs->tx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	fs->disabled = rx_stopped || tx_stopped;
+}
+
 struct fwd_engine noisy_vnf_engine = {
 	.fwd_mode_name = "noisy",
 	.port_fwd_begin = noisy_fwd_begin,
 	.port_fwd_end = noisy_fwd_end,
+	.stream_init = stream_init_noisy_vnf,
 	.packet_fwd = pkt_burst_noisy_vnf,
 };
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index d1a579d8d8..04457010f4 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -68,9 +68,17 @@ pkt_burst_receive(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+stream_init_receive(struct fwd_stream *fs)
+{
+	fs->disabled = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+}
+
 struct fwd_engine rx_only_engine = {
 	.fwd_mode_name = "rxonly",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = stream_init_receive,
 	.packet_fwd = pkt_burst_receive,
 };
diff --git a/app/test-pmd/shared_rxq_fwd.c b/app/test-pmd/shared_rxq_fwd.c
index da54a383fd..2e9047804b 100644
--- a/app/test-pmd/shared_rxq_fwd.c
+++ b/app/test-pmd/shared_rxq_fwd.c
@@ -107,9 +107,17 @@ shared_rxq_fwd(struct fwd_stream *fs)
 	get_end_cycles(fs, start_tsc);
 }

+static void
+shared_rxq_stream_init(struct fwd_stream *fs)
+{
+	fs->disabled = ports[fs->rx_port].rxq[fs->rx_queue].state ==
+			RTE_ETH_QUEUE_STATE_STOPPED;
+}
+
 struct fwd_engine shared_rxq_engine = {
 	.fwd_mode_name = "shared_rxq",
 	.port_fwd_begin = NULL,
 	.port_fwd_end = NULL,
+	.stream_init = shared_rxq_stream_init,
 	.packet_fwd = shared_rxq_fwd,
 };
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index fe2ce19f99..52175a6cd2 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1573,10 +1573,10 @@ init_config_port_offloads(portid_t pid, uint32_t
socket_id)
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
-		port->rx_conf[i].offloads = port->dev_conf.rxmode.offloads;
+		port->rxq[i].conf.offloads = port->dev_conf.rxmode.offloads;
 	/* Apply Tx offloads configuration */
 	for (i = 0; i < port->dev_info.max_tx_queues; i++)
-		port->tx_conf[i].offloads = port->dev_conf.txmode.offloads;
+		port->txq[i].conf.offloads = port->dev_conf.txmode.offloads;

 	if (eth_link_speed)
 		port->dev_conf.link_speeds = eth_link_speed;
@@ -1763,7 +1763,6 @@ reconfig(portid_t new_port_id, unsigned socket_id)
 	init_port_config();
 }

-
 int
 init_fwd_streams(void)
 {
@@ -2156,6 +2155,12 @@ flush_fwd_rx_queues(void)
 	for (rxp = 0; rxp < cur_fwd_config.nb_fwd_ports; rxp++) {
 		for (rxq = 0; rxq < nb_rxq; rxq++) {
 			port_id = fwd_ports_ids[rxp];
+
+			/* Polling stopped queues is prohibited. */
+			if (ports[port_id].rxq[rxq].state ==
+			    RTE_ETH_QUEUE_STATE_STOPPED)
+				continue;
+
 			/**
 			 * testpmd can stuck in the below do while loop
 			 * if rte_eth_rx_burst() always returns nonzero
@@ -2201,7 +2206,8 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 	nb_fs = fc->stream_nb;
 	do {
 		for (sm_id = 0; sm_id < nb_fs; sm_id++)
-			(*pkt_fwd)(fsm[sm_id]);
+			if (!fsm[sm_id]->disabled)
+				(*pkt_fwd)(fsm[sm_id]);
 #ifdef RTE_LIB_BITRATESTATS
 		if (bitrate_enabled != 0 &&
 				bitrate_lcore_id == rte_lcore_id()) {
@@ -2283,6 +2289,7 @@ start_packet_forwarding(int with_tx_first)
 {
 	port_fwd_begin_t port_fwd_begin;
 	port_fwd_end_t port_fwd_end;
+	stream_init_t stream_init = cur_fwd_eng->stream_init;
 	unsigned int i;

 	if (strcmp(cur_fwd_eng->fwd_mode_name, "rxonly") == 0 && !nb_rxq)
@@ -2313,6 +2320,10 @@ start_packet_forwarding(int with_tx_first)
 	if (!pkt_fwd_shared_rxq_check())
 		return;

+	if (stream_init != NULL)
+		for (i = 0; i < cur_fwd_config.nb_fwd_streams; i++)
+			stream_init(fwd_streams[i]);
+
 	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
 	if (port_fwd_begin != NULL) {
 		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
@@
-2574,7 +2585,7 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
 					nb_rx_desc, socket_id,
 					rx_conf, mp);
-		return ret;
+		goto exit;
 	}
 	for (i = 0; i < rx_pkt_nb_segs; i++) {
 		struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
@@ -2599,6 +2610,10 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 				socket_id, rx_conf, NULL);
 	rx_conf->rx_seg = NULL;
 	rx_conf->rx_nseg = 0;
+exit:
+	ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
+			RTE_ETH_QUEUE_STATE_STOPPED :
+			RTE_ETH_QUEUE_STATE_STARTED;
 	return ret;
 }

@@ -2801,7 +2816,7 @@ start_port(portid_t pid)
 				for (k = 0; k < port->dev_info.max_rx_queues; k++)
-					port->rx_conf[k].offloads |=
+					port->rxq[k].conf.offloads |=
 						dev_conf.rxmode.offloads;
 			}
 			/* Apply Tx offloads configuration */
@@ -2812,7 +2827,7 @@ start_port(portid_t pid)
 				for (k = 0; k < port->dev_info.max_tx_queues; k++)
-					port->tx_conf[k].offloads |=
+					port->txq[k].conf.offloads |=
 						dev_conf.txmode.offloads;
 			}
 		}
@@ -2820,20 +2835,28 @@ start_port(portid_t pid)
 			port->need_reconfig_queues = 0;
 			/* setup tx queues */
 			for (qi = 0; qi < nb_txq; qi++) {
+				struct rte_eth_txconf *conf =
+						&port->txq[qi].conf;
+
 				if ((numa_support) &&
 					(txring_numa[pi] != NUMA_NO_CONFIG))
 					diag = rte_eth_tx_queue_setup(pi, qi,
 						port->nb_tx_desc[qi],
 						txring_numa[pi],
-						&(port->tx_conf[qi]));
+						&(port->txq[qi].conf));
 				else
 					diag = rte_eth_tx_queue_setup(pi, qi,
 						port->nb_tx_desc[qi],
 						port->socket_id,
-						&(port->tx_conf[qi]));
+						&(port->txq[qi].conf));

-				if (diag == 0)
+				if (diag == 0) {
+					port->txq[qi].state =
+						conf->tx_deferred_start ?
+						RTE_ETH_QUEUE_STATE_STOPPED :
+						RTE_ETH_QUEUE_STATE_STARTED;
 					continue;
+				}

 				/* Fail to setup tx queue, return */
 				if (port->port_status == RTE_PORT_HANDLING)
@@ -2866,7 +2889,7 @@ start_port(portid_t pid)
 					diag = rx_queue_setup(pi, qi,
 						port->nb_rx_desc[qi],
 						rxring_numa[pi],
-						&(port->rx_conf[qi]),
+						&(port->rxq[qi].conf),
 						mp);
 				} else {
 					struct rte_mempool *mp =
@@ -2881,7 +2904,7 @@ start_port(portid_t pid)
 					diag = rx_queue_setup(pi, qi,
 						port->nb_rx_desc[qi],
 						port->socket_id,
-						&(port->rx_conf[qi]),
+						&(port->rxq[qi].conf),
 						mp);
 				}
 				if (diag == 0)
@@ -3656,59 +3679,59 @@ rxtx_port_config(portid_t pid)
 	struct rte_port *port = &ports[pid];

 	for (qid = 0; qid < nb_rxq; qid++) {
-		offloads = port->rx_conf[qid].offloads;
-		port->rx_conf[qid] = port->dev_info.default_rxconf;
+		offloads = port->rxq[qid].conf.offloads;
+		port->rxq[qid].conf = port->dev_info.default_rxconf;

 		if (rxq_share > 0 &&
 		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
 			/* Non-zero share group to enable RxQ share. */
-			port->rx_conf[qid].share_group = pid / rxq_share + 1;
-			port->rx_conf[qid].share_qid = qid; /* Equal mapping.
+			port->rxq[qid].conf.share_group = pid / rxq_share + 1;
+			port->rxq[qid].conf.share_qid = qid; /* Equal mapping.
*/
 		}

 		if (offloads != 0)
-			port->rx_conf[qid].offloads = offloads;
+			port->rxq[qid].conf.offloads = offloads;

 		/* Check if any Rx parameters have been passed */
 		if (rx_pthresh != RTE_PMD_PARAM_UNSET)
-			port->rx_conf[qid].rx_thresh.pthresh = rx_pthresh;
+			port->rxq[qid].conf.rx_thresh.pthresh = rx_pthresh;

 		if (rx_hthresh != RTE_PMD_PARAM_UNSET)
-			port->rx_conf[qid].rx_thresh.hthresh = rx_hthresh;
+			port->rxq[qid].conf.rx_thresh.hthresh = rx_hthresh;

 		if (rx_wthresh != RTE_PMD_PARAM_UNSET)
-			port->rx_conf[qid].rx_thresh.wthresh = rx_wthresh;
+			port->rxq[qid].conf.rx_thresh.wthresh = rx_wthresh;

 		if (rx_free_thresh != RTE_PMD_PARAM_UNSET)
-			port->rx_conf[qid].rx_free_thresh = rx_free_thresh;
+			port->rxq[qid].conf.rx_free_thresh = rx_free_thresh;

 		if (rx_drop_en != RTE_PMD_PARAM_UNSET)
-			port->rx_conf[qid].rx_drop_en = rx_drop_en;
+			port->rxq[qid].conf.rx_drop_en = rx_drop_en;

 		port->nb_rx_desc[qid] = nb_rxd;
 	}

 	for (qid = 0; qid < nb_txq; qid++) {
-		offloads = port->tx_conf[qid].offloads;
-		port->tx_conf[qid] = port->dev_info.default_txconf;
+		offloads = port->txq[qid].conf.offloads;
+		port->txq[qid].conf = port->dev_info.default_txconf;
 		if (offloads != 0)
-			port->tx_conf[qid].offloads = offloads;
+			port->txq[qid].conf.offloads = offloads;

 		/* Check if any Tx parameters have been passed */
 		if (tx_pthresh != RTE_PMD_PARAM_UNSET)
-			port->tx_conf[qid].tx_thresh.pthresh = tx_pthresh;
+			port->txq[qid].conf.tx_thresh.pthresh = tx_pthresh;

 		if (tx_hthresh != RTE_PMD_PARAM_UNSET)
-			port->tx_conf[qid].tx_thresh.hthresh = tx_hthresh;
+			port->txq[qid].conf.tx_thresh.hthresh = tx_hthresh;

 		if (tx_wthresh != RTE_PMD_PARAM_UNSET)
-			port->tx_conf[qid].tx_thresh.wthresh = tx_wthresh;
+			port->txq[qid].conf.tx_thresh.wthresh = tx_wthresh;

 		if (tx_rs_thresh != RTE_PMD_PARAM_UNSET)
-			port->tx_conf[qid].tx_rs_thresh = tx_rs_thresh;
+			port->txq[qid].conf.tx_rs_thresh = tx_rs_thresh;

 		if (tx_free_thresh != RTE_PMD_PARAM_UNSET)
-			port->tx_conf[qid].tx_free_thresh = tx_free_thresh;
+
			port->txq[qid].conf.tx_free_thresh = tx_free_thresh;

 		port->nb_tx_desc[qid] = nb_txd;
 	}
@@ -3789,7 +3812,7 @@ init_port_config(void)
 			for (i = 0; i < port->dev_info.nb_rx_queues; i++)
-				port->rx_conf[i].offloads &=
+				port->rxq[i].conf.offloads &=
 					~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		}
 	}
@@ -3963,7 +3986,7 @@ init_port_dcb_config(portid_t pid,
 	if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 		port_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		for (i = 0; i < nb_rxq; i++)
-			rte_port->rx_conf[i].offloads &=
+			rte_port->rxq[i].conf.offloads &=
 				~RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	}

diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 31f766c965..1eda7b97ab 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -134,6 +134,7 @@ struct fwd_stream {
 	portid_t   tx_port;   /**< forwarding port of received packets */
 	queueid_t  tx_queue;  /**< TX queue to send forwarded packets */
 	streamid_t peer_addr; /**< index of peer ethernet address of packets */
+	bool       disabled;  /**< the stream is disabled and should not run */
 	unsigned int retry_enabled;
@@ -238,6 +239,18 @@ struct xstat_display_info {
 	bool allocated;
 };

+/** RX queue configuration and state. */
+struct port_rxqueue {
+	struct rte_eth_rxconf conf;
+	uint8_t state; /**< RTE_ETH_QUEUE_STATE_* value. */
+};
+
+/** TX queue configuration and state. */
+struct port_txqueue {
+	struct rte_eth_txconf conf;
+	uint8_t state; /**< RTE_ETH_QUEUE_STATE_* value. */
+};
+
 /**
  * The data structure associated with each port.
*/
@@ -260,8 +273,8 @@ struct rte_port {
 	uint8_t  dcb_flag;   /**< enable dcb */
 	uint16_t nb_rx_desc[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue rx desc number */
 	uint16_t nb_tx_desc[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx desc number */
-	struct rte_eth_rxconf rx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue rx configuration */
-	struct rte_eth_txconf tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
+	struct port_rxqueue rxq[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue rx configuration and state */
+	struct port_txqueue txq[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration and state */
 	struct rte_ether_addr *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
 	queueid_t queue_nb; /**< nb. of queues for flow rules */
@@ -323,12 +336,14 @@ struct fwd_lcore {
  */
 typedef int (*port_fwd_begin_t)(portid_t pi);
 typedef void (*port_fwd_end_t)(portid_t pi);
+typedef void (*stream_init_t)(struct fwd_stream *fs);
 typedef void (*packet_fwd_t)(struct fwd_stream *fs);

 struct fwd_engine {
 	const char       *fwd_mode_name; /**< Forwarding mode name. */
 	port_fwd_begin_t port_fwd_begin; /**< NULL if nothing special to do. */
 	port_fwd_end_t   port_fwd_end;   /**< NULL if nothing special to do. */
+	stream_init_t    stream_init;    /**< NULL if nothing special to do. */
 	packet_fwd_t     packet_fwd;     /**< Mandatory.
 */
};

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index fc039a622c..e1bc78b73d 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -504,9 +504,17 @@ tx_only_begin(portid_t pi)
 	return 0;
 }

+static void
+tx_only_stream_init(struct fwd_stream *fs)
+{
+	fs->disabled = ports[fs->tx_port].txq[fs->tx_queue].state ==
+		RTE_ETH_QUEUE_STATE_STOPPED;
+}
+
 struct fwd_engine tx_only_engine = {
 	.fwd_mode_name  = "txonly",
 	.port_fwd_begin = tx_only_begin,
 	.port_fwd_end   = NULL,
+	.stream_init    = tx_only_stream_init,
 	.packet_fwd     = pkt_burst_transmit,
 };

From patchwork Mon Mar 7 12:53:51 2022
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 108576
X-Patchwork-Delegate: thomas@monjalon.net
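The testpmd change above follows one pattern: at forwarding start, each stream latches the state of its queue into a single `disabled` flag, so the hot loop never reaches the PMD for a stopped queue. The sketch below isolates that pattern outside of testpmd; the names (`port_txqueue`, `stream_init`) mirror the patch, but the queue-state constants and the `txq_of` table are stand-ins invented here for illustration, not the real testpmd scaffolding.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-ins for testpmd types; real definitions live in
 * app/test-pmd/testpmd.h. */
#define QUEUE_STATE_STOPPED 0 /* mirrors RTE_ETH_QUEUE_STATE_STOPPED */
#define QUEUE_STATE_STARTED 1 /* mirrors RTE_ETH_QUEUE_STATE_STARTED */

struct port_txqueue {
	uint8_t state; /* QUEUE_STATE_* value */
};

struct fwd_stream {
	uint16_t tx_port;
	uint16_t tx_queue;
	bool disabled; /* set once at init, checked by the fwd loop */
};

/* [port][queue]; toy dimensions for the sketch. */
static struct port_txqueue txq_of[2][2];

/* Analogous to tx_only_stream_init(): latch the queue state into the
 * stream so the forwarding loop only tests one flag per burst. */
static void
stream_init(struct fwd_stream *fs)
{
	fs->disabled =
		txq_of[fs->tx_port][fs->tx_queue].state == QUEUE_STATE_STOPPED;
}

/* The forwarding loop skips disabled streams entirely instead of
 * calling into the PMD for a stopped queue. */
static int
run_stream(struct fwd_stream *fs)
{
	if (fs->disabled)
		return 0; /* nothing forwarded */
	return 1;         /* would call the engine's packet_fwd() here */
}
```

The point of latching at init rather than checking the live queue state per burst is exactly the cycle saving the cover letter argues for: the state can only change across a synchronized start/stop, so re-reading it on the data path buys nothing.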
From: Dmitry Kozlyuk
CC: Matan Azrad, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v3 2/2] ethdev: prohibit polling of a stopped queue
Date: Mon, 7 Mar 2022 14:53:51 +0200
Message-ID: <20220307125351.697936-3-dkozlyuk@nvidia.com>
In-Reply-To: <20220307125351.697936-1-dkozlyuk@nvidia.com>
References: <20220306232310.613552-1-dkozlyuk@nvidia.com>
 <20220307125351.697936-1-dkozlyuk@nvidia.com>
List-Id: DPDK patches and discussions

Whether it is allowed to call Rx/Tx functions for a stopped queue was
undocumented. Some PMDs make this a no-op, either by explicitly checking
the queue state or as a consequence of how their routines are implemented
or how the HW works.

No-op behavior may be convenient for application developers, but it also
means that pollers of stopped queues go all the way down to the PMD Rx/Tx
routines, wasting cycles. Some PMDs would check the queue state on the
data path even though a particular application may never need that check.
Furthermore, the use cases for stopping queues, or for starting them
deferred, do not logically require polling stopped queues:

Use case 1: a secondary process that was polling the queue has crashed,
and the primary process is doing a recovery to free all mbufs. By
definition, the queue to be restarted is not polled.

Use case 2: deferred queue start or queue reconfiguration. The polling
thread must be synchronized anyway, because queue start and stop are
non-atomic.

Prohibit calling Rx/Tx functions on stopped queues.

Fixes: 0748be2cf9a2 ("ethdev: queue start and stop")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 lib/ethdev/rte_ethdev.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c2d1f9a972..9f12a6043c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -74,7 +74,7 @@
  * rte_eth_rx_queue_setup()), it must call rte_eth_dev_stop() first to stop the
  * device and then do the reconfiguration before calling rte_eth_dev_start()
  * again. The transmit and receive functions should not be invoked when the
- * device is stopped.
+ * device is stopped or when the queue is stopped (for that queue).
  *
  * Please note that some configuration is not stored between calls to
  * rte_eth_dev_stop()/rte_eth_dev_start(). The following configuration will
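The commit message notes that in the deferred-start/reconfiguration use case "the polling thread must be synchronized anyway, because queue start and stop are non-atomic." One way an application can provide that synchronization is a per-queue gate: the poll loop checks the gate before touching the queue, and the control path closes the gate and waits for any in-flight burst to drain before stopping the queue. This is a minimal single-poller sketch, an application-side assumption rather than anything ethdev mandates; the actual DPDK calls (`rte_eth_rx_burst()`, `rte_eth_dev_rx_queue_stop()`) are left as comments so the gate logic stands alone.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Per-queue gate owned by the application: the poll loop checks it before
 * touching the queue, so the control path can guarantee no Rx burst is in
 * flight before it stops the queue. */
struct queue_gate {
	atomic_bool enabled; /* control -> poller: may this queue be polled? */
	atomic_bool in_poll; /* poller -> control: is a burst running now? */
};

static int
poll_queue(struct queue_gate *g)
{
	if (!atomic_load_explicit(&g->enabled, memory_order_acquire))
		return 0; /* queue stopped: never reach the PMD Rx routine */
	atomic_store_explicit(&g->in_poll, true, memory_order_release);
	/* ... rte_eth_rx_burst() would run here ... */
	atomic_store_explicit(&g->in_poll, false, memory_order_release);
	return 1;
}

static void
request_queue_stop(struct queue_gate *g)
{
	atomic_store_explicit(&g->enabled, false, memory_order_release);
	/* Wait for any in-flight burst to drain before stopping the queue. */
	while (atomic_load_explicit(&g->in_poll, memory_order_acquire))
		; /* spin; a real application might yield or sleep here */
	/* Now safe: rte_eth_dev_rx_queue_stop(port_id, queue_id); */
}
```

With such a gate in place, the stricter contract in this patch costs the application nothing: a poller that respects the gate can never call Rx/Tx on a stopped queue, and PMDs are free to drop their own data-path state checks.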