From patchwork Wed Feb 7 02:18:49 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Boyer
X-Patchwork-Id: 136456
From: Andrew Boyer
To: 
CC: Andrew Boyer
Subject: [PATCH v2 13/13] net/ionic: optimize device start operation
Date: Tue, 6 Feb 2024 18:18:49 -0800
Message-ID: <20240207021849.52988-14-andrew.boyer@amd.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240202193238.62669-1-andrew.boyer@amd.com>
References: <20240202193238.62669-1-andrew.boyer@amd.com>
List-Id: DPDK patches and discussions

Split the queue_start operation into first-half and second-half
helpers. This allows us to batch up the queue commands during
dev_start(), reducing the outage window when restarting the process
by about 1ms per queue.

Signed-off-by: Andrew Boyer
---
 drivers/net/ionic/ionic_lif.c  | 136 +++++++++++++++++++++++----------
 drivers/net/ionic/ionic_lif.h  |   6 +-
 drivers/net/ionic/ionic_rxtx.c |  81 ++++++++++++++++----
 drivers/net/ionic/ionic_rxtx.h |  10 +++
 4 files changed, 176 insertions(+), 57 deletions(-)
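A simplified sketch of the batching pattern may help before reading the diff.
The wrapper function below is hypothetical and its error handling is trimmed;
the real logic is the chunked loop in ionic_lif_start(), and the helpers it
calls are the ones added by this patch. The point is that the q_init admin
commands for a chunk of queues are posted first, and only then are their
completions reaped, so the firmware round trips overlap instead of being paid
one queue at a time.

/*
 * Illustrative sketch only (the wrapper name is hypothetical).
 * ionic_lif_start() in the diff also handles Tx queues, deferred
 * queues, and per-queue errors.
 */
static void
rx_queue_start_batched_sketch(struct rte_eth_dev *dev, struct ionic_lif *lif)
{
	uint32_t i, j, chunk;

	/* Post at most as many q_init commands as the adminq has room for */
	chunk = ionic_adminq_space_avail(lif);

	for (i = 0; i < lif->nrxqcqs; i += chunk) {
		/* First half: build and post q_init for each queue, no waiting */
		for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++)
			(void)ionic_dev_rx_queue_start_firsthalf(dev, i + j);

		/* Second half: wait for each completion and finish sw init */
		for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++)
			(void)ionic_dev_rx_queue_start_secondhalf(dev, i + j);
	}
}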
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 45317590fa..93a1011772 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1601,13 +1601,16 @@ ionic_lif_set_features(struct ionic_lif *lif)
 }
 
 int
-ionic_lif_txq_init(struct ionic_tx_qcq *txq)
+ionic_lif_txq_init_nowait(struct ionic_tx_qcq *txq)
 {
 	struct ionic_qcq *qcq = &txq->qcq;
 	struct ionic_queue *q = &qcq->q;
 	struct ionic_lif *lif = qcq->lif;
 	struct ionic_cq *cq = &qcq->cq;
-	struct ionic_admin_ctx ctx = {
+	struct ionic_admin_ctx *ctx = &txq->admin_ctx;
+	int err;
+
+	*ctx = (struct ionic_admin_ctx) {
 		.pending_work = true,
 		.cmd.q_init = {
 			.opcode = IONIC_CMD_Q_INIT,
@@ -1621,32 +1624,41 @@ ionic_lif_txq_init(struct ionic_tx_qcq *txq)
 			.sg_ring_base = rte_cpu_to_le_64(q->sg_base_pa),
 		},
 	};
-	int err;
 
 	if (txq->flags & IONIC_QCQ_F_SG)
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
 	if (txq->flags & IONIC_QCQ_F_CMB) {
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
 	} else {
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
 	}
 
 	IONIC_PRINT(DEBUG, "txq_init.index %d", q->index);
 	IONIC_PRINT(DEBUG, "txq_init.ring_base 0x%" PRIx64 "", q->base_pa);
 	IONIC_PRINT(DEBUG, "txq_init.ring_size %d",
-		ctx.cmd.q_init.ring_size);
-	IONIC_PRINT(DEBUG, "txq_init.ver %u", ctx.cmd.q_init.ver);
+		ctx->cmd.q_init.ring_size);
+	IONIC_PRINT(DEBUG, "txq_init.ver %u", ctx->cmd.q_init.ver);
 
 	ionic_q_reset(q);
 	ionic_cq_reset(cq);
 
-	err = ionic_adminq_post_wait(lif, &ctx);
+	/* Caller responsible for calling ionic_lif_txq_init_done() */
+	err = ionic_adminq_post(lif, ctx);
 	if (err)
-		return err;
+		ctx->pending_work = false;
+
+	return err;
+}
 
-	q->hw_type = ctx.comp.q_init.hw_type;
-	q->hw_index = rte_le_to_cpu_32(ctx.comp.q_init.hw_index);
+void
+ionic_lif_txq_init_done(struct ionic_tx_qcq *txq)
+{
+	struct ionic_lif *lif = txq->qcq.lif;
+	struct ionic_queue *q = &txq->qcq.q;
+	struct ionic_admin_ctx *ctx = &txq->admin_ctx;
+
+	q->hw_type = ctx->comp.q_init.hw_type;
+	q->hw_index = rte_le_to_cpu_32(ctx->comp.q_init.hw_index);
 	q->db = ionic_db_map(lif, q);
 
 	IONIC_PRINT(DEBUG, "txq->hw_type %d", q->hw_type);
@@ -1654,18 +1666,19 @@ ionic_lif_txq_init(struct ionic_tx_qcq *txq)
 	IONIC_PRINT(DEBUG, "txq->db %p", q->db);
 
 	txq->flags |= IONIC_QCQ_F_INITED;
-
-	return 0;
 }
 
 int
-ionic_lif_rxq_init(struct ionic_rx_qcq *rxq)
+ionic_lif_rxq_init_nowait(struct ionic_rx_qcq *rxq)
 {
 	struct ionic_qcq *qcq = &rxq->qcq;
 	struct ionic_queue *q = &qcq->q;
 	struct ionic_lif *lif = qcq->lif;
 	struct ionic_cq *cq = &qcq->cq;
-	struct ionic_admin_ctx ctx = {
+	struct ionic_admin_ctx *ctx = &rxq->admin_ctx;
+	int err;
+
+	*ctx = (struct ionic_admin_ctx) {
 		.pending_work = true,
 		.cmd.q_init = {
 			.opcode = IONIC_CMD_Q_INIT,
@@ -1679,32 +1692,41 @@
 			.sg_ring_base = rte_cpu_to_le_64(q->sg_base_pa),
 		},
 	};
-	int err;
 
 	if (rxq->flags & IONIC_QCQ_F_SG)
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_SG);
 	if (rxq->flags & IONIC_QCQ_F_CMB) {
-		ctx.cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
+		ctx->cmd.q_init.flags |= rte_cpu_to_le_16(IONIC_QINIT_F_CMB);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->cmb_base_pa);
 	} else {
-		ctx.cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
+		ctx->cmd.q_init.ring_base = rte_cpu_to_le_64(q->base_pa);
 	}
 
 	IONIC_PRINT(DEBUG, "rxq_init.index %d", q->index);
 	IONIC_PRINT(DEBUG, "rxq_init.ring_base 0x%" PRIx64 "", q->base_pa);
 	IONIC_PRINT(DEBUG, "rxq_init.ring_size %d",
-		ctx.cmd.q_init.ring_size);
-	IONIC_PRINT(DEBUG, "rxq_init.ver %u", ctx.cmd.q_init.ver);
+		ctx->cmd.q_init.ring_size);
+	IONIC_PRINT(DEBUG, "rxq_init.ver %u", ctx->cmd.q_init.ver);
 
 	ionic_q_reset(q);
 	ionic_cq_reset(cq);
 
-	err = ionic_adminq_post_wait(lif, &ctx);
+	/* Caller responsible for calling ionic_lif_rxq_init_done() */
+	err = ionic_adminq_post(lif, ctx);
 	if (err)
-		return err;
+		ctx->pending_work = false;
+
+	return err;
+}
 
-	q->hw_type = ctx.comp.q_init.hw_type;
-	q->hw_index = rte_le_to_cpu_32(ctx.comp.q_init.hw_index);
+void
+ionic_lif_rxq_init_done(struct ionic_rx_qcq *rxq)
+{
+	struct ionic_lif *lif = rxq->qcq.lif;
+	struct ionic_queue *q = &rxq->qcq.q;
+	struct ionic_admin_ctx *ctx = &rxq->admin_ctx;
+
+	q->hw_type = ctx->comp.q_init.hw_type;
+	q->hw_index = rte_le_to_cpu_32(ctx->comp.q_init.hw_index);
 	q->db = ionic_db_map(lif, q);
 
 	rxq->flags |= IONIC_QCQ_F_INITED;
@@ -1712,8 +1734,6 @@ ionic_lif_rxq_init(struct ionic_rx_qcq *rxq)
 	IONIC_PRINT(DEBUG, "rxq->hw_type %d", q->hw_type);
 	IONIC_PRINT(DEBUG, "rxq->hw_index %d", q->hw_index);
 	IONIC_PRINT(DEBUG, "rxq->db %p", q->db);
-
-	return 0;
 }
 
 static int
@@ -1962,9 +1982,11 @@ ionic_lif_configure(struct ionic_lif *lif)
 int
 ionic_lif_start(struct ionic_lif *lif)
 {
+	struct rte_eth_dev *dev = lif->eth_dev;
 	uint32_t rx_mode;
-	uint32_t i;
+	uint32_t i, j, chunk;
 	int err;
+	bool fatal = false;
 
 	err = ionic_lif_rss_setup(lif);
 	if (err)
@@ -1985,25 +2007,57 @@ ionic_lif_start(struct ionic_lif *lif)
 		"on port %u",
 		lif->nrxqcqs, lif->ntxqcqs, lif->port_id);
 
-	for (i = 0; i < lif->nrxqcqs; i++) {
-		struct ionic_rx_qcq *rxq = lif->rxqcqs[i];
-		if (!(rxq->flags & IONIC_QCQ_F_DEFERRED)) {
-			err = ionic_dev_rx_queue_start(lif->eth_dev, i);
+	chunk = ionic_adminq_space_avail(lif);
+
+	for (i = 0; i < lif->nrxqcqs; i += chunk) {
+		if (lif->rxqcqs[0]->flags & IONIC_QCQ_F_DEFERRED) {
+			IONIC_PRINT(DEBUG, "Rx queue start deferred");
+			break;
+		}
+
+		for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++) {
+			err = ionic_dev_rx_queue_start_firsthalf(dev, i + j);
+			if (err) {
+				fatal = true;
+				break;
+			}
+		}
+		for (j = 0; j < chunk && i + j < lif->nrxqcqs; j++) {
+			/* Commands that failed to post return immediately */
+			err = ionic_dev_rx_queue_start_secondhalf(dev, i + j);
 			if (err)
-				return err;
+				/* Don't break */
+				fatal = true;
 		}
 	}
+	if (fatal)
+		return -EIO;
 
-	for (i = 0; i < lif->ntxqcqs; i++) {
-		struct ionic_tx_qcq *txq = lif->txqcqs[i];
-		if (!(txq->flags & IONIC_QCQ_F_DEFERRED)) {
-			err = ionic_dev_tx_queue_start(lif->eth_dev, i);
+	for (i = 0; i < lif->ntxqcqs; i += chunk) {
+		if (lif->txqcqs[0]->flags & IONIC_QCQ_F_DEFERRED) {
+			IONIC_PRINT(DEBUG, "Tx queue start deferred");
+			break;
+		}
+
+		for (j = 0; j < chunk && i + j < lif->ntxqcqs; j++) {
+			err = ionic_dev_tx_queue_start_firsthalf(dev, i + j);
+			if (err) {
+				fatal = true;
+				break;
+			}
+		}
+		for (j = 0; j < chunk && i + j < lif->ntxqcqs; j++) {
+			/* Commands that failed to post return immediately */
+			err = ionic_dev_tx_queue_start_secondhalf(dev, i + j);
 			if (err)
-				return err;
+				/* Don't break */
+				fatal = true;
 		}
 	}
+	if (fatal)
+		return -EIO;
 
 	/* Carrier ON here */
 	lif->state |= IONIC_LIF_F_UP;
diff --git a/drivers/net/ionic/ionic_lif.h b/drivers/net/ionic/ionic_lif.h
index ee13f5b7c8..591cf1a2ff 100644
--- a/drivers/net/ionic/ionic_lif.h
+++ b/drivers/net/ionic/ionic_lif.h
@@ -228,11 +228,13 @@ int ionic_tx_qcq_alloc(struct ionic_lif *lif, uint32_t socket_id,
 		struct ionic_tx_qcq **qcq_out);
 void ionic_qcq_free(struct ionic_qcq *qcq);
 
-int ionic_lif_rxq_init(struct ionic_rx_qcq *rxq);
+int ionic_lif_rxq_init_nowait(struct ionic_rx_qcq *rxq);
+void ionic_lif_rxq_init_done(struct ionic_rx_qcq *rxq);
 void ionic_lif_rxq_deinit_nowait(struct ionic_rx_qcq *rxq);
 void ionic_lif_rxq_stats(struct ionic_rx_qcq *rxq);
 
-int ionic_lif_txq_init(struct ionic_tx_qcq *txq);
+int ionic_lif_txq_init_nowait(struct ionic_tx_qcq *txq);
+void ionic_lif_txq_init_done(struct ionic_tx_qcq *txq);
 void ionic_lif_txq_deinit_nowait(struct ionic_tx_qcq *txq);
 void ionic_lif_txq_stats(struct ionic_tx_qcq *txq);
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 774dc596c0..ad04e987eb 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -203,27 +203,54 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
  * Start Transmit Units for specified queue.
  */
 int __rte_cold
-ionic_dev_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
+ionic_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 {
-	uint8_t *tx_queue_state = eth_dev->data->tx_queue_state;
-	struct ionic_tx_qcq *txq;
 	int err;
 
+	err = ionic_dev_tx_queue_start_firsthalf(dev, tx_queue_id);
+	if (err)
+		return err;
+
+	return ionic_dev_tx_queue_start_secondhalf(dev, tx_queue_id);
+}
+
+int __rte_cold
+ionic_dev_tx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id)
+{
+	uint8_t *tx_queue_state = dev->data->tx_queue_state;
+	struct ionic_tx_qcq *txq = dev->data->tx_queues[tx_queue_id];
+
 	if (tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED) {
 		IONIC_PRINT(DEBUG, "TX queue %u already started",
 			tx_queue_id);
 		return 0;
 	}
 
-	txq = eth_dev->data->tx_queues[tx_queue_id];
-
 	IONIC_PRINT(DEBUG, "Starting TX queue %u, %u descs",
 		tx_queue_id, txq->qcq.q.num_descs);
 
-	err = ionic_lif_txq_init(txq);
+	return ionic_lif_txq_init_nowait(txq);
+}
+
+int __rte_cold
+ionic_dev_tx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id)
+{
+	uint8_t *tx_queue_state = dev->data->tx_queue_state;
+	struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(dev);
+	struct ionic_tx_qcq *txq = dev->data->tx_queues[tx_queue_id];
+	int err;
+
+	if (tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	err = ionic_adminq_wait(lif, &txq->admin_ctx);
 	if (err)
 		return err;
 
+	ionic_lif_txq_init_done(txq);
+
 	tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
@@ -680,22 +707,31 @@ ionic_rx_init_descriptors(struct ionic_rx_qcq *rxq)
  * Start Receive Units for specified queue.
  */
 int __rte_cold
-ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
+ionic_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
-	struct ionic_rx_qcq *rxq;
-	struct ionic_queue *q;
 	int err;
 
+	err = ionic_dev_rx_queue_start_firsthalf(dev, rx_queue_id);
+	if (err)
+		return err;
+
+	return ionic_dev_rx_queue_start_secondhalf(dev, rx_queue_id);
+}
+
+int __rte_cold
+ionic_dev_rx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id)
+{
+	uint8_t *rx_queue_state = dev->data->rx_queue_state;
+	struct ionic_rx_qcq *rxq = dev->data->rx_queues[rx_queue_id];
+	struct ionic_queue *q = &rxq->qcq.q;
+
 	if (rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED) {
 		IONIC_PRINT(DEBUG, "RX queue %u already started",
 			rx_queue_id);
 		return 0;
 	}
 
-	rxq = eth_dev->data->rx_queues[rx_queue_id];
-	q = &rxq->qcq.q;
-
 	rxq->frame_size = rxq->qcq.lif->frame_size - RTE_ETHER_CRC_LEN;
 
 	/* Recalculate segment count based on MTU */
@@ -707,10 +743,27 @@ ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 
 	ionic_rx_init_descriptors(rxq);
 
-	err = ionic_lif_rxq_init(rxq);
+	return ionic_lif_rxq_init_nowait(rxq);
+}
+
+int __rte_cold
+ionic_dev_rx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id)
+{
+	uint8_t *rx_queue_state = dev->data->rx_queue_state;
+	struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(dev);
+	struct ionic_rx_qcq *rxq = dev->data->rx_queues[rx_queue_id];
+	int err;
+
+	if (rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	err = ionic_adminq_wait(lif, &rxq->admin_ctx);
 	if (err)
 		return err;
 
+	ionic_lif_rxq_init_done(rxq);
+
 	/* Allocate buffers for descriptor ring */
 	if (rxq->flags & IONIC_QCQ_F_SG)
 		err = ionic_rx_fill_sg(rxq);
diff --git a/drivers/net/ionic/ionic_rxtx.h b/drivers/net/ionic/ionic_rxtx.h
index 7ca23178cc..a342afec54 100644
--- a/drivers/net/ionic/ionic_rxtx.h
+++ b/drivers/net/ionic/ionic_rxtx.h
@@ -46,6 +46,16 @@ void ionic_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 int ionic_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int ionic_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
+/* Helpers for optimized dev_start() */
+int ionic_dev_rx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id);
+int ionic_dev_rx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id);
+int ionic_dev_tx_queue_start_firsthalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id);
+int ionic_dev_tx_queue_start_secondhalf(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id);
+
 /* Helpers for optimized dev_stop() */
 void ionic_dev_rx_queue_stop_firsthalf(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
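
A note on the error handling in the new ionic_lif_start() loops: a command
that failed to post never reached the firmware, so its second-half wait
returns immediately, while a command that did post must still be reaped even
if another queue already failed; that is why the second-half loops only
record the fatal flag instead of breaking out early. As a rough, illustrative
estimate of the benefit: at about 1ms of admin-queue round trip per queue, a
port with, say, 16 Rx and 16 Tx queues spends roughly 32ms of its restart
outage starting queues one at a time, whereas posting a chunk of commands and
then waiting lets most of those round trips overlap.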