From patchwork Thu Sep 8 17:12:40 2022
X-Patchwork-Submitter: "Naga Harish K, S V"
X-Patchwork-Id: 116098
X-Patchwork-Delegate: jerinj@marvell.com
From: Naga Harish K S V
To: jay.jayatheerthan@intel.com, jerinj@marvell.com
Cc: dev@dpdk.org
Subject: [PATCH 1/3] eventdev/eth_tx: add queue start stop API
Date: Thu, 8 Sep 2022 12:12:40 -0500
Message-Id: <20220908171242.3804375-1-s.v.naga.harish.k@intel.com>
X-Mailer: git-send-email 2.23.0
List-Id: DPDK patches and discussions

This patch adds support to start or stop a particular queue that is
associated with the adapter.

The start function enables the Tx adapter to begin enqueueing packets
to the Tx queue. The stop function stops the Tx adapter from
transmitting any mbufs to the Tx queue; the adapter also frees any
mbufs that it may have buffered for this queue. Until the queue is
started again, all inflight packets destined to the queue are freed.
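For illustration only (not part of this patch), below is a minimal
application-side sketch of how the new API pair could be used; the
helper name pause_and_resume_txq is hypothetical and error handling is
trimmed:

#include <rte_event_eth_tx_adapter.h>

/* Illustrative helper: eth_dev_id/tx_queue_id refer to a queue that was
 * previously added to the Tx adapter with
 * rte_event_eth_tx_adapter_queue_add().
 */
static int
pause_and_resume_txq(uint16_t eth_dev_id, uint16_t tx_queue_id)
{
	int ret;

	/* Stop the queue: the adapter frees mbufs buffered for this queue
	 * and drops further inflight packets destined to it.
	 */
	ret = rte_event_eth_tx_adapter_queue_stop(eth_dev_id, tx_queue_id);
	if (ret < 0)
		return ret;

	/* ... reconfigure or otherwise service the ethdev Tx queue ... */

	/* Allow the adapter to resume enqueueing to this queue. */
	return rte_event_eth_tx_adapter_queue_start(eth_dev_id, tx_queue_id);
}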
Signed-off-by: Naga Harish K S V
---
 lib/eventdev/eventdev_pmd.h             |  41 +++++++++
 lib/eventdev/rte_event_eth_tx_adapter.c | 113 +++++++++++++++++++++++-
 lib/eventdev/rte_event_eth_tx_adapter.h |  39 ++++++++
 lib/eventdev/version.map                |   2 +
 4 files changed, 191 insertions(+), 4 deletions(-)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index f514a37575..a27c0883c6 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1294,6 +1294,43 @@ typedef int (*eventdev_eth_tx_adapter_stats_reset_t)(uint8_t id,
 typedef int (*eventdev_eth_tx_adapter_instance_get_t)
 	(uint16_t eth_dev_id, uint16_t tx_queue_id, uint8_t *txa_inst_id);
 
+/**
+ * Start a Tx queue that is assigned to the Tx adapter instance
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param eth_dev_id
+ *  Port identifier of Ethernet device
+ *
+ * @param tx_queue_id
+ *  Ethernet device Tx queue index
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_start)
+	(uint8_t id, uint16_t eth_dev_id, uint16_t tx_queue_id);
+
+/**
+ * Stop a Tx queue that is assigned to the Tx adapter instance
+ *
+ * @param id
+ *  Adapter identifier
+ *
+ * @param eth_dev_id
+ *  Port identifier of Ethernet device
+ *
+ * @param tx_queue_id
+ *  Ethernet device Tx queue index
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+typedef int (*eventdev_eth_tx_adapter_queue_stop)
+	(uint8_t id, uint16_t eth_dev_id, uint16_t tx_queue_id);
 
 /** Event device operations function pointer table */
 struct eventdev_ops {
@@ -1409,6 +1446,10 @@ struct eventdev_ops {
 	/**< Reset eth Tx adapter statistics */
 	eventdev_eth_tx_adapter_instance_get_t eth_tx_adapter_instance_get;
 	/**< Get Tx adapter instance id for Tx queue */
+	eventdev_eth_tx_adapter_queue_start eth_tx_adapter_queue_start;
+	/**< Start Tx queue assigned to Tx adapter instance */
+	eventdev_eth_tx_adapter_queue_stop eth_tx_adapter_queue_stop;
+	/**< Stop Tx queue assigned to Tx adapter instance */
 
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index aaef352f5c..d0ed11ade5 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -47,6 +47,12 @@
 #define txa_dev_instance_get(id) \
 	txa_evdev(id)->dev_ops->eth_tx_adapter_instance_get
 
+#define txa_dev_queue_start(id) \
+	txa_evdev(id)->dev_ops->eth_tx_adapter_queue_start
+
+#define txa_dev_queue_stop(id) \
+	txa_evdev(id)->dev_ops->eth_tx_adapter_queue_stop
+
 #define RTE_EVENT_ETH_TX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \
 	do { \
 		if (!txa_valid_id(id)) { \
@@ -94,6 +100,8 @@ struct txa_retry {
 struct txa_service_queue_info {
 	/* Queue has been added */
 	uint8_t added;
+	/* Queue is stopped */
+	bool stopped;
 	/* Retry callback argument */
 	struct txa_retry txa_retry;
 	/* Tx buffer */
@@ -557,7 +565,7 @@ txa_process_event_vector(struct txa_service_data *txa,
 	port = vec->port;
 	queue = vec->queue;
 	tqi = txa_service_queue(txa, port, queue);
-	if (unlikely(tqi == NULL || !tqi->added)) {
+	if (unlikely(tqi == NULL || !tqi->added || tqi->stopped)) {
 		rte_pktmbuf_free_bulk(mbufs, vec->nb_elem);
 		rte_mempool_put(rte_mempool_from_obj(vec), vec);
 		return 0;
@@ -571,7 +579,8 @@ txa_process_event_vector(struct txa_service_data *txa,
 			port = mbufs[i]->port;
 			queue = rte_event_eth_tx_adapter_txq_get(mbufs[i]);
 			tqi = txa_service_queue(txa, port, queue);
-			if (unlikely(tqi == NULL || !tqi->added)) {
+			if (unlikely(tqi == NULL || !tqi->added ||
+				     tqi->stopped)) {
 				rte_pktmbuf_free(mbufs[i]);
 				continue;
 			}
@@ -608,7 +617,8 @@ txa_service_tx(struct txa_service_data *txa, struct rte_event *ev,
 		queue = rte_event_eth_tx_adapter_txq_get(m);
 
 		tqi = txa_service_queue(txa, port, queue);
-		if (unlikely(tqi == NULL || !tqi->added)) {
+		if (unlikely(tqi == NULL || !tqi->added ||
+			     tqi->stopped)) {
 			rte_pktmbuf_free(m);
 			continue;
 		}
@@ -672,7 +682,8 @@ txa_service_func(void *args)
 		for (q = 0; q < dev->data->nb_tx_queues; q++) {
 
 			tqi = txa_service_queue(txa, i, q);
-			if (unlikely(tqi == NULL || !tqi->added))
+			if (unlikely(tqi == NULL || !tqi->added ||
+				     tqi->stopped))
 				continue;
 
 			nb_tx += rte_eth_tx_buffer_flush(i, q,
@@ -867,6 +878,7 @@ txa_service_queue_add(uint8_t id,
 
 	tqi->tx_buf = tb;
 	tqi->added = 1;
+	tqi->stopped = false;
 	tdi->nb_queues++;
 	txa->nb_queues++;
 
@@ -885,6 +897,20 @@ txa_service_queue_add(uint8_t id,
 	return -1;
 }
 
+static inline void
+txa_txq_buffer_drain(struct txa_service_queue_info *tqi)
+{
+	struct rte_eth_dev_tx_buffer *b;
+	uint16_t i;
+
+	b = tqi->tx_buf;
+
+	for (i = 0; i < b->length; i++)
+		rte_pktmbuf_free(b->pkts[i]);
+
+	b->length = 0;
+}
+
 static int
 txa_service_queue_del(uint8_t id,
 		      const struct rte_eth_dev *dev,
@@ -930,6 +956,8 @@ txa_service_queue_del(uint8_t id,
 		if (tqi == NULL || !tqi->added)
 			goto ret_unlock;
 
+		/* Drain the buffered mbufs */
+		txa_txq_buffer_drain(tqi);
 		tb = tqi->tx_buf;
 		tqi->added = 0;
 		tqi->tx_buf = NULL;
@@ -1320,3 +1348,80 @@ rte_event_eth_tx_adapter_instance_get(uint16_t eth_dev_id,
 
 	return -EINVAL;
 }
+
+static int
+txa_queue_state_set(uint16_t eth_dev_id, uint16_t tx_queue_id, bool state)
+{
+	struct txa_service_data *txa;
+	struct txa_service_queue_info *tqi;
+	uint8_t txa_inst_id;
+	int ret;
+	uint32_t caps = 0;
+
+	/* The API below already validates the input parameters,
+	 * so the validation is not repeated here.
+	 */
+	ret = rte_event_eth_tx_adapter_instance_get(eth_dev_id,
+						    tx_queue_id,
+						    &txa_inst_id);
+	if (ret < 0)
+		return -EINVAL;
+
+	TXA_CHECK_OR_ERR_RET(txa_inst_id);
+
+	txa = txa_service_id_to_data(txa_inst_id);
+	ret = rte_event_eth_tx_adapter_caps_get(txa->eventdev_id,
+						eth_dev_id,
+						&caps);
+	if (state == true) {
+		if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) {
+			ret = txa_dev_queue_start(txa_inst_id) ?
+				txa_dev_queue_start(txa_inst_id)(txa_inst_id,
+								 eth_dev_id,
+								 tx_queue_id)
+				: -EINVAL;
+			if (ret == 0)
+				return ret;
+		}
+		rte_spinlock_lock(&txa->tx_lock);
+		tqi = txa_service_queue(txa, eth_dev_id, tx_queue_id);
+		if (unlikely(tqi == NULL || !tqi->added)) {
+			rte_spinlock_unlock(&txa->tx_lock);
+			return -EINVAL;
+		}
+		tqi->stopped = false;
+		rte_spinlock_unlock(&txa->tx_lock);
+	} else {
+		if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) {
+			ret = txa_dev_queue_stop(txa_inst_id) ?
+				txa_dev_queue_stop(txa_inst_id)(txa_inst_id,
+								eth_dev_id,
+								tx_queue_id)
+				: -EINVAL;
+			if (ret == 0)
+				return ret;
+		}
+		rte_spinlock_lock(&txa->tx_lock);
+		tqi = txa_service_queue(txa, eth_dev_id, tx_queue_id);
+		if (unlikely(tqi == NULL || !tqi->added)) {
+			rte_spinlock_unlock(&txa->tx_lock);
+			return -EINVAL;
+		}
+		txa_txq_buffer_drain(tqi);
+		tqi->stopped = true;
+		rte_spinlock_unlock(&txa->tx_lock);
+	}
+	return 0;
+}
+
+int
+rte_event_eth_tx_adapter_queue_start(uint16_t eth_dev_id, uint16_t tx_queue_id)
+{
+	return txa_queue_state_set(eth_dev_id, tx_queue_id, true);
+}
+
+int
+rte_event_eth_tx_adapter_queue_stop(uint16_t eth_dev_id, uint16_t tx_queue_id)
+{
+	return txa_queue_state_set(eth_dev_id, tx_queue_id, false);
+}
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.h b/lib/eventdev/rte_event_eth_tx_adapter.h
index 9432b740e8..77e394e1ac 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.h
+++ b/lib/eventdev/rte_event_eth_tx_adapter.h
@@ -35,6 +35,8 @@
  *  - rte_event_eth_tx_adapter_event_port_get()
  *  - rte_event_eth_tx_adapter_service_id_get()
  *  - rte_event_eth_tx_adapter_instance_get()
+ *  - rte_event_eth_tx_adapter_queue_start()
+ *  - rte_event_eth_tx_adapter_queue_stop()
  *
  * The application creates the adapter using
  * rte_event_eth_tx_adapter_create() or rte_event_eth_tx_adapter_create_ext().
@@ -446,6 +448,43 @@ int
 rte_event_eth_tx_adapter_instance_get(uint16_t eth_dev_id,
 				      uint16_t tx_queue_id,
 				      uint8_t *txa_inst_id);
 
+/**
+ * Enables the Tx Adapter to start enqueueing packets to the
+ * Tx queue.
+ *
+ * This function is provided so that the application can
+ * resume enqueueing events that reference packets for this
+ * Tx queue after calling
+ * rte_event_eth_tx_adapter_queue_stop().
+ *
+ * @param eth_dev_id
+ *  Port identifier of Ethernet device.
+ * @param tx_queue_id
+ *  Ethernet device transmit queue index.
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+__rte_experimental
+int
+rte_event_eth_tx_adapter_queue_start(uint16_t eth_dev_id, uint16_t tx_queue_id);
+
+/**
+ * Stops the Tx Adapter from transmitting any mbufs to the
+ * Tx queue. The Tx Adapter also frees any mbufs
+ * that it may have buffered for this queue.
+ *
+ * @param eth_dev_id
+ *  Port identifier of Ethernet device.
+ * @param tx_queue_id
+ *  Ethernet device transmit queue index.
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+__rte_experimental
+int
+rte_event_eth_tx_adapter_queue_stop(uint16_t eth_dev_id, uint16_t tx_queue_id);
 
 #ifdef __cplusplus
 }
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 9a71cf3f8f..dd63ec6f68 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -116,6 +116,8 @@ EXPERIMENTAL {
 	# added in 22.11
 	rte_event_eth_rx_adapter_instance_get;
 	rte_event_eth_tx_adapter_instance_get;
+	rte_event_eth_tx_adapter_queue_start;
+	rte_event_eth_tx_adapter_queue_stop;
 };
 
 INTERNAL {
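As a non-authoritative sketch (not part of this patch), the snippet
below shows roughly how an eventdev PMD that reports
RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT might wire up the new
callbacks; the driver names (dummy_txa_queue_start/stop,
dummy_eventdev_ops) are hypothetical, while the struct fields and
typedef signatures come from the eventdev_pmd.h hunk above:

#include <stdint.h>
#include <rte_common.h>
#include "eventdev_pmd.h"

/* Hypothetical driver callbacks matching the new typedefs. */
static int
dummy_txa_queue_start(uint8_t id, uint16_t eth_dev_id, uint16_t tx_queue_id)
{
	/* Device-specific: re-enable transmission on the Tx queue. */
	RTE_SET_USED(id);
	RTE_SET_USED(eth_dev_id);
	RTE_SET_USED(tx_queue_id);
	return 0;
}

static int
dummy_txa_queue_stop(uint8_t id, uint16_t eth_dev_id, uint16_t tx_queue_id)
{
	/* Device-specific: stop transmission and release buffered mbufs. */
	RTE_SET_USED(id);
	RTE_SET_USED(eth_dev_id);
	RTE_SET_USED(tx_queue_id);
	return 0;
}

static struct eventdev_ops dummy_eventdev_ops = {
	/* ... existing callbacks elided ... */
	.eth_tx_adapter_queue_start = dummy_txa_queue_start,
	.eth_tx_adapter_queue_stop = dummy_txa_queue_stop,
};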