From patchwork Fri Feb 17 07:32:22 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124111
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v3 04/10] net/gve: support queue release and stop for DQO
Date: Fri, 17 Feb 2023 15:32:22 +0800
Message-Id: <20230217073228.340815-5-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com>
 <20230217073228.340815-1-junfeng.guo@intel.com>

Add support for queue operations:
 - gve_tx_queue_release_dqo
 - gve_rx_queue_release_dqo
 - gve_stop_tx_queues_dqo
 - gve_stop_rx_queues_dqo

Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
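Note for reviewers (illustration only, not part of the commit): a minimal
sketch of how the new DQO callbacks are reached through the public ethdev
API. The helper name and the printf error handling below are assumptions
made for the sketch. rte_eth_dev_stop() ends up in gve_stop_tx_queues()/
gve_stop_rx_queues(), which with this patch dispatch to the *_dqo variants
when the device is not GQI, and rte_eth_dev_close() reaches gve_dev_close(),
which frees each queue via gve_tx_queue_release_dqo()/
gve_rx_queue_release_dqo().

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical teardown helper for a configured and started gve port. */
static void
gve_example_teardown(uint16_t port_id)
{
	int ret;

	/* Stop all queues; on a DQO device this path calls
	 * gve_stop_tx_queues_dqo() and gve_stop_rx_queues_dqo(). */
	ret = rte_eth_dev_stop(port_id);
	if (ret != 0)
		printf("stop of port %u failed: %d\n", port_id, ret);

	/* Close the port; queue memory is released through
	 * gve_tx_queue_release_dqo()/gve_rx_queue_release_dqo(). */
	ret = rte_eth_dev_close(port_id);
	if (ret != 0)
		printf("close of port %u failed: %d\n", port_id, ret);
}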
 drivers/net/gve/gve_ethdev.c | 18 +++++++++---
 drivers/net/gve/gve_ethdev.h | 12 ++++++++
 drivers/net/gve/gve_rx.c     |  5 +++-
 drivers/net/gve/gve_rx_dqo.c | 57 ++++++++++++++++++++++++++++++++++++
 drivers/net/gve/gve_tx.c     |  5 +++-
 drivers/net/gve/gve_tx_dqo.c | 55 ++++++++++++++++++++++++++++++++++
 6 files changed, 146 insertions(+), 6 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 413696890f..efa121ca4d 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -292,11 +292,19 @@ gve_dev_close(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(ERR, "Failed to stop dev.");
 	}
 
-	for (i = 0; i < dev->data->nb_tx_queues; i++)
-		gve_tx_queue_release(dev, i);
+	if (gve_is_gqi(priv)) {
+		for (i = 0; i < dev->data->nb_tx_queues; i++)
+			gve_tx_queue_release(dev, i);
+
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			gve_rx_queue_release(dev, i);
+	} else {
+		for (i = 0; i < dev->data->nb_tx_queues; i++)
+			gve_tx_queue_release_dqo(dev, i);
 
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		gve_rx_queue_release(dev, i);
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			gve_rx_queue_release_dqo(dev, i);
+	}
 
 	gve_free_qpls(priv);
 	rte_free(priv->adminq);
@@ -470,6 +478,8 @@ static const struct eth_dev_ops gve_eth_dev_ops_dqo = {
 	.dev_infos_get = gve_dev_info_get,
 	.rx_queue_setup = gve_rx_queue_setup_dqo,
 	.tx_queue_setup = gve_tx_queue_setup_dqo,
+	.rx_queue_release = gve_rx_queue_release_dqo,
+	.tx_queue_release = gve_tx_queue_release_dqo,
 	.link_update = gve_link_update,
 	.mtu_set = gve_dev_mtu_set,
 };
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index c4e5b8cb43..5cc57afdb9 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -364,4 +364,16 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 		       uint16_t nb_desc, unsigned int socket_id,
 		       const struct rte_eth_txconf *conf);
 
+void
+gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid);
+
+void
+gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid);
+
+void
+gve_stop_tx_queues_dqo(struct rte_eth_dev *dev);
+
+void
+gve_stop_rx_queues_dqo(struct rte_eth_dev *dev);
+
 #endif /* _GVE_ETHDEV_H_ */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 66fbcf3930..e264bcadad 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2022 Intel Corporation
+ * Copyright(C) 2022-2023 Intel Corporation
  */
 
 #include "gve_ethdev.h"
@@ -354,6 +354,9 @@ gve_stop_rx_queues(struct rte_eth_dev *dev)
 	uint16_t i;
 	int err;
 
+	if (!gve_is_gqi(hw))
+		return gve_stop_rx_queues_dqo(dev);
+
 	err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues);
 	if (err != 0)
 		PMD_DRV_LOG(WARNING, "failed to destroy rxqs");
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 9c412c1481..8236cd7b50 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -5,6 +5,38 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq)
+{
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+
+	rxq->nb_avail = rxq->nb_rx_desc;
+}
+
+void
+gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct gve_rx_queue *q = dev->data->rx_queues[qid];
+
+	if (q == NULL)
+		return;
+
+	gve_release_rxq_mbufs_dqo(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->compl_ring_mz);
+	rte_memzone_free(q->mz);
+	rte_memzone_free(q->qres_mz);
+	q->qres = NULL;
+	rte_free(q);
+}
+
 static void
 gve_reset_rxq_dqo(struct gve_rx_queue *rxq)
 {
@@ -54,6 +86,12 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	}
 	nb_desc = hw->rx_desc_cnt;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_id]) {
+		gve_rx_queue_release_dqo(dev, queue_id);
+		dev->data->rx_queues[queue_id] = NULL;
+	}
+
 	/* Allocate the RX queue data structure. */
 	rxq = rte_zmalloc_socket("gve rxq",
 				 sizeof(struct gve_rx_queue),
@@ -152,3 +190,22 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_free(rxq);
 	return err;
 }
+
+void
+gve_stop_rx_queues_dqo(struct rte_eth_dev *dev)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	struct gve_rx_queue *rxq;
+	uint16_t i;
+	int err;
+
+	err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues);
+	if (err != 0)
+		PMD_DRV_LOG(WARNING, "failed to destroy rxqs");
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		gve_release_rxq_mbufs_dqo(rxq);
+		gve_reset_rxq_dqo(rxq);
+	}
+}
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index 9b41c59358..86f558d7a0 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2022 Intel Corporation
+ * Copyright(C) 2022-2023 Intel Corporation
  */
 
 #include "gve_ethdev.h"
@@ -671,6 +671,9 @@ gve_stop_tx_queues(struct rte_eth_dev *dev)
 	uint16_t i;
 	int err;
 
+	if (!gve_is_gqi(hw))
+		return gve_stop_tx_queues_dqo(dev);
+
 	err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues);
 	if (err != 0)
 		PMD_DRV_LOG(WARNING, "failed to destroy txqs");
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index acf4ee2952..34f131cd7e 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -5,6 +5,36 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq)
+{
+	uint16_t i;
+
+	for (i = 0; i < txq->sw_size; i++) {
+		if (txq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i]);
+			txq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+void
+gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct gve_tx_queue *q = dev->data->tx_queues[qid];
+
+	if (q == NULL)
+		return;
+
+	gve_release_txq_mbufs_dqo(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_memzone_free(q->compl_ring_mz);
+	rte_memzone_free(q->qres_mz);
+	q->qres = NULL;
+	rte_free(q);
+}
+
 static int
 check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh,
 		    uint16_t tx_free_thresh)
@@ -90,6 +120,12 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	}
 	nb_desc = hw->tx_desc_cnt;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_id]) {
+		gve_tx_queue_release_dqo(dev, queue_id);
+		dev->data->tx_queues[queue_id] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("gve txq",
 				 sizeof(struct gve_tx_queue),
@@ -182,3 +218,22 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_free(txq);
 	return err;
 }
+
+void
+gve_stop_tx_queues_dqo(struct rte_eth_dev *dev)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	struct gve_tx_queue *txq;
+	uint16_t i;
+	int err;
+
+	err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues);
+	if (err != 0)
+		PMD_DRV_LOG(WARNING, "failed to destroy txqs");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		gve_release_txq_mbufs_dqo(txq);
+		gve_reset_txq_dqo(txq);
+	}
+}
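
Note (illustration only, not part of the patch): the "Free memory if needed"
hunks, together with the newly registered .rx_queue_release and
.tx_queue_release callbacks, mean that a second setup call on an already
configured queue id releases the old DQO queue before the replacement is
allocated, so the sw_ring and memzones are not leaked. A hypothetical
reconfiguration sequence through the public ethdev API follows; the helper
name, queue id 0, and the NULL queue configs are assumptions made for this
sketch.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Hypothetical helper: re-create RX/TX queue 0 of a stopped gve port,
 * e.g. to switch to a different mempool. The gve PMD overrides nb_desc
 * with the device descriptor count, so only the mempool really changes
 * here. The second *_queue_setup() call on the same queue id releases the
 * old DQO queue via gve_rx_queue_release_dqo()/gve_tx_queue_release_dqo()
 * before allocating the replacement. */
static int
gve_example_requeue0(uint16_t port_id, uint16_t nb_desc,
		     struct rte_mempool *new_pool)
{
	int ret;

	ret = rte_eth_dev_stop(port_id);
	if (ret != 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, nb_desc,
				     rte_socket_id(), NULL, new_pool);
	if (ret != 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, nb_desc,
				     rte_socket_id(), NULL);
	if (ret != 0)
		return ret;

	return rte_eth_dev_start(port_id);
}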