From patchwork Thu Feb 9 08:45:27 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123549
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Mingxia Liu
To: dev@dpdk.org, qi.z.zhang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com
Cc: Mingxia Liu
Subject: [PATCH v5 07/21] net/cpfl: support queue release
Date: Thu, 9 Feb 2023 08:45:27 +0000
Message-Id: <20230209084541.2712723-8-mingxia.liu@intel.com>
In-Reply-To: <20230209084541.2712723-1-mingxia.liu@intel.com>
References: <20230118075738.904616-1-mingxia.liu@intel.com>
 <20230209084541.2712723-1-mingxia.liu@intel.com>

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Mingxia Liu
---
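Context note (not part of the change): the driver never calls these release
callbacks directly. The ethdev layer invokes them on behalf of the
application, e.g. when the same queue id is set up a second time or when the
port is closed. A minimal, illustrative application sketch using standard
ethdev calls follows; the port id, descriptor counts and mempool are
placeholders, not taken from this patch.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Illustrative only: resizing Rx queue 0 of an already configured,
 * not-yet-started port. The second rte_eth_rx_queue_setup() call makes
 * the PMD free the old queue first; rte_eth_dev_close() releases all
 * remaining queues through the rx/tx_queue_release callbacks added below. */
static int
resize_rx_queue(uint16_t port_id, struct rte_mempool *mp)
{
	int ret;

	/* First setup allocates the queue (1024 descriptors). */
	ret = rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
				     NULL, mp);
	if (ret != 0)
		return ret;

	/* Setting up the same queue id again triggers the
	 * "Free memory if needed" path in cpfl_rx_queue_setup(). */
	ret = rte_eth_rx_queue_setup(port_id, 0, 2048, rte_socket_id(),
				     NULL, mp);
	if (ret != 0)
		return ret;

	/* Closing the port releases every queue via the new callbacks. */
	return rte_eth_dev_close(port_id);
}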
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 8ce7329b78..f59ad56db2 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -623,6 +623,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start = cpfl_tx_queue_start,
 	.rx_queue_stop = cpfl_rx_queue_stop,
 	.tx_queue_stop = cpfl_tx_queue_stop,
+	.rx_queue_release = cpfl_dev_rx_queue_release,
+	.tx_queue_release = cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get = cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ab5383a635..aa0f6bd792 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_qc_txq_mbufs_release,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_qc_split_rx_descq_reset(rxq);
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 
 	dev->data->tx_queues[queue_idx] = txq;
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e9b810deaa..f5882401dc 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
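The new cpfl callbacks are thin wrappers because the actual teardown lives in
the shared idpf common code, reached through the ops pointers this patch wires
up (def_rxq_ops/def_txq_ops with release_mbufs). To show why those assignments
matter, here is a simplified sketch of the general shape of such a release
helper. This is not the actual idpf_qc_rx_queue_release: split-queue buffer
queues are omitted, and the sw_ring/mz member names are assumptions.

#include <rte_malloc.h>
#include <rte_memzone.h>
#include <idpf_common_rxtx.h>	/* header name as used inside the driver tree */

/* Sketch only: a NULL-safe release helper that tears down in reverse
 * setup order. */
static void
sketch_rx_queue_release(void *rxq)
{
	struct idpf_rx_queue *q = rxq;

	/* The ethdev slot may never have been set up. */
	if (q == NULL)
		return;

	/* Drop any mbufs still sitting in the ring, via the per-queue
	 * ops assigned in cpfl_rx_queue_setup() above. Without the
	 * ops assignment this would dereference NULL. */
	if (q->ops != NULL)
		q->ops->release_mbufs(q);

	rte_free(q->sw_ring);		/* software ring of mbuf pointers (assumed member) */
	rte_memzone_free(q->mz);	/* DMA zone for the HW descriptor ring (assumed member) */
	rte_free(q);
}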