From patchwork Tue Dec 18 08:46:25 2018 X-Patchwork-Submitter: Wenzhuo Lu X-Patchwork-Id: 49058 X-Patchwork-Delegate: qi.z.zhang@intel.com From: Wenzhuo Lu To: dev@dpdk.org Cc: Wenzhuo Lu , Qiming Yang , Xiaoyun Li , Jingjing Wu Date: Tue, 18 Dec 2018 16:46:25 +0800 Message-Id: <1545122800-57293-17-git-send-email-wenzhuo.lu@intel.com> In-Reply-To: <1545122800-57293-1-git-send-email-wenzhuo.lu@intel.com> References: <1542956179-80951-1-git-send-email-wenzhuo.lu@intel.com> <1545122800-57293-1-git-send-email-wenzhuo.lu@intel.com> Subject: [dpdk-dev] [PATCH v6 16/31] net/ice: support device and queue ops Normally, when the device is started or stopped, its queues should be started or stopped as well. This patch supports both. The following ops are added: dev_configure, dev_start, dev_stop, dev_close, dev_reset, rx_queue_start, rx_queue_stop, tx_queue_start, tx_queue_stop, rx_queue_setup, rx_queue_release, tx_queue_setup, tx_queue_release. Signed-off-by: Wenzhuo Lu Signed-off-by: Qiming Yang Signed-off-by: Xiaoyun Li Signed-off-by: Jingjing Wu --- config/common_base | 2 + doc/guides/nics/features/ice.ini | 1 + doc/guides/nics/ice.rst | 8 + drivers/net/ice/Makefile | 3 +- drivers/net/ice/ice_ethdev.c | 199 ++++++++- drivers/net/ice/ice_rxtx.c | 923 +++++++++++++++++++++++++++++++++++++++ drivers/net/ice/ice_rxtx.h | 137 ++++++ drivers/net/ice/meson.build | 3 +- 8 files changed, 1273 insertions(+), 3 deletions(-) create mode 100644 drivers/net/ice/ice_rxtx.c create mode 100644 drivers/net/ice/ice_rxtx.h diff --git a/config/common_base b/config/common_base index 872f440..a342760 100644 --- a/config/common_base +++ b/config/common_base @@ -303,6 +303,8 @@ CONFIG_RTE_LIBRTE_ICE_PMD=y CONFIG_RTE_LIBRTE_ICE_DEBUG_RX=n CONFIG_RTE_LIBRTE_ICE_DEBUG_TX=n CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n +CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y +CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n # Compile burst-oriented AVF PMD driver # diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini index 085e848..a43a9cd 100644 --- a/doc/guides/nics/features/ice.ini +++ b/doc/guides/nics/features/ice.ini @@ -4,6 +4,7 @@ ; Refer to default.ini for the full list of available PMD features.
; [Features] +Queue start/stop = Y BSD nic_uio = Y Linux UIO = Y Linux VFIO = Y diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst index 946ed04..96a594f 100644 --- a/doc/guides/nics/ice.rst +++ b/doc/guides/nics/ice.rst @@ -38,6 +38,14 @@ Please note that enabling debugging options may affect system performance. Toggle display of generic debugging messages. +- ``CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC`` (default ``y``) + + Toggle bulk allocation for RX. + +- ``CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC`` (default ``n``) + + Toggle to use a 16-byte RX descriptor; by default the RX descriptor is 32 bytes. + Runtime Config Options ~~~~~~~~~~~~~~~~~~~~~~ diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile index 70f23e3..bc24444 100644 --- a/drivers/net/ice/Makefile +++ b/drivers/net/ice/Makefile @@ -11,7 +11,7 @@ LIB = librte_pmd_ice.a CFLAGS += -O3 CFLAGS += $(WERROR_FLAGS) -LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci +LDLIBS += -lrte_eal -lrte_ethdev -lrte_kvargs -lrte_bus_pci -lrte_mempool EXPORT_MAP := rte_pmd_ice_version.map @@ -50,5 +50,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_switch.c SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_nvm.c SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c +SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_rxtx.c include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index a514755..0274d9e 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -6,6 +6,7 @@ #include "base/ice_sched.h" #include "ice_ethdev.h" +#include "ice_rxtx.h" #define ICE_MAX_QP_NUM "max_queue_pair_num" #define ICE_DFLT_OUTER_TAG_TYPE ICE_AQ_VSI_OUTER_TAG_VLAN_9100 @@ -13,6 +14,12 @@ int ice_logtype_init; int ice_logtype_driver; +static int ice_dev_configure(struct rte_eth_dev *dev); +static int ice_dev_start(struct rte_eth_dev *dev); +static void ice_dev_stop(struct rte_eth_dev *dev); +static void ice_dev_close(struct rte_eth_dev *dev); +static int ice_dev_reset(struct rte_eth_dev *dev); + static const struct rte_pci_id pci_id_ice_map[] = { { RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_BACKPLANE) }, { RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810C_QSFP) }, @@ -21,7 +28,19 @@ }; static const struct eth_dev_ops ice_eth_dev_ops = { - .dev_configure = NULL, + .dev_configure = ice_dev_configure, + .dev_start = ice_dev_start, + .dev_stop = ice_dev_stop, + .dev_close = ice_dev_close, + .dev_reset = ice_dev_reset, + .rx_queue_start = ice_rx_queue_start, + .rx_queue_stop = ice_rx_queue_stop, + .tx_queue_start = ice_tx_queue_start, + .tx_queue_stop = ice_tx_queue_stop, + .rx_queue_setup = ice_rx_queue_setup, + .rx_queue_release = ice_rx_queue_release, + .tx_queue_setup = ice_tx_queue_setup, + .tx_queue_release = ice_tx_queue_release, }; static void @@ -559,11 +578,41 @@ } static void +ice_dev_stop(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + uint16_t i; + + /* avoid stopping again */ + if (pf->adapter_stopped) + return; + + /* stop and clear all Rx queues */ + for (i = 0; i < data->nb_rx_queues; i++) + ice_rx_queue_stop(dev, i); + + /* stop and clear all Tx queues */ + for (i = 0; i < data->nb_tx_queues; i++) + ice_tx_queue_stop(dev, i); + + /* Clear all queues and release mbufs */ + ice_clear_queues(dev); + + pf->adapter_stopped = true; +} + +static void ice_dev_close(struct rte_eth_dev *dev) { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct ice_hw *hw =
ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + ice_dev_stop(dev); + + /* release all queue resource */ + ice_free_queues(dev); + ice_res_pool_destroy(&pf->msix_pool); ice_release_vsi(pf->main_vsi); @@ -594,6 +643,154 @@ } static int +ice_dev_configure(__rte_unused struct rte_eth_dev *dev) +{ + struct ice_adapter *ad = + ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + + /* Initialize to TRUE. If any of Rx queues doesn't meet the + * bulk allocation or vector Rx preconditions we will reset it. + */ + ad->rx_bulk_alloc_allowed = true; + ad->tx_simple_allowed = true; + + return 0; +} + +static int ice_init_rss(struct ice_pf *pf) +{ + struct ice_hw *hw = ICE_PF_TO_HW(pf); + struct ice_vsi *vsi = pf->main_vsi; + struct rte_eth_dev *dev = pf->adapter->eth_dev; + struct rte_eth_rss_conf *rss_conf; + struct ice_aqc_get_set_rss_keys key; + uint16_t i, nb_q; + int ret = 0; + + rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + nb_q = dev->data->nb_rx_queues; + vsi->rss_key_size = ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE; + vsi->rss_lut_size = hw->func_caps.common_cap.rss_table_size; + + if (!vsi->rss_key) + vsi->rss_key = rte_zmalloc(NULL, + vsi->rss_key_size, 0); + if (!vsi->rss_lut) + vsi->rss_lut = rte_zmalloc(NULL, + vsi->rss_lut_size, 0); + + /* configure RSS key */ + if (!rss_conf->rss_key) { + /* Calculate the default hash key */ + for (i = 0; i <= vsi->rss_key_size; i++) + vsi->rss_key[i] = (uint8_t)rte_rand(); + } else { + rte_memcpy(vsi->rss_key, rss_conf->rss_key, + RTE_MIN(rss_conf->rss_key_len, + vsi->rss_key_size)); + } + rte_memcpy(key.standard_rss_key, vsi->rss_key, vsi->rss_key_size); + ret = ice_aq_set_rss_key(hw, vsi->idx, &key); + if (ret) + return -EINVAL; + + /* init RSS LUT table */ + for (i = 0; i < vsi->rss_lut_size; i++) + vsi->rss_lut[i] = i % nb_q; + + ret = ice_aq_set_rss_lut(hw, vsi->idx, + ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF, + vsi->rss_lut, vsi->rss_lut_size); + if (ret) + return -EINVAL; + + return 0; +} + +static int +ice_dev_start(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + uint16_t nb_rxq = 0; + uint16_t nb_txq, i; + int ret; + + /* program Tx queues' context in hardware */ + for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) { + ret = ice_tx_queue_start(dev, nb_txq); + if (ret) { + PMD_DRV_LOG(ERR, "fail to start Tx queue %u", nb_txq); + goto tx_err; + } + } + + /* program Rx queues' context in hardware*/ + for (nb_rxq = 0; nb_rxq < data->nb_rx_queues; nb_rxq++) { + ret = ice_rx_queue_start(dev, nb_rxq); + if (ret) { + PMD_DRV_LOG(ERR, "fail to start Rx queue %u", nb_rxq); + goto rx_err; + } + } + + ret = ice_init_rss(pf); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); + goto rx_err; + } + + ret = ice_aq_set_event_mask(hw, hw->port_info->lport, + ((u16)(ICE_AQ_LINK_EVENT_LINK_FAULT | + ICE_AQ_LINK_EVENT_PHY_TEMP_ALARM | + ICE_AQ_LINK_EVENT_EXCESSIVE_ERRORS | + ICE_AQ_LINK_EVENT_SIGNAL_DETECT | + ICE_AQ_LINK_EVENT_AN_COMPLETED | + ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED)), + NULL); + if (ret != ICE_SUCCESS) + PMD_DRV_LOG(WARNING, "Fail to set phy mask"); + + pf->adapter_stopped = false; + + return 0; + + /* stop the started queues if failed to start all queues */ +rx_err: + for (i = 0; i < nb_rxq; i++) + ice_rx_queue_stop(dev, i); +tx_err: + for (i = 0; i < nb_txq; i++) + ice_tx_queue_stop(dev, i); + + return -EIO; +} + +static int +ice_dev_reset(struct rte_eth_dev 
*dev) +{ + int ret; + + if (dev->data->sriov.active) + return -ENOTSUP; + + ret = ice_dev_uninit(dev); + if (ret) { + PMD_INIT_LOG(ERR, "failed to uninit device, status = %d", ret); + return -ENXIO; + } + + ret = ice_dev_init(dev); + if (ret) { + PMD_INIT_LOG(ERR, "failed to init device, status = %d", ret); + return -ENXIO; + } + + return 0; +} + +static int ice_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c new file mode 100644 index 0000000..9c5eee1 --- /dev/null +++ b/drivers/net/ice/ice_rxtx.c @@ -0,0 +1,923 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#include +#include + +#include "ice_rxtx.h" + +#define ICE_TD_CMD ICE_TX_DESC_CMD_EOP + +#define ICE_TX_CKSUM_OFFLOAD_MASK ( \ + PKT_TX_IP_CKSUM | \ + PKT_TX_L4_MASK | \ + PKT_TX_TCP_SEG | \ + PKT_TX_OUTER_IP_CKSUM) + +#define ICE_RX_ERR_BITS 0x3f + +static enum ice_status +ice_program_hw_rx_queue(struct ice_rx_queue *rxq) +{ + struct ice_vsi *vsi = rxq->vsi; + struct ice_hw *hw = ICE_VSI_TO_HW(vsi); + struct rte_eth_dev *dev = ICE_VSI_TO_ETH_DEV(rxq->vsi); + struct ice_rlan_ctx rx_ctx; + enum ice_status err; + uint16_t buf_size, len; + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + uint32_t regval; + + /** + * The kernel driver uses flex descriptor. It sets the register + * to flex descriptor mode. + * DPDK uses legacy descriptor. It should set the register back + * to the default value, then uses legacy descriptor mode. + */ + regval = (0x01 << QRXFLXP_CNTXT_RXDID_PRIO_S) & + QRXFLXP_CNTXT_RXDID_PRIO_M; + ICE_WRITE_REG(hw, QRXFLXP_CNTXT(rxq->reg_idx), regval); + + /* Set buffer size as the head split is disabled. */ + buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - + RTE_PKTMBUF_HEADROOM); + rxq->rx_hdr_len = 0; + rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); + len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len; + rxq->max_pkt_len = RTE_MIN(len, + dev->data->dev_conf.rxmode.max_rx_pkt_len); + + if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + if (rxq->max_pkt_len <= ETHER_MAX_LEN || + rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) { + PMD_DRV_LOG(ERR, "maximum packet length must " + "be larger than %u and smaller than %u," + "as jumbo frame is enabled", + (uint32_t)ETHER_MAX_LEN, + (uint32_t)ICE_FRAME_SIZE_MAX); + return -EINVAL; + } + } else { + if (rxq->max_pkt_len < ETHER_MIN_LEN || + rxq->max_pkt_len > ETHER_MAX_LEN) { + PMD_DRV_LOG(ERR, "maximum packet length must be " + "larger than %u and smaller than %u, " + "as jumbo frame is disabled", + (uint32_t)ETHER_MIN_LEN, + (uint32_t)ETHER_MAX_LEN); + return -EINVAL; + } + } + + memset(&rx_ctx, 0, sizeof(rx_ctx)); + + rx_ctx.base = rxq->rx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT; + rx_ctx.qlen = rxq->nb_rx_desc; + rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; + rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; + rx_ctx.dtype = 0; /* No Header Split mode */ +#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC + rx_ctx.dsize = 1; /* 32B descriptors */ +#endif + rx_ctx.rxmax = rxq->max_pkt_len; + /* TPH: Transaction Layer Packet (TLP) processing hints */ + rx_ctx.tphrdesc_ena = 1; + rx_ctx.tphwdesc_ena = 1; + rx_ctx.tphdata_ena = 1; + rx_ctx.tphhead_ena = 1; + /* Low Receive Queue Threshold defined in 64 descriptors units. + * When the number of free descriptors goes below the lrxqthresh, + * an immediate interrupt is triggered. 
+ */ + rx_ctx.lrxqthresh = 2; + /*default use 32 byte descriptor, vlan tag extract to L2TAG2(1st)*/ + rx_ctx.l2tsel = 1; + rx_ctx.showiv = 0; + + err = ice_clear_rxq_ctx(hw, rxq->reg_idx); + if (err) { + PMD_DRV_LOG(ERR, "Failed to clear Lan Rx queue (%u) context", + rxq->queue_id); + return -EINVAL; + } + err = ice_write_rxq_ctx(hw, &rx_ctx, rxq->reg_idx); + if (err) { + PMD_DRV_LOG(ERR, "Failed to write Lan Rx queue (%u) context", + rxq->queue_id); + return -EINVAL; + } + + buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - + RTE_PKTMBUF_HEADROOM); + + rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx); + + /* Init the Rx tail register*/ + ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); + + return 0; +} + +/* Allocate mbufs for all descriptors in rx queue */ +static int +ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) +{ + struct ice_rx_entry *rxe = rxq->sw_ring; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + volatile union ice_rx_desc *rxd; + struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp); + + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + + rxd = &rxq->rx_ring[i]; + rxd->read.pkt_addr = dma_addr; + rxd->read.hdr_addr = 0; +#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC + rxd->read.rsvd1 = 0; + rxd->read.rsvd2 = 0; +#endif + rxe[i].mbuf = mbuf; + } + + return 0; +} + +/* Free all mbufs for descriptors in rx queue */ +static void +ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq) +{ + uint16_t i; + + if (!rxq || !rxq->sw_ring) { + PMD_DRV_LOG(DEBUG, "Pointer to sw_ring is NULL"); + return; + } + + for (i = 0; i < rxq->nb_rx_desc; i++) { + if (rxq->sw_ring[i].mbuf) { + rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf); + rxq->sw_ring[i].mbuf = NULL; + } + } +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + if (rxq->rx_nb_avail == 0) + return; + for (i = 0; i < rxq->rx_nb_avail; i++) { + struct rte_mbuf *mbuf; + + mbuf = rxq->rx_stage[rxq->rx_next_avail + i]; + rte_pktmbuf_free_seg(mbuf); + } + rxq->rx_nb_avail = 0; +#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */ +} + +/* turn on or off rx queue + * @q_idx: queue index in pf scope + * @on: turn on or off the queue + */ +static int +ice_switch_rx_queue(struct ice_hw *hw, uint16_t q_idx, bool on) +{ + uint32_t reg; + uint16_t j; + + /* QRX_CTRL = QRX_ENA */ + reg = ICE_READ_REG(hw, QRX_CTRL(q_idx)); + + if (on) { + if (reg & QRX_CTRL_QENA_STAT_M) + return 0; /* Already on, skip */ + reg |= QRX_CTRL_QENA_REQ_M; + } else { + if (!(reg & QRX_CTRL_QENA_STAT_M)) + return 0; /* Already off, skip */ + reg &= ~QRX_CTRL_QENA_REQ_M; + } + + /* Write the register */ + ICE_WRITE_REG(hw, QRX_CTRL(q_idx), reg); + /* Check the result. It is said that QENA_STAT + * follows the QENA_REQ not more than 10 use. + * TODO: need to change the wait counter later + */ + for (j = 0; j < ICE_CHK_Q_ENA_COUNT; j++) { + rte_delay_us(ICE_CHK_Q_ENA_INTERVAL_US); + reg = ICE_READ_REG(hw, QRX_CTRL(q_idx)); + if (on) { + if ((reg & QRX_CTRL_QENA_REQ_M) && + (reg & QRX_CTRL_QENA_STAT_M)) + break; + } else { + if (!(reg & QRX_CTRL_QENA_REQ_M) && + !(reg & QRX_CTRL_QENA_STAT_M)) + break; + } + } + + /* Check if it is timeout */ + if (j >= ICE_CHK_Q_ENA_COUNT) { + PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]", + (on ? 
"enable" : "disable"), q_idx); + return -ETIMEDOUT; + } + + return 0; +} + +static inline int +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC +ice_check_rx_burst_bulk_alloc_preconditions(struct ice_rx_queue *rxq) +#else +ice_check_rx_burst_bulk_alloc_preconditions + (__rte_unused struct ice_rx_queue *rxq) +#endif +{ + int ret = 0; + +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + if (!(rxq->rx_free_thresh >= ICE_RX_MAX_BURST)) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " + "rxq->rx_free_thresh=%d, " + "ICE_RX_MAX_BURST=%d", + rxq->rx_free_thresh, ICE_RX_MAX_BURST); + ret = -EINVAL; + } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " + "rxq->rx_free_thresh=%d, " + "rxq->nb_rx_desc=%d", + rxq->rx_free_thresh, rxq->nb_rx_desc); + ret = -EINVAL; + } else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " + "rxq->nb_rx_desc=%d, " + "rxq->rx_free_thresh=%d", + rxq->nb_rx_desc, rxq->rx_free_thresh); + ret = -EINVAL; + } +#else + ret = -EINVAL; +#endif + + return ret; +} + +/* reset fields in ice_rx_queue back to default */ +static void +ice_reset_rx_queue(struct ice_rx_queue *rxq) +{ + unsigned int i; + uint16_t len; + + if (!rxq) { + PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL"); + return; + } + +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + if (ice_check_rx_burst_bulk_alloc_preconditions(rxq) == 0) + len = (uint16_t)(rxq->nb_rx_desc + ICE_RX_MAX_BURST); + else +#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */ + len = rxq->nb_rx_desc; + + for (i = 0; i < len * sizeof(union ice_rx_desc); i++) + ((volatile char *)rxq->rx_ring)[i] = 0; + +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf)); + for (i = 0; i < ICE_RX_MAX_BURST; ++i) + rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf; + + rxq->rx_nb_avail = 0; + rxq->rx_next_avail = 0; + rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1); +#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */ + + rxq->rx_tail = 0; + rxq->nb_rx_hold = 0; + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; +} + +int +ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct ice_rx_queue *rxq; + int err; + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id >= dev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "RX queue %u is out of range %u", + rx_queue_id, dev->data->nb_rx_queues); + return -EINVAL; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + if (!rxq || !rxq->q_set) { + PMD_DRV_LOG(ERR, "RX queue %u not available or setup", + rx_queue_id); + return -EINVAL; + } + + err = ice_program_hw_rx_queue(rxq); + if (err) { + PMD_DRV_LOG(ERR, "fail to program RX queue %u", + rx_queue_id); + return -EIO; + } + + err = ice_alloc_rx_queue_mbufs(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf"); + return -ENOMEM; + } + + rte_wmb(); + + /* Init the RX tail register. 
*/ + ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); + + err = ice_switch_rx_queue(hw, rxq->reg_idx, TRUE); + if (err) { + PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on", + rx_queue_id); + + ice_rx_queue_release_mbufs(rxq); + ice_reset_rx_queue(rxq); + return -EINVAL; + } + + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STARTED; + + return 0; +} + +int +ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct ice_rx_queue *rxq; + int err; + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + + err = ice_switch_rx_queue(hw, rxq->reg_idx, FALSE); + if (err) { + PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", + rx_queue_id); + return -EINVAL; + } + ice_rx_queue_release_mbufs(rxq); + ice_reset_rx_queue(rxq); + dev->data->rx_queue_state[rx_queue_id] = + RTE_ETH_QUEUE_STATE_STOPPED; + } + + return 0; +} + +int +ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct ice_tx_queue *txq; + int err; + struct ice_vsi *vsi; + struct ice_hw *hw; + struct ice_aqc_add_tx_qgrp txq_elem; + struct ice_tlan_ctx tx_ctx; + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id >= dev->data->nb_tx_queues) { + PMD_DRV_LOG(ERR, "TX queue %u is out of range %u", + tx_queue_id, dev->data->nb_tx_queues); + return -EINVAL; + } + + txq = dev->data->tx_queues[tx_queue_id]; + if (!txq || !txq->q_set) { + PMD_DRV_LOG(ERR, "TX queue %u is not available or setup", + tx_queue_id); + return -EINVAL; + } + + vsi = txq->vsi; + hw = ICE_VSI_TO_HW(vsi); + + memset(&txq_elem, 0, sizeof(txq_elem)); + memset(&tx_ctx, 0, sizeof(tx_ctx)); + txq_elem.num_txqs = 1; + txq_elem.txqs[0].txq_id = rte_cpu_to_le_16(txq->reg_idx); + + tx_ctx.base = txq->tx_ring_phys_addr / ICE_QUEUE_BASE_ADDR_UNIT; + tx_ctx.qlen = txq->nb_tx_desc; + tx_ctx.pf_num = hw->pf_id; + tx_ctx.vmvf_type = ICE_TLAN_CTX_VMVF_TYPE_PF; + tx_ctx.src_vsi = vsi->vsi_id; + tx_ctx.port_num = hw->port_info->lport; + tx_ctx.tso_ena = 1; /* tso enable */ + tx_ctx.tso_qnum = txq->reg_idx; /* index for tso state structure */ + tx_ctx.legacy_int = 1; /* Legacy or Advanced Host Interface */ + + ice_set_ctx((uint8_t *)&tx_ctx, txq_elem.txqs[0].txq_ctx, + ice_tlan_ctx_info); + + txq->qtx_tail = hw->hw_addr + QTX_COMM_DBELL(txq->reg_idx); + + /* Init the Tx tail register*/ + ICE_PCI_REG_WRITE(txq->qtx_tail, 0); + + err = ice_ena_vsi_txq(hw->port_info, vsi->idx, 0, 1, &txq_elem, + sizeof(txq_elem), NULL); + if (err) { + PMD_DRV_LOG(ERR, "Failed to add lan txq"); + return -EIO; + } + /* store the schedule node id */ + txq->q_teid = txq_elem.txqs[0].q_teid; + + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; +} + +/* Free all mbufs for descriptors in tx queue */ +static void +ice_tx_queue_release_mbufs(struct ice_tx_queue *txq) +{ + uint16_t i; + + if (!txq || !txq->sw_ring) { + PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL"); + return; + } + + for (i = 0; i < txq->nb_tx_desc; i++) { + if (txq->sw_ring[i].mbuf) { + rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); + txq->sw_ring[i].mbuf = NULL; + } + } +} + +static void +ice_reset_tx_queue(struct ice_tx_queue *txq) +{ + struct ice_tx_entry *txe; + uint16_t i, prev, size; + + if (!txq) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + txe = txq->sw_ring; + size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc; + for (i = 0; i < size; i++) + ((volatile char *)txq->tx_ring)[i] = 0; + + prev = (uint16_t)(txq->nb_tx_desc 
- 1); + for (i = 0; i < txq->nb_tx_desc; i++) { + volatile struct ice_tx_desc *txd = &txq->tx_ring[i]; + + txd->cmd_type_offset_bsz = + rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE); + txe[i].mbuf = NULL; + txe[i].last_id = i; + txe[prev].next_id = i; + prev = i; + } + + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); + + txq->tx_tail = 0; + txq->nb_tx_used = 0; + + txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1); +} + +int +ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct ice_tx_queue *txq; + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + enum ice_status status; + uint16_t q_ids[1]; + uint32_t q_teids[1]; + + if (tx_queue_id >= dev->data->nb_tx_queues) { + PMD_DRV_LOG(ERR, "TX queue %u is out of range %u", + tx_queue_id, dev->data->nb_tx_queues); + return -EINVAL; + } + + txq = dev->data->tx_queues[tx_queue_id]; + if (!txq) { + PMD_DRV_LOG(ERR, "TX queue %u is not available", + tx_queue_id); + return -EINVAL; + } + + q_ids[0] = txq->reg_idx; + q_teids[0] = txq->q_teid; + + status = ice_dis_vsi_txq(hw->port_info, 1, q_ids, q_teids, + ICE_NO_RESET, 0, NULL); + if (status != ICE_SUCCESS) { + PMD_DRV_LOG(DEBUG, "Failed to disable Lan Tx queue"); + return -EINVAL; + } + + ice_tx_queue_release_mbufs(txq); + ice_reset_tx_queue(txq); + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + return 0; +} + +int +ice_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_adapter *ad = + ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct ice_vsi *vsi = pf->main_vsi; + struct ice_rx_queue *rxq; + const struct rte_memzone *rz; + uint32_t ring_size; + uint16_t len; + int use_def_burst_func = 1; + + if (nb_desc % ICE_ALIGN_RING_DESC != 0 || + nb_desc > ICE_MAX_RING_DESC || + nb_desc < ICE_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is " + "invalid", nb_desc); + return -EINVAL; + } + + /* Free memory if needed */ + if (dev->data->rx_queues[queue_idx]) { + ice_rx_queue_release(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; + } + + /* Allocate the rx queue data structure */ + rxq = rte_zmalloc_socket(NULL, + sizeof(struct ice_rx_queue), + RTE_CACHE_LINE_SIZE, + socket_id); + if (!rxq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for " + "rx queue data structure"); + return -ENOMEM; + } + rxq->mp = mp; + rxq->nb_rx_desc = nb_desc; + rxq->rx_free_thresh = rx_conf->rx_free_thresh; + rxq->queue_id = queue_idx; + + rxq->reg_idx = vsi->base_queue + queue_idx; + rxq->port_id = dev->data->port_id; + if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = ETHER_CRC_LEN; + else + rxq->crc_len = 0; + + rxq->drop_en = rx_conf->rx_drop_en; + rxq->vsi = vsi; + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + + /* Allocate the maximun number of RX ring hardware descriptor. */ + len = ICE_MAX_RING_DESC; + +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + /** + * Allocating a little more memory because vectorized/bulk_alloc Rx + * functions doesn't check boundaries each time. + */ + len += ICE_RX_MAX_BURST; +#endif + + /* Allocate the maximum number of RX ring hardware descriptor. 
*/ + ring_size = sizeof(union ice_rx_desc) * len; + ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN); + rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, + ring_size, ICE_RING_BASE_ALIGN, + socket_id); + if (!rz) { + ice_rx_queue_release(rxq); + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX"); + return -ENOMEM; + } + + /* Zero all the descriptors in the ring. */ + memset(rz->addr, 0, ring_size); + + rxq->rx_ring_phys_addr = rz->phys_addr; + rxq->rx_ring = (union ice_rx_desc *)rz->addr; + +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST); +#else + len = nb_desc; +#endif + + /* Allocate the software ring. */ + rxq->sw_ring = rte_zmalloc_socket(NULL, + sizeof(struct ice_rx_entry) * len, + RTE_CACHE_LINE_SIZE, + socket_id); + if (!rxq->sw_ring) { + ice_rx_queue_release(rxq); + PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring"); + return -ENOMEM; + } + + ice_reset_rx_queue(rxq); + rxq->q_set = TRUE; + dev->data->rx_queues[queue_idx] = rxq; + + use_def_burst_func = ice_check_rx_burst_bulk_alloc_preconditions(rxq); + + if (!use_def_burst_func) { +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " + "satisfied. Rx Burst Bulk Alloc function will be " + "used on port=%d, queue=%d.", + rxq->port_id, rxq->queue_id); +#endif /* RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC */ + } else { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " + "not satisfied, Scattered Rx is requested, " + "or RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC is " + "not enabled on port=%d, queue=%d.", + rxq->port_id, rxq->queue_id); + ad->rx_bulk_alloc_allowed = false; + } + + return 0; +} + +void +ice_rx_queue_release(void *rxq) +{ + struct ice_rx_queue *q = (struct ice_rx_queue *)rxq; + + if (!q) { + PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL"); + return; + } + + ice_rx_queue_release_mbufs(q); + rte_free(q->sw_ring); + rte_free(q); +} + +int +ice_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) +{ + struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_vsi *vsi = pf->main_vsi; + struct ice_tx_queue *txq; + const struct rte_memzone *tz; + uint32_t ring_size; + uint16_t tx_rs_thresh, tx_free_thresh; + uint64_t offloads; + + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + + if (nb_desc % ICE_ALIGN_RING_DESC != 0 || + nb_desc > ICE_MAX_RING_DESC || + nb_desc < ICE_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is " + "invalid", nb_desc); + return -EINVAL; + } + + /** + * The following two parameters control the setting of the RS bit on + * transmit descriptors. TX descriptors will have their RS bit set + * after txq->tx_rs_thresh descriptors have been used. The TX + * descriptor ring will be cleaned after txq->tx_free_thresh + * descriptors are used or if the number of descriptors required to + * transmit a packet is greater than the number of free TX descriptors. + * + * The following constraints must be satisfied: + * - tx_rs_thresh must be greater than 0. + * - tx_rs_thresh must be less than the size of the ring minus 2. + * - tx_rs_thresh must be less than or equal to tx_free_thresh. + * - tx_rs_thresh must be a divisor of the ring size. + * - tx_free_thresh must be greater than 0. + * - tx_free_thresh must be less than the size of the ring minus 3. 
+ * + * One descriptor in the TX ring is used as a sentinel to avoid a H/W + * race condition, hence the maximum threshold constraints. When set + * to zero use default values. + */ + tx_rs_thresh = (uint16_t)(tx_conf->tx_rs_thresh ? + tx_conf->tx_rs_thresh : + ICE_DEFAULT_TX_RSBIT_THRESH); + tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ? + tx_conf->tx_free_thresh : + ICE_DEFAULT_TX_FREE_THRESH); + if (tx_rs_thresh >= (nb_desc - 2)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the " + "number of TX descriptors minus 2. " + "(tx_rs_thresh=%u port=%d queue=%d)", + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return -EINVAL; + } + if (tx_free_thresh >= (nb_desc - 3)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the " + "tx_free_thresh must be less than the " + "number of TX descriptors minus 3. " + "(tx_free_thresh=%u port=%d queue=%d)", + (unsigned int)tx_free_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return -EINVAL; + } + if (tx_rs_thresh > tx_free_thresh) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or " + "equal to tx_free_thresh. (tx_free_thresh=%u" + " tx_rs_thresh=%u port=%d queue=%d)", + (unsigned int)tx_free_thresh, + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return -EINVAL; + } + if ((nb_desc % tx_rs_thresh) != 0) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the " + "number of TX descriptors. (tx_rs_thresh=%u" + " port=%d queue=%d)", + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return -EINVAL; + } + if (tx_rs_thresh > 1 && tx_conf->tx_thresh.wthresh != 0) { + PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if " + "tx_rs_thresh is greater than 1. " + "(tx_rs_thresh=%u port=%d queue=%d)", + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return -EINVAL; + } + + /* Free memory if needed. */ + if (dev->data->tx_queues[queue_idx]) { + ice_tx_queue_release(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; + } + + /* Allocate the TX queue data structure. */ + txq = rte_zmalloc_socket(NULL, + sizeof(struct ice_tx_queue), + RTE_CACHE_LINE_SIZE, + socket_id); + if (!txq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for " + "tx queue structure"); + return -ENOMEM; + } + + /* Allocate TX hardware ring descriptors. 
*/ + ring_size = sizeof(struct ice_tx_desc) * ICE_MAX_RING_DESC; + ring_size = RTE_ALIGN(ring_size, ICE_DMA_MEM_ALIGN); + tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, + ring_size, ICE_RING_BASE_ALIGN, + socket_id); + if (!tz) { + ice_tx_queue_release(txq); + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX"); + return -ENOMEM; + } + + txq->nb_tx_desc = nb_desc; + txq->tx_rs_thresh = tx_rs_thresh; + txq->tx_free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + + txq->reg_idx = vsi->base_queue + queue_idx; + txq->port_id = dev->data->port_id; + txq->offloads = offloads; + txq->vsi = vsi; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + + txq->tx_ring_phys_addr = tz->phys_addr; + txq->tx_ring = (struct ice_tx_desc *)tz->addr; + + /* Allocate software ring */ + txq->sw_ring = + rte_zmalloc_socket(NULL, + sizeof(struct ice_tx_entry) * nb_desc, + RTE_CACHE_LINE_SIZE, + socket_id); + if (!txq->sw_ring) { + ice_tx_queue_release(txq); + PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring"); + return -ENOMEM; + } + + ice_reset_tx_queue(txq); + txq->q_set = TRUE; + dev->data->tx_queues[queue_idx] = txq; + + return 0; +} + +void +ice_tx_queue_release(void *txq) +{ + struct ice_tx_queue *q = (struct ice_tx_queue *)txq; + + if (!q) { + PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL"); + return; + } + + ice_tx_queue_release_mbufs(q); + rte_free(q->sw_ring); + rte_free(q); +} + +void +ice_clear_queues(struct rte_eth_dev *dev) +{ + uint16_t i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + ice_tx_queue_release_mbufs(dev->data->tx_queues[i]); + ice_reset_tx_queue(dev->data->tx_queues[i]); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + ice_rx_queue_release_mbufs(dev->data->rx_queues[i]); + ice_reset_rx_queue(dev->data->rx_queues[i]); + } +} + +void +ice_free_queues(struct rte_eth_dev *dev) +{ + uint16_t i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + if (!dev->data->rx_queues[i]) + continue; + ice_rx_queue_release(dev->data->rx_queues[i]); + dev->data->rx_queues[i] = NULL; + } + dev->data->nb_rx_queues = 0; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + if (!dev->data->tx_queues[i]) + continue; + ice_tx_queue_release(dev->data->tx_queues[i]); + dev->data->tx_queues[i] = NULL; + } + dev->data->nb_tx_queues = 0; +} diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h new file mode 100644 index 0000000..088a206 --- /dev/null +++ b/drivers/net/ice/ice_rxtx.h @@ -0,0 +1,137 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Intel Corporation + */ + +#ifndef _ICE_RXTX_H_ +#define _ICE_RXTX_H_ + +#include "ice_ethdev.h" + +#define ICE_ALIGN_RING_DESC 32 +#define ICE_MIN_RING_DESC 64 +#define ICE_MAX_RING_DESC 4096 +#define ICE_DMA_MEM_ALIGN 4096 +#define ICE_RING_BASE_ALIGN 128 + +#define ICE_RX_MAX_BURST 32 +#define ICE_TX_MAX_BURST 32 + +#define ICE_CHK_Q_ENA_COUNT 100 +#define ICE_CHK_Q_ENA_INTERVAL_US 100 + +#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC +#define ice_rx_desc ice_16byte_rx_desc +#else +#define ice_rx_desc ice_32byte_rx_desc +#endif + +#define ICE_SUPPORT_CHAIN_NUM 5 + +struct ice_rx_entry { + struct rte_mbuf *mbuf; +}; + +struct ice_rx_queue { + struct rte_mempool *mp; /* mbuf pool to populate RX ring */ + volatile union ice_rx_desc *rx_ring;/* RX ring virtual address */ + uint64_t rx_ring_phys_addr; /* RX ring 
DMA address */ + struct ice_rx_entry *sw_ring; /* address of RX soft ring */ + uint16_t nb_rx_desc; /* number of RX descriptors */ + uint16_t rx_free_thresh; /* max free RX desc to hold */ + uint16_t rx_tail; /* current value of tail */ + uint16_t nb_rx_hold; /* number of held free RX desc */ + struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */ + struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */ +#ifdef RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC + uint16_t rx_nb_avail; /**< number of staged packets ready */ + uint16_t rx_next_avail; /**< index of next staged packets */ + uint16_t rx_free_trigger; /**< triggers rx buffer allocation */ + struct rte_mbuf fake_mbuf; /**< dummy mbuf */ + struct rte_mbuf *rx_stage[ICE_RX_MAX_BURST * 2]; +#endif + uint8_t port_id; /* device port ID */ + uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */ + uint16_t queue_id; /* RX queue index */ + uint16_t reg_idx; /* RX queue register index */ + uint8_t drop_en; /* if not 0, set register bit */ + volatile uint8_t *qrx_tail; /* register address of tail */ + struct ice_vsi *vsi; /* the VSI this queue belongs to */ + uint16_t rx_buf_len; /* The packet buffer size */ + uint16_t rx_hdr_len; /* The header buffer size */ + uint16_t max_pkt_len; /* Maximum packet length */ + bool q_set; /* indicate if rx queue has been configured */ + bool rx_deferred_start; /* don't start this queue in dev start */ +}; + +struct ice_tx_entry { + struct rte_mbuf *mbuf; + uint16_t next_id; + uint16_t last_id; +}; + +struct ice_tx_queue { + uint16_t nb_tx_desc; /* number of TX descriptors */ + uint64_t tx_ring_phys_addr; /* TX ring DMA address */ + volatile struct ice_tx_desc *tx_ring; /* TX ring virtual address */ + struct ice_tx_entry *sw_ring; /* virtual address of SW ring */ + uint16_t tx_tail; /* current value of tail register */ + volatile uint8_t *qtx_tail; /* register address of tail */ + uint16_t nb_tx_used; /* number of TX desc used since RS bit set */ + /* index to last TX descriptor to have been cleaned */ + uint16_t last_desc_cleaned; + /* Total number of TX descriptors ready to be allocated. */ + uint16_t nb_tx_free; + /* Start freeing TX buffers if there are less free descriptors than + * this value. + */ + uint16_t tx_free_thresh; + /* Number of TX descriptors to use before RS bit is set. */ + uint16_t tx_rs_thresh; + uint8_t pthresh; /**< Prefetch threshold register. */ + uint8_t hthresh; /**< Host threshold register. */ + uint8_t wthresh; /**< Write-back threshold reg. */ + uint8_t port_id; /* Device port identifier. */ + uint16_t queue_id; /* TX queue index. */ + uint32_t q_teid; /* TX schedule node id. */ + uint16_t reg_idx; + uint64_t offloads; + struct ice_vsi *vsi; /* the VSI this queue belongs to */ + uint16_t tx_next_dd; + uint16_t tx_next_rs; + bool tx_deferred_start; /* don't start this queue in dev start */ + bool q_set; /* indicate if tx queue has been configured */ +}; + +/* Offload features */ +union ice_tx_offload { + uint64_t data; + struct { + uint64_t l2_len:7; /* L2 (MAC) Header Length. */ + uint64_t l3_len:9; /* L3 (IP) Header Length. */ + uint64_t l4_len:8; /* L4 Header Length. 
*/ + uint64_t tso_segsz:16; /* TCP TSO segment size */ + uint64_t outer_l2_len:8; /* outer L2 Header Length */ + uint64_t outer_l3_len:16; /* outer L3 Header Length */ + }; +}; + +int ice_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int ice_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_txconf *tx_conf); +int ice_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); +int ice_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); +int ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); +int ice_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); +void ice_rx_queue_release(void *rxq); +void ice_tx_queue_release(void *txq); +void ice_clear_queues(struct rte_eth_dev *dev); +void ice_free_queues(struct rte_eth_dev *dev); +#endif /* _ICE_RXTX_H_ */ diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build index 9ed7b27..857dc0e 100644 --- a/drivers/net/ice/meson.build +++ b/drivers/net/ice/meson.build @@ -5,7 +5,8 @@ subdir('base') objs = [base_objs] sources = files( - 'ice_ethdev.c' + 'ice_ethdev.c', + 'ice_rxtx.c' ) deps += ['hash']
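For reference, below is a minimal usage sketch (not part of the patch) showing how an application reaches the new ops through the generic ethdev API: rte_eth_dev_configure() lands in ice_dev_configure(), the queue setup calls land in ice_rx_queue_setup()/ice_tx_queue_setup(), and rte_eth_dev_start()/rte_eth_dev_stop()/rte_eth_dev_close() land in ice_dev_start()/ice_dev_stop()/ice_dev_close(). The EXAMPLE_* names, the queue/descriptor counts and the mostly zeroed rte_eth_conf are illustrative assumptions, not values taken from the patch.

/* Hypothetical usage sketch -- not part of the patch. */
#include <rte_ethdev.h>
#include <rte_mempool.h>

#define EXAMPLE_NB_RXQ   1
#define EXAMPLE_NB_TXQ   1
#define EXAMPLE_NB_DESC  1024 /* multiple of ICE_ALIGN_RING_DESC (32) */

static int
example_port_start(uint16_t port_id, struct rte_mempool *mb_pool)
{
	/* zeroed defaults; only the max Rx frame size is set explicitly */
	struct rte_eth_conf conf = {
		.rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN },
	};
	uint16_t q;
	int ret;

	/* -> ice_dev_configure() */
	ret = rte_eth_dev_configure(port_id, EXAMPLE_NB_RXQ, EXAMPLE_NB_TXQ,
				    &conf);
	if (ret != 0)
		return ret;

	/* -> ice_rx_queue_setup(): one HW Rx ring plus SW ring per queue */
	for (q = 0; q < EXAMPLE_NB_RXQ; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, EXAMPLE_NB_DESC,
					     rte_eth_dev_socket_id(port_id),
					     NULL /* default rxconf */,
					     mb_pool);
		if (ret != 0)
			return ret;
	}

	/* -> ice_tx_queue_setup() */
	for (q = 0; q < EXAMPLE_NB_TXQ; q++) {
		ret = rte_eth_tx_queue_setup(port_id, q, EXAMPLE_NB_DESC,
					     rte_eth_dev_socket_id(port_id),
					     NULL /* default txconf */);
		if (ret != 0)
			return ret;
	}

	/* -> ice_dev_start(): programs and enables every Rx/Tx queue */
	return rte_eth_dev_start(port_id);
}

static void
example_port_stop(uint16_t port_id)
{
	/* -> ice_dev_stop(): disables the queues and frees their mbufs */
	rte_eth_dev_stop(port_id);
	/* -> ice_dev_close(): releases the queue data structures */
	rte_eth_dev_close(port_id);
}

Individual queues can also be started and stopped at runtime with rte_eth_dev_rx_queue_start()/rte_eth_dev_rx_queue_stop() and their Tx counterparts, which map onto the new rx_queue_start/rx_queue_stop and tx_queue_start/tx_queue_stop ops added by this patch.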