From patchwork Mon Feb 25 17:18:52 2019
X-Patchwork-Submitter: Leyi Rong
X-Patchwork-Id: 50487
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Leyi Rong
To: ferruh.yigit@intel.com, jingjing.wu@intel.com, wenzhuo.lu@intel.com,
 qi.z.zhang.intel.com@dpdk.org
Cc: dev@dpdk.org, Leyi Rong
Date: Tue, 26 Feb 2019 01:18:52 +0800
Message-Id: <20190225171853.4643-3-leyi.rong@intel.com>
In-Reply-To: <20190225171853.4643-1-leyi.rong@intel.com>
References: <20190222150336.22299-2-leyi.rong@intel.com>
 <20190225171853.4643-1-leyi.rong@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/3] net/iavf: rename remaining avf strings
List-Id: DPDK patches and discussions

This is the main patch: it renames the macros, functions, structs, and any
remaining strings in the iavf code.
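A rename of this breadth is usually generated rather than typed by hand. A minimal sketch of the kind of substitution involved, using GNU sed (illustrative only, not the script actually used to produce this patch):

```shell
# Illustrative only -- not the author's actual tooling.
# The bulk of an avf -> iavf rename can be done mechanically; the \b word
# boundary keeps an already-renamed iavf_/IAVF_ identifier (preceded by a
# word character) from being rewritten a second time.
rename_avf() {
    sed -e 's/\bAVF_/IAVF_/g' \
        -e 's/\bavf_/iavf_/g'
}

echo 'ret_code = avf_init_asq(hw);' | rename_avf
# -> ret_code = iavf_init_asq(hw);
```

In practice such a script would be run over every file under drivers/net/iavf/, with manual follow-up for prose strings like "AVF PMD" that a pure identifier rewrite does not catch.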
Signed-off-by: Leyi Rong
---
 config/common_base                      |   16 +-
 drivers/net/Makefile                    |    2 +-
 drivers/net/iavf/Makefile               |   12 +-
 drivers/net/iavf/base/README            |    8 +-
 drivers/net/iavf/base/iavf_adminq.c     |  572 +++---
 drivers/net/iavf/base/iavf_adminq.h     |  110 +-
 drivers/net/iavf/base/iavf_adminq_cmd.h | 2512 +++++++++++------------
 drivers/net/iavf/base/iavf_alloc.h      |   46 +-
 drivers/net/iavf/base/iavf_common.c     | 1694 +++++++--------
 drivers/net/iavf/base/iavf_devids.h     |   10 +-
 drivers/net/iavf/base/iavf_hmc.h        |  186 +-
 drivers/net/iavf/base/iavf_lan_hmc.h    |  132 +-
 drivers/net/iavf/base/iavf_osdep.h      |   80 +-
 drivers/net/iavf/base/iavf_prototype.h  |  210 +-
 drivers/net/iavf/base/iavf_register.h   |  618 +++---
 drivers/net/iavf/base/iavf_status.h     |  142 +-
 drivers/net/iavf/base/iavf_type.h       | 2266 ++++++++++----------
 drivers/net/iavf/base/virtchnl.h        |   10 +-
 drivers/net/iavf/iavf.h                 |  174 +-
 drivers/net/iavf/iavf_ethdev.c          |  714 +++----
 drivers/net/iavf/iavf_log.h             |   20 +-
 drivers/net/iavf/iavf_rxtx.c            |  582 +++---
 drivers/net/iavf/iavf_rxtx.h            |  162 +-
 drivers/net/iavf/iavf_rxtx_vec_common.h |   26 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c    |  110 +-
 drivers/net/iavf/iavf_vchnl.c           |  244 +--
 drivers/net/iavf/meson.build            |    2 +-
 mk/rte.app.mk                           |    2 +-
 28 files changed, 5331 insertions(+), 5331 deletions(-)

diff --git a/config/common_base b/config/common_base
index 7c6da5165..0b09a9348 100644
--- a/config/common_base
+++ b/config/common_base
@@ -306,14 +306,14 @@ CONFIG_RTE_LIBRTE_ICE_DEBUG_TX_FREE=n
 CONFIG_RTE_LIBRTE_ICE_RX_ALLOW_BULK_ALLOC=y
 CONFIG_RTE_LIBRTE_ICE_16BYTE_RX_DESC=n
 
-# Compile burst-oriented AVF PMD driver
-#
-CONFIG_RTE_LIBRTE_AVF_PMD=y
-CONFIG_RTE_LIBRTE_AVF_INC_VECTOR=y
-CONFIG_RTE_LIBRTE_AVF_DEBUG_TX=n
-CONFIG_RTE_LIBRTE_AVF_DEBUG_TX_FREE=n
-CONFIG_RTE_LIBRTE_AVF_DEBUG_RX=n
-CONFIG_RTE_LIBRTE_AVF_16BYTE_RX_DESC=n
+# Compile burst-oriented IAVF PMD driver
+#
+CONFIG_RTE_LIBRTE_IAVF_PMD=y
+CONFIG_RTE_LIBRTE_IAVF_INC_VECTOR=y
+CONFIG_RTE_LIBRTE_IAVF_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_IAVF_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_IAVF_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC=n
 
 #
 # Compile burst-oriented Mellanox ConnectX-3 (MLX4) PMD
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index dea4b0c64..502869a87 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -29,7 +29,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += failsafe
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
 DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
-DIRS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += iavf
+DIRS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf
 DIRS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice
 DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe
 DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio
diff --git a/drivers/net/iavf/Makefile b/drivers/net/iavf/Makefile
index 29ff3c2f3..3a0eb79ca 100644
--- a/drivers/net/iavf/Makefile
+++ b/drivers/net/iavf/Makefile
@@ -41,14 +41,14 @@ VPATH += $(SRCDIR)/base
 #
 # all source are stored in SRCS-y
 #
-SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += iavf_adminq.c
-SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += iavf_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_common.c
 
-SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += iavf_ethdev.c
-SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += iavf_vchnl.c
-SRCS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += iavf_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_vchnl.c
+SRCS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += iavf_rxtx.c
 ifeq ($(CONFIG_RTE_ARCH_X86), y)
-SRCS-$(CONFIG_RTE_LIBRTE_AVF_INC_VECTOR) += iavf_rxtx_vec_sse.c
+SRCS-$(CONFIG_RTE_LIBRTE_IAVF_INC_VECTOR) += iavf_rxtx_vec_sse.c
 endif
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/iavf/base/README b/drivers/net/iavf/base/README
index 4710ae271..f57e1048f 100644
--- a/drivers/net/iavf/base/README
+++ b/drivers/net/iavf/base/README
@@ -2,12 +2,12 @@
  * Copyright(c) 2017 Intel Corporation
  */
 
-Intel® AVF driver
+Intel® IAVF driver
 =================
 
-This directory contains source code of FreeBSD AVF driver of version
+This directory contains source code of FreeBSD IAVF driver of version
 cid-avf.2018.01.02.tar.gz released by the team which develops
-basic drivers for any AVF NIC. The directory of base/ contains the
+basic drivers for any IAVF NIC. The directory of base/ contains the
 original source package.
 
 Updating the driver
@@ -16,4 +16,4 @@ Updating the driver
 NOTE: The source code in this directory should not be modified apart from
 the following file(s):
 
-	avf_osdep.h
+	iavf_osdep.h
diff --git a/drivers/net/iavf/base/iavf_adminq.c b/drivers/net/iavf/base/iavf_adminq.c
index 9d16f0f61..036c34087 100644
--- a/drivers/net/iavf/base/iavf_adminq.c
+++ b/drivers/net/iavf/base/iavf_adminq.c
@@ -38,49 +38,49 @@ POSSIBILITY OF SUCH DAMAGE.
 #include "iavf_prototype.h"
 
 /**
- * avf_adminq_init_regs - Initialize AdminQ registers
+ * iavf_adminq_init_regs - Initialize AdminQ registers
  * @hw: pointer to the hardware structure
  *
  * This assumes the alloc_asq and alloc_arq functions have already been called
  **/
-STATIC void avf_adminq_init_regs(struct avf_hw *hw)
+STATIC void iavf_adminq_init_regs(struct iavf_hw *hw)
 {
        /* set head and tail registers in our local struct */
-       if (avf_is_vf(hw)) {
-               hw->aq.asq.tail = AVF_ATQT1;
-               hw->aq.asq.head = AVF_ATQH1;
-               hw->aq.asq.len = AVF_ATQLEN1;
-               hw->aq.asq.bal = AVF_ATQBAL1;
-               hw->aq.asq.bah = AVF_ATQBAH1;
-               hw->aq.arq.tail = AVF_ARQT1;
-               hw->aq.arq.head = AVF_ARQH1;
-               hw->aq.arq.len = AVF_ARQLEN1;
-               hw->aq.arq.bal = AVF_ARQBAL1;
-               hw->aq.arq.bah = AVF_ARQBAH1;
+       if (iavf_is_vf(hw)) {
+               hw->aq.asq.tail = IAVF_ATQT1;
+               hw->aq.asq.head = IAVF_ATQH1;
+               hw->aq.asq.len = IAVF_ATQLEN1;
+               hw->aq.asq.bal = IAVF_ATQBAL1;
+               hw->aq.asq.bah = IAVF_ATQBAH1;
+               hw->aq.arq.tail = IAVF_ARQT1;
+               hw->aq.arq.head = IAVF_ARQH1;
+               hw->aq.arq.len = IAVF_ARQLEN1;
+               hw->aq.arq.bal = IAVF_ARQBAL1;
+               hw->aq.arq.bah = IAVF_ARQBAH1;
        }
 }
 
 /**
- * avf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ * iavf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
  * @hw: pointer to the hardware structure
  **/
-enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
+enum iavf_status_code iavf_alloc_adminq_asq_ring(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code;
+       enum iavf_status_code ret_code;
 
-       ret_code = avf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
-                                       avf_mem_atq_ring,
+       ret_code = iavf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+                                        iavf_mem_atq_ring,
                                        (hw->aq.num_asq_entries *
-                                       sizeof(struct avf_aq_desc)),
-                                       AVF_ADMINQ_DESC_ALIGNMENT);
+                                       sizeof(struct iavf_aq_desc)),
+                                       IAVF_ADMINQ_DESC_ALIGNMENT);
        if (ret_code)
                return ret_code;
 
-       ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+       ret_code = iavf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
                                         (hw->aq.num_asq_entries *
-                                        sizeof(struct avf_asq_cmd_details)));
+                                        sizeof(struct iavf_asq_cmd_details)));
        if (ret_code) {
-               avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+               iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
                return ret_code;
        }
 
@@ -88,55 +88,55 @@ enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw)
 }
 
 /**
- * avf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ * iavf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
  * @hw: pointer to the hardware structure
  **/
-enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw)
+enum iavf_status_code iavf_alloc_adminq_arq_ring(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code;
+       enum iavf_status_code ret_code;
 
-       ret_code = avf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
-                                       avf_mem_arq_ring,
+       ret_code = iavf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+                                        iavf_mem_arq_ring,
                                        (hw->aq.num_arq_entries *
-                                       sizeof(struct avf_aq_desc)),
-                                       AVF_ADMINQ_DESC_ALIGNMENT);
+                                       sizeof(struct iavf_aq_desc)),
+                                       IAVF_ADMINQ_DESC_ALIGNMENT);
 
        return ret_code;
 }
 
 /**
- * avf_free_adminq_asq - Free Admin Queue send rings
+ * iavf_free_adminq_asq - Free Admin Queue send rings
  * @hw: pointer to the hardware structure
  *
  * This assumes the posted send buffers have already been cleaned
  * and de-allocated
  **/
-void avf_free_adminq_asq(struct avf_hw *hw)
+void iavf_free_adminq_asq(struct iavf_hw *hw)
 {
-       avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+       iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
 }
 
 /**
- * avf_free_adminq_arq - Free Admin Queue receive rings
+ * iavf_free_adminq_arq - Free Admin Queue receive rings
  * @hw: pointer to the hardware structure
  *
  * This assumes the posted receive buffers have already been cleaned
  * and de-allocated
  **/
-void avf_free_adminq_arq(struct avf_hw *hw)
+void iavf_free_adminq_arq(struct iavf_hw *hw)
 {
-       avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+       iavf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
 }
 
 /**
- * avf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ * iavf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
  * @hw: pointer to the hardware structure
  **/
-STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
+STATIC enum iavf_status_code iavf_alloc_arq_bufs(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code;
-       struct avf_aq_desc *desc;
-       struct avf_dma_mem *bi;
+       enum iavf_status_code ret_code;
+       struct iavf_aq_desc *desc;
+       struct iavf_dma_mem *bi;
        int i;
 
        /* We'll be allocating the buffer info memory first, then we can
@@ -144,28 +144,28 @@ STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
        */
 
        /* buffer_info structures do not need alignment */
-       ret_code = avf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
-               (hw->aq.num_arq_entries * sizeof(struct avf_dma_mem)));
+       ret_code = iavf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+               (hw->aq.num_arq_entries * sizeof(struct iavf_dma_mem)));
        if (ret_code)
                goto alloc_arq_bufs;
-       hw->aq.arq.r.arq_bi = (struct avf_dma_mem *)hw->aq.arq.dma_head.va;
+       hw->aq.arq.r.arq_bi = (struct iavf_dma_mem *)hw->aq.arq.dma_head.va;
 
        /* allocate the mapped buffers */
        for (i = 0; i < hw->aq.num_arq_entries; i++) {
                bi = &hw->aq.arq.r.arq_bi[i];
-               ret_code = avf_allocate_dma_mem(hw, bi,
-                                               avf_mem_arq_buf,
+               ret_code = iavf_allocate_dma_mem(hw, bi,
+                                                iavf_mem_arq_buf,
                                                hw->aq.arq_buf_size,
-                                               AVF_ADMINQ_DESC_ALIGNMENT);
+                                               IAVF_ADMINQ_DESC_ALIGNMENT);
                if (ret_code)
                        goto unwind_alloc_arq_bufs;
 
                /* now configure the descriptors for use */
-               desc = AVF_ADMINQ_DESC(hw->aq.arq, i);
+               desc = IAVF_ADMINQ_DESC(hw->aq.arq, i);
 
-               desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF);
-               if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF)
-                       desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB);
+               desc->flags = CPU_TO_LE16(IAVF_AQ_FLAG_BUF);
+               if (hw->aq.arq_buf_size > IAVF_AQ_LARGE_BUF)
+                       desc->flags |= CPU_TO_LE16(IAVF_AQ_FLAG_LB);
                desc->opcode = 0;
                /* This is in accordance with Admin queue design, there is no
                 * register for buffer size configuration
@@ -175,9 +175,9 @@ STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
                desc->cookie_high = 0;
                desc->cookie_low = 0;
                desc->params.external.addr_high =
-                       CPU_TO_LE32(AVF_HI_DWORD(bi->pa));
+                       CPU_TO_LE32(IAVF_HI_DWORD(bi->pa));
                desc->params.external.addr_low =
-                       CPU_TO_LE32(AVF_LO_DWORD(bi->pa));
+                       CPU_TO_LE32(IAVF_LO_DWORD(bi->pa));
                desc->params.external.param0 = 0;
                desc->params.external.param1 = 0;
        }
 
@@ -189,36 +189,36 @@ STATIC enum avf_status_code avf_alloc_arq_bufs(struct avf_hw *hw)
        /* don't try to free the one that failed... */
        i--;
        for (; i >= 0; i--)
-               avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
-       avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+               iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+       iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
 
        return ret_code;
 }
 
 /**
- * avf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ * iavf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
  * @hw: pointer to the hardware structure
  **/
-STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
+STATIC enum iavf_status_code iavf_alloc_asq_bufs(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code;
-       struct avf_dma_mem *bi;
+       enum iavf_status_code ret_code;
+       struct iavf_dma_mem *bi;
        int i;
 
        /* No mapped memory needed yet, just the buffer info structures */
-       ret_code = avf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
-               (hw->aq.num_asq_entries * sizeof(struct avf_dma_mem)));
+       ret_code = iavf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+               (hw->aq.num_asq_entries * sizeof(struct iavf_dma_mem)));
        if (ret_code)
                goto alloc_asq_bufs;
-       hw->aq.asq.r.asq_bi = (struct avf_dma_mem *)hw->aq.asq.dma_head.va;
+       hw->aq.asq.r.asq_bi = (struct iavf_dma_mem *)hw->aq.asq.dma_head.va;
 
        /* allocate the mapped buffers */
        for (i = 0; i < hw->aq.num_asq_entries; i++) {
                bi = &hw->aq.asq.r.asq_bi[i];
-               ret_code = avf_allocate_dma_mem(hw, bi,
-                                               avf_mem_asq_buf,
+               ret_code = iavf_allocate_dma_mem(hw, bi,
+                                                iavf_mem_asq_buf,
                                                hw->aq.asq_buf_size,
-                                               AVF_ADMINQ_DESC_ALIGNMENT);
+                                               IAVF_ADMINQ_DESC_ALIGNMENT);
                if (ret_code)
                        goto unwind_alloc_asq_bufs;
        }
@@ -229,63 +229,63 @@ STATIC enum avf_status_code avf_alloc_asq_bufs(struct avf_hw *hw)
        /* don't try to free the one that failed... */
        i--;
        for (; i >= 0; i--)
-               avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
-       avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+               iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+       iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
 
        return ret_code;
 }
 
 /**
- * avf_free_arq_bufs - Free receive queue buffer info elements
+ * iavf_free_arq_bufs - Free receive queue buffer info elements
  * @hw: pointer to the hardware structure
  **/
-STATIC void avf_free_arq_bufs(struct avf_hw *hw)
+STATIC void iavf_free_arq_bufs(struct iavf_hw *hw)
 {
        int i;
 
        /* free descriptors */
        for (i = 0; i < hw->aq.num_arq_entries; i++)
-               avf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+               iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
 
        /* free the descriptor memory */
-       avf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+       iavf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
 
        /* free the dma header */
-       avf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+       iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
 }
 
 /**
- * avf_free_asq_bufs - Free send queue buffer info elements
+ * iavf_free_asq_bufs - Free send queue buffer info elements
  * @hw: pointer to the hardware structure
  **/
-STATIC void avf_free_asq_bufs(struct avf_hw *hw)
+STATIC void iavf_free_asq_bufs(struct iavf_hw *hw)
 {
        int i;
 
        /* only unmap if the address is non-NULL */
        for (i = 0; i < hw->aq.num_asq_entries; i++)
                if (hw->aq.asq.r.asq_bi[i].pa)
-                       avf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+                       iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
 
        /* free the buffer info list */
-       avf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+       iavf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
 
        /* free the descriptor memory */
-       avf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+       iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
 
        /* free the dma header */
-       avf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+       iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
 }
 
 /**
- * avf_config_asq_regs - configure ASQ registers
+ * iavf_config_asq_regs - configure ASQ registers
  * @hw: pointer to the hardware structure
  *
  * Configure base address and length registers for the transmit queue
  **/
-STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
+STATIC enum iavf_status_code iavf_config_asq_regs(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code = AVF_SUCCESS;
+       enum iavf_status_code ret_code = IAVF_SUCCESS;
        u32 reg = 0;
 
        /* Clear Head and Tail */
@@ -294,33 +294,33 @@ STATIC enum avf_status_code avf_config_asq_regs(struct avf_hw *hw)
        /* set starting point */
 #ifdef INTEGRATED_VF
-       if (avf_is_vf(hw))
+       if (iavf_is_vf(hw))
                wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
-                                         AVF_ATQLEN1_ATQENABLE_MASK));
+                                         IAVF_ATQLEN1_ATQENABLE_MASK));
 #else
        wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
-                                 AVF_ATQLEN1_ATQENABLE_MASK));
+                                 IAVF_ATQLEN1_ATQENABLE_MASK));
 #endif /* INTEGRATED_VF */
-       wr32(hw, hw->aq.asq.bal, AVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
-       wr32(hw, hw->aq.asq.bah, AVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+       wr32(hw, hw->aq.asq.bal, IAVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+       wr32(hw, hw->aq.asq.bah, IAVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
 
        /* Check one register to verify that config was applied */
        reg = rd32(hw, hw->aq.asq.bal);
-       if (reg != AVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
-               ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+       if (reg != IAVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+               ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR;
 
        return ret_code;
 }
 
 /**
- * avf_config_arq_regs - ARQ register configuration
+ * iavf_config_arq_regs - ARQ register configuration
  * @hw: pointer to the hardware structure
  *
  * Configure base address and length registers for the receive (event queue)
  **/
-STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
+STATIC enum iavf_status_code iavf_config_arq_regs(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code = AVF_SUCCESS;
+       enum iavf_status_code ret_code = IAVF_SUCCESS;
        u32 reg = 0;
 
        /* Clear Head and Tail */
@@ -329,29 +329,29 @@ STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
        /* set starting point */
 #ifdef INTEGRATED_VF
-       if (avf_is_vf(hw))
+       if (iavf_is_vf(hw))
                wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
-                                         AVF_ARQLEN1_ARQENABLE_MASK));
+                                         IAVF_ARQLEN1_ARQENABLE_MASK));
 #else
        wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
-                                 AVF_ARQLEN1_ARQENABLE_MASK));
+                                 IAVF_ARQLEN1_ARQENABLE_MASK));
 #endif /* INTEGRATED_VF */
-       wr32(hw, hw->aq.arq.bal, AVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
-       wr32(hw, hw->aq.arq.bah, AVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+       wr32(hw, hw->aq.arq.bal, IAVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+       wr32(hw, hw->aq.arq.bah, IAVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
 
        /* Update tail in the HW to post pre-allocated buffers */
        wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
 
        /* Check one register to verify that config was applied */
        reg = rd32(hw, hw->aq.arq.bal);
-       if (reg != AVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
-               ret_code = AVF_ERR_ADMIN_QUEUE_ERROR;
+       if (reg != IAVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+               ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR;
 
        return ret_code;
 }
 
 /**
- * avf_init_asq - main initialization routine for ASQ
+ * iavf_init_asq - main initialization routine for ASQ
  * @hw: pointer to the hardware structure
  *
  * This is the main initialization routine for the Admin Send Queue
@@ -363,20 +363,20 @@ STATIC enum avf_status_code avf_config_arq_regs(struct avf_hw *hw)
  * Do *NOT* hold the lock when calling this as the memory allocation routines
  * called are not going to be atomic context safe
  **/
-enum avf_status_code avf_init_asq(struct avf_hw *hw)
+enum iavf_status_code iavf_init_asq(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code = AVF_SUCCESS;
+       enum iavf_status_code ret_code = IAVF_SUCCESS;
 
        if (hw->aq.asq.count > 0) {
                /* queue already initialized */
-               ret_code = AVF_ERR_NOT_READY;
+               ret_code = IAVF_ERR_NOT_READY;
                goto init_adminq_exit;
        }
 
        /* verify input for valid configuration */
        if ((hw->aq.num_asq_entries == 0) ||
           (hw->aq.asq_buf_size == 0)) {
-               ret_code = AVF_ERR_CONFIG;
+               ret_code = IAVF_ERR_CONFIG;
                goto init_adminq_exit;
        }
 
@@ -384,18 +384,18 @@ enum avf_status_code avf_init_asq(struct avf_hw *hw)
        hw->aq.asq.next_to_clean = 0;
 
        /* allocate the ring memory */
-       ret_code = avf_alloc_adminq_asq_ring(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_alloc_adminq_asq_ring(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_exit;
 
        /* allocate buffers in the rings */
-       ret_code = avf_alloc_asq_bufs(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_alloc_asq_bufs(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_free_rings;
 
        /* initialize base registers */
-       ret_code = avf_config_asq_regs(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_config_asq_regs(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_free_rings;
 
        /* success! */
@@ -403,14 +403,14 @@ enum avf_status_code avf_init_asq(struct avf_hw *hw)
        goto init_adminq_exit;
 
 init_adminq_free_rings:
-       avf_free_adminq_asq(hw);
+       iavf_free_adminq_asq(hw);
 
 init_adminq_exit:
        return ret_code;
 }
 
 /**
- * avf_init_arq - initialize ARQ
+ * iavf_init_arq - initialize ARQ
  * @hw: pointer to the hardware structure
  *
  * The main initialization routine for the Admin Receive (Event) Queue.
@@ -422,20 +422,20 @@ enum avf_status_code avf_init_asq(struct avf_hw *hw)
  * Do *NOT* hold the lock when calling this as the memory allocation routines
  * called are not going to be atomic context safe
  **/
-enum avf_status_code avf_init_arq(struct avf_hw *hw)
+enum iavf_status_code iavf_init_arq(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code = AVF_SUCCESS;
+       enum iavf_status_code ret_code = IAVF_SUCCESS;
 
        if (hw->aq.arq.count > 0) {
                /* queue already initialized */
-               ret_code = AVF_ERR_NOT_READY;
+               ret_code = IAVF_ERR_NOT_READY;
                goto init_adminq_exit;
        }
 
        /* verify input for valid configuration */
        if ((hw->aq.num_arq_entries == 0) ||
           (hw->aq.arq_buf_size == 0)) {
-               ret_code = AVF_ERR_CONFIG;
+               ret_code = IAVF_ERR_CONFIG;
                goto init_adminq_exit;
        }
 
@@ -443,18 +443,18 @@ enum avf_status_code avf_init_arq(struct avf_hw *hw)
        hw->aq.arq.next_to_clean = 0;
 
        /* allocate the ring memory */
-       ret_code = avf_alloc_adminq_arq_ring(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_alloc_adminq_arq_ring(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_exit;
 
        /* allocate buffers in the rings */
-       ret_code = avf_alloc_arq_bufs(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_alloc_arq_bufs(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_free_rings;
 
        /* initialize base registers */
-       ret_code = avf_config_arq_regs(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_config_arq_regs(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_free_rings;
 
        /* success! */
@@ -462,26 +462,26 @@ enum avf_status_code avf_init_arq(struct avf_hw *hw)
        goto init_adminq_exit;
 
 init_adminq_free_rings:
-       avf_free_adminq_arq(hw);
+       iavf_free_adminq_arq(hw);
 
 init_adminq_exit:
        return ret_code;
 }
 
 /**
- * avf_shutdown_asq - shutdown the ASQ
+ * iavf_shutdown_asq - shutdown the ASQ
  * @hw: pointer to the hardware structure
  *
  * The main shutdown routine for the Admin Send Queue
  **/
-enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
+enum iavf_status_code iavf_shutdown_asq(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code = AVF_SUCCESS;
+       enum iavf_status_code ret_code = IAVF_SUCCESS;
 
-       avf_acquire_spinlock(&hw->aq.asq_spinlock);
+       iavf_acquire_spinlock(&hw->aq.asq_spinlock);
 
        if (hw->aq.asq.count == 0) {
-               ret_code = AVF_ERR_NOT_READY;
+               ret_code = IAVF_ERR_NOT_READY;
                goto shutdown_asq_out;
        }
 
@@ -495,27 +495,27 @@ enum avf_status_code avf_shutdown_asq(struct avf_hw *hw)
        hw->aq.asq.count = 0; /* to indicate uninitialized queue */
 
        /* free ring buffers */
-       avf_free_asq_bufs(hw);
+       iavf_free_asq_bufs(hw);
 
 shutdown_asq_out:
-       avf_release_spinlock(&hw->aq.asq_spinlock);
+       iavf_release_spinlock(&hw->aq.asq_spinlock);
        return ret_code;
 }
 
 /**
- * avf_shutdown_arq - shutdown ARQ
+ * iavf_shutdown_arq - shutdown ARQ
  * @hw: pointer to the hardware structure
  *
  * The main shutdown routine for the Admin Receive Queue
  **/
-enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
+enum iavf_status_code iavf_shutdown_arq(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code = AVF_SUCCESS;
+       enum iavf_status_code ret_code = IAVF_SUCCESS;
 
-       avf_acquire_spinlock(&hw->aq.arq_spinlock);
+       iavf_acquire_spinlock(&hw->aq.arq_spinlock);
 
        if (hw->aq.arq.count == 0) {
-               ret_code = AVF_ERR_NOT_READY;
+               ret_code = IAVF_ERR_NOT_READY;
                goto shutdown_arq_out;
        }
 
@@ -529,15 +529,15 @@ enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
        hw->aq.arq.count = 0; /* to indicate uninitialized queue */
 
        /* free ring buffers */
-       avf_free_arq_bufs(hw);
+       iavf_free_arq_bufs(hw);
 
 shutdown_arq_out:
-       avf_release_spinlock(&hw->aq.arq_spinlock);
+       iavf_release_spinlock(&hw->aq.arq_spinlock);
        return ret_code;
 }
 
 /**
- * avf_init_adminq - main initialization routine for Admin Queue
+ * iavf_init_adminq - main initialization routine for Admin Queue
  * @hw: pointer to the hardware structure
  *
  * Prior to calling this function, drivers *MUST* set the following fields
@@ -547,123 +547,123 @@ enum avf_status_code avf_shutdown_arq(struct avf_hw *hw)
  *  - hw->aq.arq_buf_size
  *  - hw->aq.asq_buf_size
  **/
-enum avf_status_code avf_init_adminq(struct avf_hw *hw)
+enum iavf_status_code iavf_init_adminq(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code;
+       enum iavf_status_code ret_code;
 
        /* verify input for valid configuration */
        if ((hw->aq.num_arq_entries == 0) ||
           (hw->aq.num_asq_entries == 0) ||
           (hw->aq.arq_buf_size == 0) ||
           (hw->aq.asq_buf_size == 0)) {
-               ret_code = AVF_ERR_CONFIG;
+               ret_code = IAVF_ERR_CONFIG;
                goto init_adminq_exit;
        }
-       avf_init_spinlock(&hw->aq.asq_spinlock);
-       avf_init_spinlock(&hw->aq.arq_spinlock);
+       iavf_init_spinlock(&hw->aq.asq_spinlock);
+       iavf_init_spinlock(&hw->aq.arq_spinlock);
 
        /* Set up register offsets */
-       avf_adminq_init_regs(hw);
+       iavf_adminq_init_regs(hw);
 
        /* setup ASQ command write back timeout */
-       hw->aq.asq_cmd_timeout = AVF_ASQ_CMD_TIMEOUT;
+       hw->aq.asq_cmd_timeout = IAVF_ASQ_CMD_TIMEOUT;
 
        /* allocate the ASQ */
-       ret_code = avf_init_asq(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_init_asq(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_destroy_spinlocks;
 
        /* allocate the ARQ */
-       ret_code = avf_init_arq(hw);
-       if (ret_code != AVF_SUCCESS)
+       ret_code = iavf_init_arq(hw);
+       if (ret_code != IAVF_SUCCESS)
                goto init_adminq_free_asq;
 
-       ret_code = AVF_SUCCESS;
+       ret_code = IAVF_SUCCESS;
 
        /* success! */
        goto init_adminq_exit;
 
 init_adminq_free_asq:
-       avf_shutdown_asq(hw);
+       iavf_shutdown_asq(hw);
 init_adminq_destroy_spinlocks:
-       avf_destroy_spinlock(&hw->aq.asq_spinlock);
-       avf_destroy_spinlock(&hw->aq.arq_spinlock);
+       iavf_destroy_spinlock(&hw->aq.asq_spinlock);
+       iavf_destroy_spinlock(&hw->aq.arq_spinlock);
 
 init_adminq_exit:
        return ret_code;
 }
 
 /**
- * avf_shutdown_adminq - shutdown routine for the Admin Queue
+ * iavf_shutdown_adminq - shutdown routine for the Admin Queue
  * @hw: pointer to the hardware structure
  **/
-enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw)
+enum iavf_status_code iavf_shutdown_adminq(struct iavf_hw *hw)
 {
-       enum avf_status_code ret_code = AVF_SUCCESS;
+       enum iavf_status_code ret_code = IAVF_SUCCESS;
 
-       if (avf_check_asq_alive(hw))
-               avf_aq_queue_shutdown(hw, true);
+       if (iavf_check_asq_alive(hw))
+               iavf_aq_queue_shutdown(hw, true);
 
-       avf_shutdown_asq(hw);
-       avf_shutdown_arq(hw);
-       avf_destroy_spinlock(&hw->aq.asq_spinlock);
-       avf_destroy_spinlock(&hw->aq.arq_spinlock);
+       iavf_shutdown_asq(hw);
+       iavf_shutdown_arq(hw);
+       iavf_destroy_spinlock(&hw->aq.asq_spinlock);
+       iavf_destroy_spinlock(&hw->aq.arq_spinlock);
 
        if (hw->nvm_buff.va)
-               avf_free_virt_mem(hw, &hw->nvm_buff);
+               iavf_free_virt_mem(hw, &hw->nvm_buff);
 
        return ret_code;
 }
 
 /**
- * avf_clean_asq - cleans Admin send queue
+ * iavf_clean_asq - cleans Admin send queue
  * @hw: pointer to the hardware structure
  *
  * returns the number of free desc
  **/
-u16 avf_clean_asq(struct avf_hw *hw)
+u16 iavf_clean_asq(struct iavf_hw *hw)
 {
-       struct avf_adminq_ring *asq = &(hw->aq.asq);
-       struct avf_asq_cmd_details *details;
+       struct iavf_adminq_ring *asq = &(hw->aq.asq);
+       struct iavf_asq_cmd_details *details;
        u16 ntc = asq->next_to_clean;
-       struct avf_aq_desc desc_cb;
-       struct avf_aq_desc *desc;
+       struct iavf_aq_desc desc_cb;
+       struct iavf_aq_desc *desc;
 
-       desc = AVF_ADMINQ_DESC(*asq, ntc);
-       details = AVF_ADMINQ_DETAILS(*asq, ntc);
+       desc = IAVF_ADMINQ_DESC(*asq, ntc);
+       details = IAVF_ADMINQ_DETAILS(*asq, ntc);
        while (rd32(hw, hw->aq.asq.head) != ntc) {
-               avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+               iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
                          "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
 
                if (details->callback) {
-                       AVF_ADMINQ_CALLBACK cb_func =
-                               (AVF_ADMINQ_CALLBACK)details->callback;
-                       avf_memcpy(&desc_cb, desc, sizeof(struct avf_aq_desc),
-                                  AVF_DMA_TO_DMA);
+                       IAVF_ADMINQ_CALLBACK cb_func =
+                               (IAVF_ADMINQ_CALLBACK)details->callback;
+                       iavf_memcpy(&desc_cb, desc, sizeof(struct iavf_aq_desc),
+                                   IAVF_DMA_TO_DMA);
                        cb_func(hw, &desc_cb);
                }
-               avf_memset(desc, 0, sizeof(*desc), AVF_DMA_MEM);
-               avf_memset(details, 0, sizeof(*details), AVF_NONDMA_MEM);
+               iavf_memset(desc, 0, sizeof(*desc), IAVF_DMA_MEM);
+               iavf_memset(details, 0, sizeof(*details), IAVF_NONDMA_MEM);
                ntc++;
                if (ntc == asq->count)
                        ntc = 0;
-               desc = AVF_ADMINQ_DESC(*asq, ntc);
-               details = AVF_ADMINQ_DETAILS(*asq, ntc);
+               desc = IAVF_ADMINQ_DESC(*asq, ntc);
+               details = IAVF_ADMINQ_DETAILS(*asq, ntc);
        }
 
        asq->next_to_clean = ntc;
 
-       return AVF_DESC_UNUSED(asq);
+       return IAVF_DESC_UNUSED(asq);
 }
 
 /**
- * avf_asq_done - check if FW has processed the Admin Send Queue
+ * iavf_asq_done - check if FW has processed the Admin Send Queue
  * @hw: pointer to the hw struct
  *
  * Returns true if the firmware has processed all descriptors on the
  * admin send queue. Returns false if there are still requests pending.
  **/
-bool avf_asq_done(struct avf_hw *hw)
+bool iavf_asq_done(struct iavf_hw *hw)
 {
        /* AQ designers suggest use of head for better
         * timing reliability than DD bit
@@ -673,7 +673,7 @@ bool avf_asq_done(struct avf_hw *hw)
 }
 
 /**
- * avf_asq_send_command - send command to Admin Queue
+ * iavf_asq_send_command - send command to Admin Queue
  * @hw: pointer to the hw struct
  * @desc: prefilled descriptor describing the command (non DMA mem)
  * @buff: buffer to use for indirect commands
@@ -683,45 +683,45 @@ bool avf_asq_done(struct avf_hw *hw)
  * This is the main send command driver routine for the Admin Queue send
  * queue. It runs the queue, cleans the queue, etc
  **/
-enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
-                                         struct avf_aq_desc *desc,
+enum iavf_status_code iavf_asq_send_command(struct iavf_hw *hw,
+                                           struct iavf_aq_desc *desc,
                                          void *buff, /* can be NULL */
                                          u16 buff_size,
-                                         struct avf_asq_cmd_details *cmd_details)
+                                         struct iavf_asq_cmd_details *cmd_details)
 {
-       enum avf_status_code status = AVF_SUCCESS;
-       struct avf_dma_mem *dma_buff = NULL;
-       struct avf_asq_cmd_details *details;
-       struct avf_aq_desc *desc_on_ring;
+       enum iavf_status_code status = IAVF_SUCCESS;
+       struct iavf_dma_mem *dma_buff = NULL;
+       struct iavf_asq_cmd_details *details;
+       struct iavf_aq_desc *desc_on_ring;
        bool cmd_completed = false;
        u16 retval = 0;
        u32 val = 0;
 
-       avf_acquire_spinlock(&hw->aq.asq_spinlock);
+       iavf_acquire_spinlock(&hw->aq.asq_spinlock);
 
-       hw->aq.asq_last_status = AVF_AQ_RC_OK;
+       hw->aq.asq_last_status = IAVF_AQ_RC_OK;
 
        if (hw->aq.asq.count == 0) {
-               avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+               iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
                          "AQTX: Admin queue not initialized.\n");
-               status = AVF_ERR_QUEUE_EMPTY;
+               status = IAVF_ERR_QUEUE_EMPTY;
                goto asq_send_command_error;
        }
 
        val = rd32(hw, hw->aq.asq.head);
        if (val >= hw->aq.num_asq_entries) {
-               avf_debug(hw, AVF_DEBUG_AQ_MESSAGE,
+               iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
                          "AQTX: head overrun at %d\n", val);
-               status = AVF_ERR_QUEUE_EMPTY;
+               status = IAVF_ERR_QUEUE_EMPTY;
                goto asq_send_command_error;
        }
 
-       details = AVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+       details = IAVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
        if (cmd_details) {
-               avf_memcpy(details,
+               iavf_memcpy(details,
                           cmd_details,
-                          sizeof(struct avf_asq_cmd_details),
-                          AVF_NONDMA_TO_NONDMA);
+                          sizeof(struct iavf_asq_cmd_details),
+                          IAVF_NONDMA_TO_NONDMA);
 
                /* If the cmd_details are defined copy the cookie. The
                 * CPU_TO_LE32 is not needed here because the data is ignored
@@ -729,14 +729,14 @@ enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
                 */
                if (details->cookie) {
                        desc->cookie_high =
-                               CPU_TO_LE32(AVF_HI_DWORD(details->cookie));
+                               CPU_TO_LE32(IAVF_HI_DWORD(details->cookie));
                        desc->cookie_low =
-                               CPU_TO_LE32(AVF_LO_DWORD(details->cookie));
+                               CPU_TO_LE32(IAVF_LO_DWORD(details->cookie));
                }
        } else {
-               avf_memset(details, 0,
-                          sizeof(struct avf_asq_cmd_details),
-                          AVF_NONDMA_MEM);
+               iavf_memset(details, 0,
+                           sizeof(struct iavf_asq_cmd_details),
+                           IAVF_NONDMA_MEM);
        }
 
        /* clear requested flags and then set additional flags if defined */
@@ -744,19 +744,19 @@ enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
        desc->flags |= CPU_TO_LE16(details->flags_ena);
 
        if (buff_size > hw->aq.asq_buf_size) {
-               avf_debug(hw,
-                         AVF_DEBUG_AQ_MESSAGE,
+               iavf_debug(hw,
+                          IAVF_DEBUG_AQ_MESSAGE,
                          "AQTX: Invalid buffer size: %d.\n",
                          buff_size);
-               status = AVF_ERR_INVALID_SIZE;
+               status = IAVF_ERR_INVALID_SIZE;
                goto asq_send_command_error;
        }
 
        if (details->postpone && !details->async) {
-               avf_debug(hw,
-                         AVF_DEBUG_AQ_MESSAGE,
+               iavf_debug(hw,
+                          IAVF_DEBUG_AQ_MESSAGE,
                          "AQTX: Async flag not set along with postpone flag");
-               status = AVF_ERR_PARAM;
+               status = IAVF_ERR_PARAM;
                goto asq_send_command_error;
        }
 
@@ -767,41 +767,41 @@ enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
        /* the clean function called here could be called in a separate thread
         * in case of asynchronous completions
         */
-       if (avf_clean_asq(hw) == 0) {
-               avf_debug(hw,
-                         AVF_DEBUG_AQ_MESSAGE,
+       if (iavf_clean_asq(hw) == 0) {
+               iavf_debug(hw,
+                          IAVF_DEBUG_AQ_MESSAGE,
                          "AQTX: Error queue is full.\n");
-               status = AVF_ERR_ADMIN_QUEUE_FULL;
+               status = IAVF_ERR_ADMIN_QUEUE_FULL;
                goto asq_send_command_error;
        }
 
       /* initialize the temp desc pointer with the right desc */
-       desc_on_ring = AVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+       desc_on_ring = IAVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
 
        /* if the desc is available copy the temp desc to the right place */
-       avf_memcpy(desc_on_ring, desc, sizeof(struct avf_aq_desc),
-                  AVF_NONDMA_TO_DMA);
+       iavf_memcpy(desc_on_ring, desc, sizeof(struct iavf_aq_desc),
+                   IAVF_NONDMA_TO_DMA);
 
        /* if buff is not NULL assume indirect command */
        if (buff != NULL) {
                dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
                /* copy the user buff into the respective DMA buff */
-               avf_memcpy(dma_buff->va, buff, buff_size,
-                          AVF_NONDMA_TO_DMA);
+               iavf_memcpy(dma_buff->va, buff, buff_size,
+                           IAVF_NONDMA_TO_DMA);
                desc_on_ring->datalen = CPU_TO_LE16(buff_size);
 
                /* Update the address values in the desc with the pa value
                 * for respective buffer
                 */
                desc_on_ring->params.external.addr_high =
-                       CPU_TO_LE32(AVF_HI_DWORD(dma_buff->pa));
+                       CPU_TO_LE32(IAVF_HI_DWORD(dma_buff->pa));
                desc_on_ring->params.external.addr_low =
-                       CPU_TO_LE32(AVF_LO_DWORD(dma_buff->pa));
+                       CPU_TO_LE32(IAVF_LO_DWORD(dma_buff->pa));
        }
 
        /* bump the tail */
-       avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
-       avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+       iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+       iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
                     buff, buff_size);
        (hw->aq.asq.next_to_use)++;
        if (hw->aq.asq.next_to_use == hw->aq.asq.count)
@@ -819,24 +819,24 @@ enum avf_status_code avf_asq_send_command(struct avf_hw *hw,
        /* AQ designers suggest use of head for better
         * timing reliability than DD bit
         */
-       if (avf_asq_done(hw))
+
if (iavf_asq_done(hw)) break; - avf_usec_delay(50); + iavf_usec_delay(50); total_delay += 50; } while (total_delay < hw->aq.asq_cmd_timeout); } /* if ready, copy the desc back to temp */ - if (avf_asq_done(hw)) { - avf_memcpy(desc, desc_on_ring, sizeof(struct avf_aq_desc), - AVF_DMA_TO_NONDMA); + if (iavf_asq_done(hw)) { + iavf_memcpy(desc, desc_on_ring, sizeof(struct iavf_aq_desc), + IAVF_DMA_TO_NONDMA); if (buff != NULL) - avf_memcpy(buff, dma_buff->va, buff_size, - AVF_DMA_TO_NONDMA); + iavf_memcpy(buff, dma_buff->va, buff_size, + IAVF_DMA_TO_NONDMA); retval = LE16_TO_CPU(desc->retval); if (retval != 0) { - avf_debug(hw, - AVF_DEBUG_AQ_MESSAGE, + iavf_debug(hw, + IAVF_DEBUG_AQ_MESSAGE, "AQTX: Command completed with error 0x%X.\n", retval); @@ -844,60 +844,60 @@ enum avf_status_code avf_asq_send_command(struct avf_hw *hw, retval &= 0xff; } cmd_completed = true; - if ((enum avf_admin_queue_err)retval == AVF_AQ_RC_OK) - status = AVF_SUCCESS; + if ((enum iavf_admin_queue_err)retval == IAVF_AQ_RC_OK) + status = IAVF_SUCCESS; else - status = AVF_ERR_ADMIN_QUEUE_ERROR; - hw->aq.asq_last_status = (enum avf_admin_queue_err)retval; + status = IAVF_ERR_ADMIN_QUEUE_ERROR; + hw->aq.asq_last_status = (enum iavf_admin_queue_err)retval; } - avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, + iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer writeback:\n"); - avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size); + iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size); /* save writeback aq if requested */ if (details->wb_desc) - avf_memcpy(details->wb_desc, desc_on_ring, - sizeof(struct avf_aq_desc), AVF_DMA_TO_NONDMA); + iavf_memcpy(details->wb_desc, desc_on_ring, + sizeof(struct iavf_aq_desc), IAVF_DMA_TO_NONDMA); /* update the error if time out occurred */ if ((!cmd_completed) && (!details->async && !details->postpone)) { - if (rd32(hw, hw->aq.asq.len) & AVF_ATQLEN1_ATQCRIT_MASK) { - avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, + if (rd32(hw, 
hw->aq.asq.len) & IAVF_ATQLEN1_ATQCRIT_MASK) { + iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQTX: AQ Critical error.\n"); - status = AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR; + status = IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR; } else { - avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, + iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQTX: Writeback timeout.\n"); - status = AVF_ERR_ADMIN_QUEUE_TIMEOUT; + status = IAVF_ERR_ADMIN_QUEUE_TIMEOUT; } } asq_send_command_error: - avf_release_spinlock(&hw->aq.asq_spinlock); + iavf_release_spinlock(&hw->aq.asq_spinlock); return status; } /** - * avf_fill_default_direct_cmd_desc - AQ descriptor helper function + * iavf_fill_default_direct_cmd_desc - AQ descriptor helper function * @desc: pointer to the temp descriptor (non DMA mem) * @opcode: the opcode can be used to decide which flags to turn off or on * * Fill the desc with default values **/ -void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc, +void iavf_fill_default_direct_cmd_desc(struct iavf_aq_desc *desc, u16 opcode) { /* zero out the desc */ - avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), - AVF_NONDMA_MEM); + iavf_memset((void *)desc, 0, sizeof(struct iavf_aq_desc), + IAVF_NONDMA_MEM); desc->opcode = CPU_TO_LE16(opcode); - desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_SI); + desc->flags = CPU_TO_LE16(IAVF_AQ_FLAG_SI); } /** - * avf_clean_arq_element + * iavf_clean_arq_element * @hw: pointer to the hw struct * @e: event info from the receive descriptor, includes any buffers * @pending: number of events that could be left to process @@ -906,73 +906,73 @@ void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc, * the contents through e. 
It can also return how many events are * left to process through 'pending' **/ -enum avf_status_code avf_clean_arq_element(struct avf_hw *hw, - struct avf_arq_event_info *e, +enum iavf_status_code iavf_clean_arq_element(struct iavf_hw *hw, + struct iavf_arq_event_info *e, u16 *pending) { - enum avf_status_code ret_code = AVF_SUCCESS; + enum iavf_status_code ret_code = IAVF_SUCCESS; u16 ntc = hw->aq.arq.next_to_clean; - struct avf_aq_desc *desc; - struct avf_dma_mem *bi; + struct iavf_aq_desc *desc; + struct iavf_dma_mem *bi; u16 desc_idx; u16 datalen; u16 flags; u16 ntu; /* pre-clean the event info */ - avf_memset(&e->desc, 0, sizeof(e->desc), AVF_NONDMA_MEM); + iavf_memset(&e->desc, 0, sizeof(e->desc), IAVF_NONDMA_MEM); /* take the lock before we start messing with the ring */ - avf_acquire_spinlock(&hw->aq.arq_spinlock); + iavf_acquire_spinlock(&hw->aq.arq_spinlock); if (hw->aq.arq.count == 0) { - avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, + iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQRX: Admin queue not initialized.\n"); - ret_code = AVF_ERR_QUEUE_EMPTY; + ret_code = IAVF_ERR_QUEUE_EMPTY; goto clean_arq_element_err; } /* set next_to_use to head */ #ifdef INTEGRATED_VF - if (!avf_is_vf(hw)) - ntu = rd32(hw, hw->aq.arq.head) & AVF_PF_ARQH_ARQH_MASK; + if (!iavf_is_vf(hw)) + ntu = rd32(hw, hw->aq.arq.head) & IAVF_PF_ARQH_ARQH_MASK; else - ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK; + ntu = rd32(hw, hw->aq.arq.head) & IAVF_ARQH1_ARQH_MASK; #else - ntu = rd32(hw, hw->aq.arq.head) & AVF_ARQH1_ARQH_MASK; + ntu = rd32(hw, hw->aq.arq.head) & IAVF_ARQH1_ARQH_MASK; #endif /* INTEGRATED_VF */ if (ntu == ntc) { /* nothing to do - shouldn't need to update ring's values */ - ret_code = AVF_ERR_ADMIN_QUEUE_NO_WORK; + ret_code = IAVF_ERR_ADMIN_QUEUE_NO_WORK; goto clean_arq_element_out; } /* now clean the next descriptor */ - desc = AVF_ADMINQ_DESC(hw->aq.arq, ntc); + desc = IAVF_ADMINQ_DESC(hw->aq.arq, ntc); desc_idx = ntc; hw->aq.arq_last_status = - (enum 
avf_admin_queue_err)LE16_TO_CPU(desc->retval); + (enum iavf_admin_queue_err)LE16_TO_CPU(desc->retval); flags = LE16_TO_CPU(desc->flags); - if (flags & AVF_AQ_FLAG_ERR) { - ret_code = AVF_ERR_ADMIN_QUEUE_ERROR; - avf_debug(hw, - AVF_DEBUG_AQ_MESSAGE, + if (flags & IAVF_AQ_FLAG_ERR) { + ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR; + iavf_debug(hw, + IAVF_DEBUG_AQ_MESSAGE, "AQRX: Event received with error 0x%X.\n", hw->aq.arq_last_status); } - avf_memcpy(&e->desc, desc, sizeof(struct avf_aq_desc), - AVF_DMA_TO_NONDMA); + iavf_memcpy(&e->desc, desc, sizeof(struct iavf_aq_desc), + IAVF_DMA_TO_NONDMA); datalen = LE16_TO_CPU(desc->datalen); e->msg_len = min(datalen, e->buf_len); if (e->msg_buf != NULL && (e->msg_len != 0)) - avf_memcpy(e->msg_buf, + iavf_memcpy(e->msg_buf, hw->aq.arq.r.arq_bi[desc_idx].va, - e->msg_len, AVF_DMA_TO_NONDMA); + e->msg_len, IAVF_DMA_TO_NONDMA); - avf_debug(hw, AVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n"); - avf_debug_aq(hw, AVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf, + iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n"); + iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf, hw->aq.arq_buf_size); /* Restore the original datalen and buffer address in the desc, @@ -980,14 +980,14 @@ enum avf_status_code avf_clean_arq_element(struct avf_hw *hw, * size */ bi = &hw->aq.arq.r.arq_bi[ntc]; - avf_memset((void *)desc, 0, sizeof(struct avf_aq_desc), AVF_DMA_MEM); + iavf_memset((void *)desc, 0, sizeof(struct iavf_aq_desc), IAVF_DMA_MEM); - desc->flags = CPU_TO_LE16(AVF_AQ_FLAG_BUF); - if (hw->aq.arq_buf_size > AVF_AQ_LARGE_BUF) - desc->flags |= CPU_TO_LE16(AVF_AQ_FLAG_LB); + desc->flags = CPU_TO_LE16(IAVF_AQ_FLAG_BUF); + if (hw->aq.arq_buf_size > IAVF_AQ_LARGE_BUF) + desc->flags |= CPU_TO_LE16(IAVF_AQ_FLAG_LB); desc->datalen = CPU_TO_LE16((u16)bi->size); - desc->params.external.addr_high = CPU_TO_LE32(AVF_HI_DWORD(bi->pa)); - desc->params.external.addr_low = CPU_TO_LE32(AVF_LO_DWORD(bi->pa)); + 
desc->params.external.addr_high = CPU_TO_LE32(IAVF_HI_DWORD(bi->pa)); + desc->params.external.addr_low = CPU_TO_LE32(IAVF_LO_DWORD(bi->pa)); /* set tail = the last cleaned desc index. */ wr32(hw, hw->aq.arq.tail, ntc); @@ -1003,7 +1003,7 @@ enum avf_status_code avf_clean_arq_element(struct avf_hw *hw, if (pending != NULL) *pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc); clean_arq_element_err: - avf_release_spinlock(&hw->aq.arq_spinlock); + iavf_release_spinlock(&hw->aq.arq_spinlock); return ret_code; } diff --git a/drivers/net/iavf/base/iavf_adminq.h b/drivers/net/iavf/base/iavf_adminq.h index ce72fb5a8..c6e7e852d 100644 --- a/drivers/net/iavf/base/iavf_adminq.h +++ b/drivers/net/iavf/base/iavf_adminq.h @@ -31,26 +31,26 @@ POSSIBILITY OF SUCH DAMAGE. ***************************************************************************/ -#ifndef _AVF_ADMINQ_H_ -#define _AVF_ADMINQ_H_ +#ifndef _IAVF_ADMINQ_H_ +#define _IAVF_ADMINQ_H_ #include "iavf_osdep.h" #include "iavf_status.h" #include "iavf_adminq_cmd.h" -#define AVF_ADMINQ_DESC(R, i) \ - (&(((struct avf_aq_desc *)((R).desc_buf.va))[i])) +#define IAVF_ADMINQ_DESC(R, i) \ + (&(((struct iavf_aq_desc *)((R).desc_buf.va))[i])) -#define AVF_ADMINQ_DESC_ALIGNMENT 4096 +#define IAVF_ADMINQ_DESC_ALIGNMENT 4096 -struct avf_adminq_ring { - struct avf_virt_mem dma_head; /* space for dma structures */ - struct avf_dma_mem desc_buf; /* descriptor ring memory */ - struct avf_virt_mem cmd_buf; /* command buffer memory */ +struct iavf_adminq_ring { + struct iavf_virt_mem dma_head; /* space for dma structures */ + struct iavf_dma_mem desc_buf; /* descriptor ring memory */ + struct iavf_virt_mem cmd_buf; /* command buffer memory */ union { - struct avf_dma_mem *asq_bi; - struct avf_dma_mem *arq_bi; + struct iavf_dma_mem *asq_bi; + struct iavf_dma_mem *arq_bi; } r; u16 count; /* Number of descriptors */ @@ -69,31 +69,31 @@ struct avf_adminq_ring { }; /* ASQ transaction details */ -struct avf_asq_cmd_details { - void *callback; 
/* cast from type AVF_ADMINQ_CALLBACK */ +struct iavf_asq_cmd_details { + void *callback; /* cast from type IAVF_ADMINQ_CALLBACK */ u64 cookie; u16 flags_ena; u16 flags_dis; bool async; bool postpone; - struct avf_aq_desc *wb_desc; + struct iavf_aq_desc *wb_desc; }; -#define AVF_ADMINQ_DETAILS(R, i) \ - (&(((struct avf_asq_cmd_details *)((R).cmd_buf.va))[i])) +#define IAVF_ADMINQ_DETAILS(R, i) \ + (&(((struct iavf_asq_cmd_details *)((R).cmd_buf.va))[i])) /* ARQ event information */ -struct avf_arq_event_info { - struct avf_aq_desc desc; +struct iavf_arq_event_info { + struct iavf_aq_desc desc; u16 msg_len; u16 buf_len; u8 *msg_buf; }; /* Admin Queue information */ -struct avf_adminq_info { - struct avf_adminq_ring arq; /* receive queue */ - struct avf_adminq_ring asq; /* send queue */ +struct iavf_adminq_info { + struct iavf_adminq_ring arq; /* receive queue */ + struct iavf_adminq_ring asq; /* send queue */ u32 asq_cmd_timeout; /* send queue cmd write back timeout*/ u16 num_arq_entries; /* receive queue depth */ u16 num_asq_entries; /* send queue depth */ @@ -105,49 +105,49 @@ struct avf_adminq_info { u16 api_maj_ver; /* api major version */ u16 api_min_ver; /* api minor version */ - struct avf_spinlock asq_spinlock; /* Send queue spinlock */ - struct avf_spinlock arq_spinlock; /* Receive queue spinlock */ + struct iavf_spinlock asq_spinlock; /* Send queue spinlock */ + struct iavf_spinlock arq_spinlock; /* Receive queue spinlock */ /* last status values on send and receive queues */ - enum avf_admin_queue_err asq_last_status; - enum avf_admin_queue_err arq_last_status; + enum iavf_admin_queue_err asq_last_status; + enum iavf_admin_queue_err arq_last_status; }; /** - * avf_aq_rc_to_posix - convert errors to user-land codes + * iavf_aq_rc_to_posix - convert errors to user-land codes * aq_ret: AdminQ handler error code can override aq_rc * aq_rc: AdminQ firmware error code to convert **/ -STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc) +STATIC INLINE int 
iavf_aq_rc_to_posix(int aq_ret, int aq_rc) { int aq_to_posix[] = { - 0, /* AVF_AQ_RC_OK */ - -EPERM, /* AVF_AQ_RC_EPERM */ - -ENOENT, /* AVF_AQ_RC_ENOENT */ - -ESRCH, /* AVF_AQ_RC_ESRCH */ - -EINTR, /* AVF_AQ_RC_EINTR */ - -EIO, /* AVF_AQ_RC_EIO */ - -ENXIO, /* AVF_AQ_RC_ENXIO */ - -E2BIG, /* AVF_AQ_RC_E2BIG */ - -EAGAIN, /* AVF_AQ_RC_EAGAIN */ - -ENOMEM, /* AVF_AQ_RC_ENOMEM */ - -EACCES, /* AVF_AQ_RC_EACCES */ - -EFAULT, /* AVF_AQ_RC_EFAULT */ - -EBUSY, /* AVF_AQ_RC_EBUSY */ - -EEXIST, /* AVF_AQ_RC_EEXIST */ - -EINVAL, /* AVF_AQ_RC_EINVAL */ - -ENOTTY, /* AVF_AQ_RC_ENOTTY */ - -ENOSPC, /* AVF_AQ_RC_ENOSPC */ - -ENOSYS, /* AVF_AQ_RC_ENOSYS */ - -ERANGE, /* AVF_AQ_RC_ERANGE */ - -EPIPE, /* AVF_AQ_RC_EFLUSHED */ - -ESPIPE, /* AVF_AQ_RC_BAD_ADDR */ - -EROFS, /* AVF_AQ_RC_EMODE */ - -EFBIG, /* AVF_AQ_RC_EFBIG */ + 0, /* IAVF_AQ_RC_OK */ + -EPERM, /* IAVF_AQ_RC_EPERM */ + -ENOENT, /* IAVF_AQ_RC_ENOENT */ + -ESRCH, /* IAVF_AQ_RC_ESRCH */ + -EINTR, /* IAVF_AQ_RC_EINTR */ + -EIO, /* IAVF_AQ_RC_EIO */ + -ENXIO, /* IAVF_AQ_RC_ENXIO */ + -E2BIG, /* IAVF_AQ_RC_E2BIG */ + -EAGAIN, /* IAVF_AQ_RC_EAGAIN */ + -ENOMEM, /* IAVF_AQ_RC_ENOMEM */ + -EACCES, /* IAVF_AQ_RC_EACCES */ + -EFAULT, /* IAVF_AQ_RC_EFAULT */ + -EBUSY, /* IAVF_AQ_RC_EBUSY */ + -EEXIST, /* IAVF_AQ_RC_EEXIST */ + -EINVAL, /* IAVF_AQ_RC_EINVAL */ + -ENOTTY, /* IAVF_AQ_RC_ENOTTY */ + -ENOSPC, /* IAVF_AQ_RC_ENOSPC */ + -ENOSYS, /* IAVF_AQ_RC_ENOSYS */ + -ERANGE, /* IAVF_AQ_RC_ERANGE */ + -EPIPE, /* IAVF_AQ_RC_EFLUSHED */ + -ESPIPE, /* IAVF_AQ_RC_BAD_ADDR */ + -EROFS, /* IAVF_AQ_RC_EMODE */ + -EFBIG, /* IAVF_AQ_RC_EFBIG */ }; /* aq_rc is invalid if AQ timed out */ - if (aq_ret == AVF_ERR_ADMIN_QUEUE_TIMEOUT) + if (aq_ret == IAVF_ERR_ADMIN_QUEUE_TIMEOUT) return -EAGAIN; if (!((u32)aq_rc < (sizeof(aq_to_posix) / sizeof((aq_to_posix)[0])))) @@ -157,10 +157,10 @@ STATIC INLINE int avf_aq_rc_to_posix(int aq_ret, int aq_rc) } /* general information */ -#define AVF_AQ_LARGE_BUF 512 -#define AVF_ASQ_CMD_TIMEOUT 250000 /* usecs 
*/ +#define IAVF_AQ_LARGE_BUF 512 +#define IAVF_ASQ_CMD_TIMEOUT 250000 /* usecs */ -void avf_fill_default_direct_cmd_desc(struct avf_aq_desc *desc, +void iavf_fill_default_direct_cmd_desc(struct iavf_aq_desc *desc, u16 opcode); -#endif /* _AVF_ADMINQ_H_ */ +#endif /* _IAVF_ADMINQ_H_ */ diff --git a/drivers/net/iavf/base/iavf_adminq_cmd.h b/drivers/net/iavf/base/iavf_adminq_cmd.h index 795491187..353feb3da 100644 --- a/drivers/net/iavf/base/iavf_adminq_cmd.h +++ b/drivers/net/iavf/base/iavf_adminq_cmd.h @@ -31,28 +31,28 @@ POSSIBILITY OF SUCH DAMAGE. ***************************************************************************/ -#ifndef _AVF_ADMINQ_CMD_H_ -#define _AVF_ADMINQ_CMD_H_ +#ifndef _IAVF_ADMINQ_CMD_H_ +#define _IAVF_ADMINQ_CMD_H_ -/* This header file defines the avf Admin Queue commands and is shared between - * avf Firmware and Software. +/* This header file defines the iavf Admin Queue commands and is shared between + * iavf Firmware and Software. * * This file needs to comply with the Linux Kernel coding style. */ -#define AVF_FW_API_VERSION_MAJOR 0x0001 -#define AVF_FW_API_VERSION_MINOR_X722 0x0005 -#define AVF_FW_API_VERSION_MINOR_X710 0x0007 +#define IAVF_FW_API_VERSION_MAJOR 0x0001 +#define IAVF_FW_API_VERSION_MINOR_X722 0x0005 +#define IAVF_FW_API_VERSION_MINOR_X710 0x0007 -#define AVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == AVF_MAC_XL710 ? \ - AVF_FW_API_VERSION_MINOR_X710 : \ - AVF_FW_API_VERSION_MINOR_X722) +#define IAVF_FW_MINOR_VERSION(_h) ((_h)->mac.type == IAVF_MAC_XL710 ? 
\ + IAVF_FW_API_VERSION_MINOR_X710 : \ + IAVF_FW_API_VERSION_MINOR_X722) /* API version 1.7 implements additional link and PHY-specific APIs */ -#define AVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007 +#define IAVF_MINOR_VER_GET_LINK_INFO_XL710 0x0007 -struct avf_aq_desc { +struct iavf_aq_desc { __le16 flags; __le16 opcode; __le16 datalen; @@ -82,242 +82,242 @@ struct avf_aq_desc { */ /* command flags and offsets*/ -#define AVF_AQ_FLAG_DD_SHIFT 0 -#define AVF_AQ_FLAG_CMP_SHIFT 1 -#define AVF_AQ_FLAG_ERR_SHIFT 2 -#define AVF_AQ_FLAG_VFE_SHIFT 3 -#define AVF_AQ_FLAG_LB_SHIFT 9 -#define AVF_AQ_FLAG_RD_SHIFT 10 -#define AVF_AQ_FLAG_VFC_SHIFT 11 -#define AVF_AQ_FLAG_BUF_SHIFT 12 -#define AVF_AQ_FLAG_SI_SHIFT 13 -#define AVF_AQ_FLAG_EI_SHIFT 14 -#define AVF_AQ_FLAG_FE_SHIFT 15 - -#define AVF_AQ_FLAG_DD (1 << AVF_AQ_FLAG_DD_SHIFT) /* 0x1 */ -#define AVF_AQ_FLAG_CMP (1 << AVF_AQ_FLAG_CMP_SHIFT) /* 0x2 */ -#define AVF_AQ_FLAG_ERR (1 << AVF_AQ_FLAG_ERR_SHIFT) /* 0x4 */ -#define AVF_AQ_FLAG_VFE (1 << AVF_AQ_FLAG_VFE_SHIFT) /* 0x8 */ -#define AVF_AQ_FLAG_LB (1 << AVF_AQ_FLAG_LB_SHIFT) /* 0x200 */ -#define AVF_AQ_FLAG_RD (1 << AVF_AQ_FLAG_RD_SHIFT) /* 0x400 */ -#define AVF_AQ_FLAG_VFC (1 << AVF_AQ_FLAG_VFC_SHIFT) /* 0x800 */ -#define AVF_AQ_FLAG_BUF (1 << AVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */ -#define AVF_AQ_FLAG_SI (1 << AVF_AQ_FLAG_SI_SHIFT) /* 0x2000 */ -#define AVF_AQ_FLAG_EI (1 << AVF_AQ_FLAG_EI_SHIFT) /* 0x4000 */ -#define AVF_AQ_FLAG_FE (1 << AVF_AQ_FLAG_FE_SHIFT) /* 0x8000 */ +#define IAVF_AQ_FLAG_DD_SHIFT 0 +#define IAVF_AQ_FLAG_CMP_SHIFT 1 +#define IAVF_AQ_FLAG_ERR_SHIFT 2 +#define IAVF_AQ_FLAG_VFE_SHIFT 3 +#define IAVF_AQ_FLAG_LB_SHIFT 9 +#define IAVF_AQ_FLAG_RD_SHIFT 10 +#define IAVF_AQ_FLAG_VFC_SHIFT 11 +#define IAVF_AQ_FLAG_BUF_SHIFT 12 +#define IAVF_AQ_FLAG_SI_SHIFT 13 +#define IAVF_AQ_FLAG_EI_SHIFT 14 +#define IAVF_AQ_FLAG_FE_SHIFT 15 + +#define IAVF_AQ_FLAG_DD (1 << IAVF_AQ_FLAG_DD_SHIFT) /* 0x1 */ +#define IAVF_AQ_FLAG_CMP (1 << IAVF_AQ_FLAG_CMP_SHIFT) /* 0x2 */ 
+#define IAVF_AQ_FLAG_ERR (1 << IAVF_AQ_FLAG_ERR_SHIFT) /* 0x4 */ +#define IAVF_AQ_FLAG_VFE (1 << IAVF_AQ_FLAG_VFE_SHIFT) /* 0x8 */ +#define IAVF_AQ_FLAG_LB (1 << IAVF_AQ_FLAG_LB_SHIFT) /* 0x200 */ +#define IAVF_AQ_FLAG_RD (1 << IAVF_AQ_FLAG_RD_SHIFT) /* 0x400 */ +#define IAVF_AQ_FLAG_VFC (1 << IAVF_AQ_FLAG_VFC_SHIFT) /* 0x800 */ +#define IAVF_AQ_FLAG_BUF (1 << IAVF_AQ_FLAG_BUF_SHIFT) /* 0x1000 */ +#define IAVF_AQ_FLAG_SI (1 << IAVF_AQ_FLAG_SI_SHIFT) /* 0x2000 */ +#define IAVF_AQ_FLAG_EI (1 << IAVF_AQ_FLAG_EI_SHIFT) /* 0x4000 */ +#define IAVF_AQ_FLAG_FE (1 << IAVF_AQ_FLAG_FE_SHIFT) /* 0x8000 */ /* error codes */ -enum avf_admin_queue_err { - AVF_AQ_RC_OK = 0, /* success */ - AVF_AQ_RC_EPERM = 1, /* Operation not permitted */ - AVF_AQ_RC_ENOENT = 2, /* No such element */ - AVF_AQ_RC_ESRCH = 3, /* Bad opcode */ - AVF_AQ_RC_EINTR = 4, /* operation interrupted */ - AVF_AQ_RC_EIO = 5, /* I/O error */ - AVF_AQ_RC_ENXIO = 6, /* No such resource */ - AVF_AQ_RC_E2BIG = 7, /* Arg too long */ - AVF_AQ_RC_EAGAIN = 8, /* Try again */ - AVF_AQ_RC_ENOMEM = 9, /* Out of memory */ - AVF_AQ_RC_EACCES = 10, /* Permission denied */ - AVF_AQ_RC_EFAULT = 11, /* Bad address */ - AVF_AQ_RC_EBUSY = 12, /* Device or resource busy */ - AVF_AQ_RC_EEXIST = 13, /* object already exists */ - AVF_AQ_RC_EINVAL = 14, /* Invalid argument */ - AVF_AQ_RC_ENOTTY = 15, /* Not a typewriter */ - AVF_AQ_RC_ENOSPC = 16, /* No space left or alloc failure */ - AVF_AQ_RC_ENOSYS = 17, /* Function not implemented */ - AVF_AQ_RC_ERANGE = 18, /* Parameter out of range */ - AVF_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */ - AVF_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */ - AVF_AQ_RC_EMODE = 21, /* Op not allowed in current dev mode */ - AVF_AQ_RC_EFBIG = 22, /* File too large */ +enum iavf_admin_queue_err { + IAVF_AQ_RC_OK = 0, /* success */ + IAVF_AQ_RC_EPERM = 1, /* Operation not permitted */ + IAVF_AQ_RC_ENOENT = 2, /* No such element */ + IAVF_AQ_RC_ESRCH = 3, /* Bad opcode */ + 
IAVF_AQ_RC_EINTR = 4, /* operation interrupted */ + IAVF_AQ_RC_EIO = 5, /* I/O error */ + IAVF_AQ_RC_ENXIO = 6, /* No such resource */ + IAVF_AQ_RC_E2BIG = 7, /* Arg too long */ + IAVF_AQ_RC_EAGAIN = 8, /* Try again */ + IAVF_AQ_RC_ENOMEM = 9, /* Out of memory */ + IAVF_AQ_RC_EACCES = 10, /* Permission denied */ + IAVF_AQ_RC_EFAULT = 11, /* Bad address */ + IAVF_AQ_RC_EBUSY = 12, /* Device or resource busy */ + IAVF_AQ_RC_EEXIST = 13, /* object already exists */ + IAVF_AQ_RC_EINVAL = 14, /* Invalid argument */ + IAVF_AQ_RC_ENOTTY = 15, /* Not a typewriter */ + IAVF_AQ_RC_ENOSPC = 16, /* No space left or alloc failure */ + IAVF_AQ_RC_ENOSYS = 17, /* Function not implemented */ + IAVF_AQ_RC_ERANGE = 18, /* Parameter out of range */ + IAVF_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */ + IAVF_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */ + IAVF_AQ_RC_EMODE = 21, /* Op not allowed in current dev mode */ + IAVF_AQ_RC_EFBIG = 22, /* File too large */ }; /* Admin Queue command opcodes */ -enum avf_admin_queue_opc { +enum iavf_admin_queue_opc { /* aq commands */ - avf_aqc_opc_get_version = 0x0001, - avf_aqc_opc_driver_version = 0x0002, - avf_aqc_opc_queue_shutdown = 0x0003, - avf_aqc_opc_set_pf_context = 0x0004, + iavf_aqc_opc_get_version = 0x0001, + iavf_aqc_opc_driver_version = 0x0002, + iavf_aqc_opc_queue_shutdown = 0x0003, + iavf_aqc_opc_set_pf_context = 0x0004, /* resource ownership */ - avf_aqc_opc_request_resource = 0x0008, - avf_aqc_opc_release_resource = 0x0009, + iavf_aqc_opc_request_resource = 0x0008, + iavf_aqc_opc_release_resource = 0x0009, - avf_aqc_opc_list_func_capabilities = 0x000A, - avf_aqc_opc_list_dev_capabilities = 0x000B, + iavf_aqc_opc_list_func_capabilities = 0x000A, + iavf_aqc_opc_list_dev_capabilities = 0x000B, /* Proxy commands */ - avf_aqc_opc_set_proxy_config = 0x0104, - avf_aqc_opc_set_ns_proxy_table_entry = 0x0105, + iavf_aqc_opc_set_proxy_config = 0x0104, + iavf_aqc_opc_set_ns_proxy_table_entry = 0x0105, /* LAA 
*/ - avf_aqc_opc_mac_address_read = 0x0107, - avf_aqc_opc_mac_address_write = 0x0108, + iavf_aqc_opc_mac_address_read = 0x0107, + iavf_aqc_opc_mac_address_write = 0x0108, /* PXE */ - avf_aqc_opc_clear_pxe_mode = 0x0110, + iavf_aqc_opc_clear_pxe_mode = 0x0110, /* WoL commands */ - avf_aqc_opc_set_wol_filter = 0x0120, - avf_aqc_opc_get_wake_reason = 0x0121, - avf_aqc_opc_clear_all_wol_filters = 0x025E, + iavf_aqc_opc_set_wol_filter = 0x0120, + iavf_aqc_opc_get_wake_reason = 0x0121, + iavf_aqc_opc_clear_all_wol_filters = 0x025E, /* internal switch commands */ - avf_aqc_opc_get_switch_config = 0x0200, - avf_aqc_opc_add_statistics = 0x0201, - avf_aqc_opc_remove_statistics = 0x0202, - avf_aqc_opc_set_port_parameters = 0x0203, - avf_aqc_opc_get_switch_resource_alloc = 0x0204, - avf_aqc_opc_set_switch_config = 0x0205, - avf_aqc_opc_rx_ctl_reg_read = 0x0206, - avf_aqc_opc_rx_ctl_reg_write = 0x0207, - - avf_aqc_opc_add_vsi = 0x0210, - avf_aqc_opc_update_vsi_parameters = 0x0211, - avf_aqc_opc_get_vsi_parameters = 0x0212, - - avf_aqc_opc_add_pv = 0x0220, - avf_aqc_opc_update_pv_parameters = 0x0221, - avf_aqc_opc_get_pv_parameters = 0x0222, - - avf_aqc_opc_add_veb = 0x0230, - avf_aqc_opc_update_veb_parameters = 0x0231, - avf_aqc_opc_get_veb_parameters = 0x0232, - - avf_aqc_opc_delete_element = 0x0243, - - avf_aqc_opc_add_macvlan = 0x0250, - avf_aqc_opc_remove_macvlan = 0x0251, - avf_aqc_opc_add_vlan = 0x0252, - avf_aqc_opc_remove_vlan = 0x0253, - avf_aqc_opc_set_vsi_promiscuous_modes = 0x0254, - avf_aqc_opc_add_tag = 0x0255, - avf_aqc_opc_remove_tag = 0x0256, - avf_aqc_opc_add_multicast_etag = 0x0257, - avf_aqc_opc_remove_multicast_etag = 0x0258, - avf_aqc_opc_update_tag = 0x0259, - avf_aqc_opc_add_control_packet_filter = 0x025A, - avf_aqc_opc_remove_control_packet_filter = 0x025B, - avf_aqc_opc_add_cloud_filters = 0x025C, - avf_aqc_opc_remove_cloud_filters = 0x025D, - avf_aqc_opc_clear_wol_switch_filters = 0x025E, - avf_aqc_opc_replace_cloud_filters = 0x025F, - - 
avf_aqc_opc_add_mirror_rule = 0x0260, - avf_aqc_opc_delete_mirror_rule = 0x0261, + iavf_aqc_opc_get_switch_config = 0x0200, + iavf_aqc_opc_add_statistics = 0x0201, + iavf_aqc_opc_remove_statistics = 0x0202, + iavf_aqc_opc_set_port_parameters = 0x0203, + iavf_aqc_opc_get_switch_resource_alloc = 0x0204, + iavf_aqc_opc_set_switch_config = 0x0205, + iavf_aqc_opc_rx_ctl_reg_read = 0x0206, + iavf_aqc_opc_rx_ctl_reg_write = 0x0207, + + iavf_aqc_opc_add_vsi = 0x0210, + iavf_aqc_opc_update_vsi_parameters = 0x0211, + iavf_aqc_opc_get_vsi_parameters = 0x0212, + + iavf_aqc_opc_add_pv = 0x0220, + iavf_aqc_opc_update_pv_parameters = 0x0221, + iavf_aqc_opc_get_pv_parameters = 0x0222, + + iavf_aqc_opc_add_veb = 0x0230, + iavf_aqc_opc_update_veb_parameters = 0x0231, + iavf_aqc_opc_get_veb_parameters = 0x0232, + + iavf_aqc_opc_delete_element = 0x0243, + + iavf_aqc_opc_add_macvlan = 0x0250, + iavf_aqc_opc_remove_macvlan = 0x0251, + iavf_aqc_opc_add_vlan = 0x0252, + iavf_aqc_opc_remove_vlan = 0x0253, + iavf_aqc_opc_set_vsi_promiscuous_modes = 0x0254, + iavf_aqc_opc_add_tag = 0x0255, + iavf_aqc_opc_remove_tag = 0x0256, + iavf_aqc_opc_add_multicast_etag = 0x0257, + iavf_aqc_opc_remove_multicast_etag = 0x0258, + iavf_aqc_opc_update_tag = 0x0259, + iavf_aqc_opc_add_control_packet_filter = 0x025A, + iavf_aqc_opc_remove_control_packet_filter = 0x025B, + iavf_aqc_opc_add_cloud_filters = 0x025C, + iavf_aqc_opc_remove_cloud_filters = 0x025D, + iavf_aqc_opc_clear_wol_switch_filters = 0x025E, + iavf_aqc_opc_replace_cloud_filters = 0x025F, + + iavf_aqc_opc_add_mirror_rule = 0x0260, + iavf_aqc_opc_delete_mirror_rule = 0x0261, /* Dynamic Device Personalization */ - avf_aqc_opc_write_personalization_profile = 0x0270, - avf_aqc_opc_get_personalization_profile_list = 0x0271, + iavf_aqc_opc_write_personalization_profile = 0x0270, + iavf_aqc_opc_get_personalization_profile_list = 0x0271, /* DCB commands */ - avf_aqc_opc_dcb_ignore_pfc = 0x0301, - avf_aqc_opc_dcb_updated = 0x0302, - 
	avf_aqc_opc_set_dcb_parameters	= 0x0303,
+	iavf_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	iavf_aqc_opc_dcb_updated	= 0x0302,
+	iavf_aqc_opc_set_dcb_parameters	= 0x0303,

	/* TX scheduler */
-	avf_aqc_opc_configure_vsi_bw_limit	= 0x0400,
-	avf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
-	avf_aqc_opc_configure_vsi_tc_bw	= 0x0407,
-	avf_aqc_opc_query_vsi_bw_config	= 0x0408,
-	avf_aqc_opc_query_vsi_ets_sla_config	= 0x040A,
-	avf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
-
-	avf_aqc_opc_enable_switching_comp_ets	= 0x0413,
-	avf_aqc_opc_modify_switching_comp_ets	= 0x0414,
-	avf_aqc_opc_disable_switching_comp_ets	= 0x0415,
-	avf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
-	avf_aqc_opc_configure_switching_comp_bw_config	= 0x0417,
-	avf_aqc_opc_query_switching_comp_ets_config	= 0x0418,
-	avf_aqc_opc_query_port_ets_config	= 0x0419,
-	avf_aqc_opc_query_switching_comp_bw_config	= 0x041A,
-	avf_aqc_opc_suspend_port_tx	= 0x041B,
-	avf_aqc_opc_resume_port_tx	= 0x041C,
-	avf_aqc_opc_configure_partition_bw	= 0x041D,
+	iavf_aqc_opc_configure_vsi_bw_limit	= 0x0400,
+	iavf_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	iavf_aqc_opc_configure_vsi_tc_bw	= 0x0407,
+	iavf_aqc_opc_query_vsi_bw_config	= 0x0408,
+	iavf_aqc_opc_query_vsi_ets_sla_config	= 0x040A,
+	iavf_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	iavf_aqc_opc_enable_switching_comp_ets	= 0x0413,
+	iavf_aqc_opc_modify_switching_comp_ets	= 0x0414,
+	iavf_aqc_opc_disable_switching_comp_ets	= 0x0415,
+	iavf_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	iavf_aqc_opc_configure_switching_comp_bw_config	= 0x0417,
+	iavf_aqc_opc_query_switching_comp_ets_config	= 0x0418,
+	iavf_aqc_opc_query_port_ets_config	= 0x0419,
+	iavf_aqc_opc_query_switching_comp_bw_config	= 0x041A,
+	iavf_aqc_opc_suspend_port_tx	= 0x041B,
+	iavf_aqc_opc_resume_port_tx	= 0x041C,
+	iavf_aqc_opc_configure_partition_bw	= 0x041D,

	/* hmc */
-	avf_aqc_opc_query_hmc_resource_profile	= 0x0500,
-	avf_aqc_opc_set_hmc_resource_profile	= 0x0501,
+	iavf_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	iavf_aqc_opc_set_hmc_resource_profile	= 0x0501,

	/* phy commands*/
-	avf_aqc_opc_get_phy_abilities	= 0x0600,
-	avf_aqc_opc_set_phy_config	= 0x0601,
-	avf_aqc_opc_set_mac_config	= 0x0603,
-	avf_aqc_opc_set_link_restart_an	= 0x0605,
-	avf_aqc_opc_get_link_status	= 0x0607,
-	avf_aqc_opc_set_phy_int_mask	= 0x0613,
-	avf_aqc_opc_get_local_advt_reg	= 0x0614,
-	avf_aqc_opc_set_local_advt_reg	= 0x0615,
-	avf_aqc_opc_get_partner_advt	= 0x0616,
-	avf_aqc_opc_set_lb_modes	= 0x0618,
-	avf_aqc_opc_get_phy_wol_caps	= 0x0621,
-	avf_aqc_opc_set_phy_debug	= 0x0622,
-	avf_aqc_opc_upload_ext_phy_fm	= 0x0625,
-	avf_aqc_opc_run_phy_activity	= 0x0626,
-	avf_aqc_opc_set_phy_register	= 0x0628,
-	avf_aqc_opc_get_phy_register	= 0x0629,
+	iavf_aqc_opc_get_phy_abilities	= 0x0600,
+	iavf_aqc_opc_set_phy_config	= 0x0601,
+	iavf_aqc_opc_set_mac_config	= 0x0603,
+	iavf_aqc_opc_set_link_restart_an	= 0x0605,
+	iavf_aqc_opc_get_link_status	= 0x0607,
+	iavf_aqc_opc_set_phy_int_mask	= 0x0613,
+	iavf_aqc_opc_get_local_advt_reg	= 0x0614,
+	iavf_aqc_opc_set_local_advt_reg	= 0x0615,
+	iavf_aqc_opc_get_partner_advt	= 0x0616,
+	iavf_aqc_opc_set_lb_modes	= 0x0618,
+	iavf_aqc_opc_get_phy_wol_caps	= 0x0621,
+	iavf_aqc_opc_set_phy_debug	= 0x0622,
+	iavf_aqc_opc_upload_ext_phy_fm	= 0x0625,
+	iavf_aqc_opc_run_phy_activity	= 0x0626,
+	iavf_aqc_opc_set_phy_register	= 0x0628,
+	iavf_aqc_opc_get_phy_register	= 0x0629,

	/* NVM commands */
-	avf_aqc_opc_nvm_read	= 0x0701,
-	avf_aqc_opc_nvm_erase	= 0x0702,
-	avf_aqc_opc_nvm_update	= 0x0703,
-	avf_aqc_opc_nvm_config_read	= 0x0704,
-	avf_aqc_opc_nvm_config_write	= 0x0705,
-	avf_aqc_opc_nvm_progress	= 0x0706,
-	avf_aqc_opc_oem_post_update	= 0x0720,
-	avf_aqc_opc_thermal_sensor	= 0x0721,
+	iavf_aqc_opc_nvm_read	= 0x0701,
+	iavf_aqc_opc_nvm_erase	= 0x0702,
+	iavf_aqc_opc_nvm_update	= 0x0703,
+	iavf_aqc_opc_nvm_config_read	= 0x0704,
+	iavf_aqc_opc_nvm_config_write	= 0x0705,
+	iavf_aqc_opc_nvm_progress	= 0x0706,
+	iavf_aqc_opc_oem_post_update	= 0x0720,
+	iavf_aqc_opc_thermal_sensor	= 0x0721,

	/* virtualization commands */
-	avf_aqc_opc_send_msg_to_pf	= 0x0801,
-	avf_aqc_opc_send_msg_to_vf	= 0x0802,
-	avf_aqc_opc_send_msg_to_peer	= 0x0803,
+	iavf_aqc_opc_send_msg_to_pf	= 0x0801,
+	iavf_aqc_opc_send_msg_to_vf	= 0x0802,
+	iavf_aqc_opc_send_msg_to_peer	= 0x0803,

	/* alternate structure */
-	avf_aqc_opc_alternate_write	= 0x0900,
-	avf_aqc_opc_alternate_write_indirect	= 0x0901,
-	avf_aqc_opc_alternate_read	= 0x0902,
-	avf_aqc_opc_alternate_read_indirect	= 0x0903,
-	avf_aqc_opc_alternate_write_done	= 0x0904,
-	avf_aqc_opc_alternate_set_mode	= 0x0905,
-	avf_aqc_opc_alternate_clear_port	= 0x0906,
+	iavf_aqc_opc_alternate_write	= 0x0900,
+	iavf_aqc_opc_alternate_write_indirect	= 0x0901,
+	iavf_aqc_opc_alternate_read	= 0x0902,
+	iavf_aqc_opc_alternate_read_indirect	= 0x0903,
+	iavf_aqc_opc_alternate_write_done	= 0x0904,
+	iavf_aqc_opc_alternate_set_mode	= 0x0905,
+	iavf_aqc_opc_alternate_clear_port	= 0x0906,

	/* LLDP commands */
-	avf_aqc_opc_lldp_get_mib	= 0x0A00,
-	avf_aqc_opc_lldp_update_mib	= 0x0A01,
-	avf_aqc_opc_lldp_add_tlv	= 0x0A02,
-	avf_aqc_opc_lldp_update_tlv	= 0x0A03,
-	avf_aqc_opc_lldp_delete_tlv	= 0x0A04,
-	avf_aqc_opc_lldp_stop	= 0x0A05,
-	avf_aqc_opc_lldp_start	= 0x0A06,
-	avf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
-	avf_aqc_opc_lldp_set_local_mib	= 0x0A08,
-	avf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,
+	iavf_aqc_opc_lldp_get_mib	= 0x0A00,
+	iavf_aqc_opc_lldp_update_mib	= 0x0A01,
+	iavf_aqc_opc_lldp_add_tlv	= 0x0A02,
+	iavf_aqc_opc_lldp_update_tlv	= 0x0A03,
+	iavf_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	iavf_aqc_opc_lldp_stop	= 0x0A05,
+	iavf_aqc_opc_lldp_start	= 0x0A06,
+	iavf_aqc_opc_get_cee_dcb_cfg	= 0x0A07,
+	iavf_aqc_opc_lldp_set_local_mib	= 0x0A08,
+	iavf_aqc_opc_lldp_stop_start_spec_agent	= 0x0A09,

	/* Tunnel commands */
-	avf_aqc_opc_add_udp_tunnel	= 0x0B00,
-	avf_aqc_opc_del_udp_tunnel	= 0x0B01,
-	avf_aqc_opc_set_rss_key	= 0x0B02,
-	avf_aqc_opc_set_rss_lut	= 0x0B03,
-	avf_aqc_opc_get_rss_key	= 0x0B04,
-	avf_aqc_opc_get_rss_lut	= 0x0B05,
+	iavf_aqc_opc_add_udp_tunnel	= 0x0B00,
+	iavf_aqc_opc_del_udp_tunnel	= 0x0B01,
+	iavf_aqc_opc_set_rss_key	= 0x0B02,
+	iavf_aqc_opc_set_rss_lut	= 0x0B03,
+	iavf_aqc_opc_get_rss_key	= 0x0B04,
+	iavf_aqc_opc_get_rss_lut	= 0x0B05,

	/* Async Events */
-	avf_aqc_opc_event_lan_overflow	= 0x1001,
+	iavf_aqc_opc_event_lan_overflow	= 0x1001,

	/* OEM commands */
-	avf_aqc_opc_oem_parameter_change	= 0xFE00,
-	avf_aqc_opc_oem_device_status_change	= 0xFE01,
-	avf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
-	avf_aqc_opc_oem_ocbb_initialize	= 0xFE03,
+	iavf_aqc_opc_oem_parameter_change	= 0xFE00,
+	iavf_aqc_opc_oem_device_status_change	= 0xFE01,
+	iavf_aqc_opc_oem_ocsd_initialize	= 0xFE02,
+	iavf_aqc_opc_oem_ocbb_initialize	= 0xFE03,

	/* debug commands */
-	avf_aqc_opc_debug_read_reg	= 0xFF03,
-	avf_aqc_opc_debug_write_reg	= 0xFF04,
-	avf_aqc_opc_debug_modify_reg	= 0xFF07,
-	avf_aqc_opc_debug_dump_internals	= 0xFF08,
+	iavf_aqc_opc_debug_read_reg	= 0xFF03,
+	iavf_aqc_opc_debug_write_reg	= 0xFF04,
+	iavf_aqc_opc_debug_modify_reg	= 0xFF07,
+	iavf_aqc_opc_debug_dump_internals	= 0xFF08,
};

/* command structures and indirect data structures */
@@ -338,18 +338,18 @@ enum avf_admin_queue_opc {
 * structure is not of the correct size, otherwise it creates an enum that is
 * never used.
 */
-#define AVF_CHECK_STRUCT_LEN(n, X) enum avf_static_assert_enum_##X \
-	{ avf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define IAVF_CHECK_STRUCT_LEN(n, X) enum iavf_static_assert_enum_##X \
+	{ iavf_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }

/* This macro is used extensively to ensure that command structures are 16
 * bytes in length as they have to map to the raw array of that size.
 */
-#define AVF_CHECK_CMD_LENGTH(X)	AVF_CHECK_STRUCT_LEN(16, X)
+#define IAVF_CHECK_CMD_LENGTH(X)	IAVF_CHECK_STRUCT_LEN(16, X)

/* internal (0x00XX) commands */

/* Get version (direct 0x0001) */
-struct avf_aqc_get_version {
+struct iavf_aqc_get_version {
	__le32 rom_ver;
	__le32 fw_build;
	__le16 fw_major;
@@ -358,10 +358,10 @@ struct avf_aqc_get_version {
	__le16 api_minor;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_get_version);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_version);

/* Send driver version (indirect 0x0002) */
-struct avf_aqc_driver_version {
+struct iavf_aqc_driver_version {
	u8 driver_major_ver;
	u8 driver_minor_ver;
	u8 driver_build_ver;
@@ -371,36 +371,36 @@ struct avf_aqc_driver_version {
	__le32 address_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_driver_version);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_driver_version);

/* Queue Shutdown (direct 0x0003) */
-struct avf_aqc_queue_shutdown {
+struct iavf_aqc_queue_shutdown {
	__le32 driver_unloading;
-#define AVF_AQ_DRIVER_UNLOADING	0x1
+#define IAVF_AQ_DRIVER_UNLOADING	0x1
	u8 reserved[12];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_queue_shutdown);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_queue_shutdown);

/* Set PF context (0x0004, direct) */
-struct avf_aqc_set_pf_context {
+struct iavf_aqc_set_pf_context {
	u8 pf_id;
	u8 reserved[15];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_set_pf_context);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_pf_context);

/* Request resource ownership (direct 0x0008)
 * Release resource ownership (direct 0x0009)
 */
-#define AVF_AQ_RESOURCE_NVM	1
-#define AVF_AQ_RESOURCE_SDP	2
-#define AVF_AQ_RESOURCE_ACCESS_READ	1
-#define AVF_AQ_RESOURCE_ACCESS_WRITE	2
-#define AVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
-#define AVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
-
-struct avf_aqc_request_resource {
+#define IAVF_AQ_RESOURCE_NVM	1
+#define IAVF_AQ_RESOURCE_SDP	2
+#define IAVF_AQ_RESOURCE_ACCESS_READ	1
+#define IAVF_AQ_RESOURCE_ACCESS_WRITE	2
+#define IAVF_AQ_RESOURCE_NVM_READ_TIMEOUT	3000
+#define IAVF_AQ_RESOURCE_NVM_WRITE_TIMEOUT	180000
+
+struct iavf_aqc_request_resource {
	__le16 resource_id;
	__le16 access_type;
	__le32 timeout;
@@ -408,14 +408,14 @@ struct avf_aqc_request_resource {
	u8 reserved[4];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_request_resource);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_request_resource);

/* Get function capabilities (indirect 0x000A)
 * Get device capabilities (indirect 0x000B)
 */
-struct avf_aqc_list_capabilites {
+struct iavf_aqc_list_capabilites {
	u8 command_flags;
-#define AVF_AQ_LIST_CAP_PF_INDEX_EN	1
+#define IAVF_AQ_LIST_CAP_PF_INDEX_EN	1
	u8 pf_index;
	u8 reserved[2];
	__le32 count;
@@ -423,9 +423,9 @@ struct avf_aqc_list_capabilites {
	__le32 addr_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_list_capabilites);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_list_capabilites);

-struct avf_aqc_list_capabilities_element_resp {
+struct iavf_aqc_list_capabilities_element_resp {
	__le16 id;
	u8 major_rev;
	u8 minor_rev;
@@ -437,46 +437,46 @@ struct avf_aqc_list_capabilities_element_resp {

/* list of caps */

-#define AVF_AQ_CAP_ID_SWITCH_MODE	0x0001
-#define AVF_AQ_CAP_ID_MNG_MODE	0x0002
-#define AVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
-#define AVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
-#define AVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
-#define AVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
-#define AVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
-#define AVF_AQ_CAP_ID_SRIOV	0x0012
-#define AVF_AQ_CAP_ID_VF	0x0013
-#define AVF_AQ_CAP_ID_VMDQ	0x0014
-#define AVF_AQ_CAP_ID_8021QBG	0x0015
-#define AVF_AQ_CAP_ID_8021QBR	0x0016
-#define AVF_AQ_CAP_ID_VSI	0x0017
-#define AVF_AQ_CAP_ID_DCB	0x0018
-#define AVF_AQ_CAP_ID_FCOE	0x0021
-#define AVF_AQ_CAP_ID_ISCSI	0x0022
-#define AVF_AQ_CAP_ID_RSS	0x0040
-#define AVF_AQ_CAP_ID_RXQ	0x0041
-#define AVF_AQ_CAP_ID_TXQ	0x0042
-#define AVF_AQ_CAP_ID_MSIX	0x0043
-#define AVF_AQ_CAP_ID_VF_MSIX	0x0044
-#define AVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
-#define AVF_AQ_CAP_ID_1588	0x0046
-#define AVF_AQ_CAP_ID_IWARP	0x0051
-#define AVF_AQ_CAP_ID_LED	0x0061
-#define AVF_AQ_CAP_ID_SDP	0x0062
-#define AVF_AQ_CAP_ID_MDIO	0x0063
-#define AVF_AQ_CAP_ID_WSR_PROT	0x0064
-#define AVF_AQ_CAP_ID_NVM_MGMT	0x0080
-#define AVF_AQ_CAP_ID_FLEX10	0x00F1
-#define AVF_AQ_CAP_ID_CEM	0x00F2
+#define IAVF_AQ_CAP_ID_SWITCH_MODE	0x0001
+#define IAVF_AQ_CAP_ID_MNG_MODE	0x0002
+#define IAVF_AQ_CAP_ID_NPAR_ACTIVE	0x0003
+#define IAVF_AQ_CAP_ID_OS2BMC_CAP	0x0004
+#define IAVF_AQ_CAP_ID_FUNCTIONS_VALID	0x0005
+#define IAVF_AQ_CAP_ID_ALTERNATE_RAM	0x0006
+#define IAVF_AQ_CAP_ID_WOL_AND_PROXY	0x0008
+#define IAVF_AQ_CAP_ID_SRIOV	0x0012
+#define IAVF_AQ_CAP_ID_VF	0x0013
+#define IAVF_AQ_CAP_ID_VMDQ	0x0014
+#define IAVF_AQ_CAP_ID_8021QBG	0x0015
+#define IAVF_AQ_CAP_ID_8021QBR	0x0016
+#define IAVF_AQ_CAP_ID_VSI	0x0017
+#define IAVF_AQ_CAP_ID_DCB	0x0018
+#define IAVF_AQ_CAP_ID_FCOE	0x0021
+#define IAVF_AQ_CAP_ID_ISCSI	0x0022
+#define IAVF_AQ_CAP_ID_RSS	0x0040
+#define IAVF_AQ_CAP_ID_RXQ	0x0041
+#define IAVF_AQ_CAP_ID_TXQ	0x0042
+#define IAVF_AQ_CAP_ID_MSIX	0x0043
+#define IAVF_AQ_CAP_ID_VF_MSIX	0x0044
+#define IAVF_AQ_CAP_ID_FLOW_DIRECTOR	0x0045
+#define IAVF_AQ_CAP_ID_1588	0x0046
+#define IAVF_AQ_CAP_ID_IWARP	0x0051
+#define IAVF_AQ_CAP_ID_LED	0x0061
+#define IAVF_AQ_CAP_ID_SDP	0x0062
+#define IAVF_AQ_CAP_ID_MDIO	0x0063
+#define IAVF_AQ_CAP_ID_WSR_PROT	0x0064
+#define IAVF_AQ_CAP_ID_NVM_MGMT	0x0080
+#define IAVF_AQ_CAP_ID_FLEX10	0x00F1
+#define IAVF_AQ_CAP_ID_CEM	0x00F2

/* Set CPPM Configuration (direct 0x0103) */
-struct avf_aqc_cppm_configuration {
+struct iavf_aqc_cppm_configuration {
	__le16 command_flags;
-#define AVF_AQ_CPPM_EN_LTRC	0x0800
-#define AVF_AQ_CPPM_EN_DMCTH	0x1000
-#define AVF_AQ_CPPM_EN_DMCTLX	0x2000
-#define AVF_AQ_CPPM_EN_HPTC	0x4000
-#define AVF_AQ_CPPM_EN_DMARC	0x8000
+#define IAVF_AQ_CPPM_EN_LTRC	0x0800
+#define IAVF_AQ_CPPM_EN_DMCTH	0x1000
+#define IAVF_AQ_CPPM_EN_DMCTLX	0x2000
+#define IAVF_AQ_CPPM_EN_HPTC	0x4000
+#define IAVF_AQ_CPPM_EN_DMARC	0x8000
	__le16 ttlx;
	__le32 dmacr;
	__le16 dmcth;
@@ -485,47 +485,47 @@ struct avf_aqc_cppm_configuration {
	__le32 pfltrc;
};
-AVF_CHECK_CMD_LENGTH(avf_aqc_cppm_configuration);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_cppm_configuration);

/* Set ARP Proxy command / response (indirect 0x0104) */
-struct avf_aqc_arp_proxy_data {
+struct iavf_aqc_arp_proxy_data {
	__le16 command_flags;
-#define AVF_AQ_ARP_INIT_IPV4	0x0800
-#define AVF_AQ_ARP_UNSUP_CTL	0x1000
-#define AVF_AQ_ARP_ENA	0x2000
-#define AVF_AQ_ARP_ADD_IPV4	0x4000
-#define AVF_AQ_ARP_DEL_IPV4	0x8000
+#define IAVF_AQ_ARP_INIT_IPV4	0x0800
+#define IAVF_AQ_ARP_UNSUP_CTL	0x1000
+#define IAVF_AQ_ARP_ENA	0x2000
+#define IAVF_AQ_ARP_ADD_IPV4	0x4000
+#define IAVF_AQ_ARP_DEL_IPV4	0x8000
	__le16 table_id;
	__le32 enabled_offloads;
-#define AVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
-#define AVF_AQ_ARP_OFFLOAD_ENABLE	0x00000800
+#define IAVF_AQ_ARP_DIRECTED_OFFLOAD_ENABLE	0x00000020
+#define IAVF_AQ_ARP_OFFLOAD_ENABLE	0x00000800
	__le32 ip_addr;
	u8 mac_addr[6];
	u8 reserved[2];
};

-AVF_CHECK_STRUCT_LEN(0x14, avf_aqc_arp_proxy_data);
+IAVF_CHECK_STRUCT_LEN(0x14, iavf_aqc_arp_proxy_data);

/* Set NS Proxy Table Entry Command (indirect 0x0105) */
-struct avf_aqc_ns_proxy_data {
+struct iavf_aqc_ns_proxy_data {
	__le16 table_idx_mac_addr_0;
	__le16 table_idx_mac_addr_1;
	__le16 table_idx_ipv6_0;
	__le16 table_idx_ipv6_1;
	__le16 control;
-#define AVF_AQ_NS_PROXY_ADD_0	0x0001
-#define AVF_AQ_NS_PROXY_DEL_0	0x0002
-#define AVF_AQ_NS_PROXY_ADD_1	0x0004
-#define AVF_AQ_NS_PROXY_DEL_1	0x0008
-#define AVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
-#define AVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
-#define AVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
-#define AVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
-#define AVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
-#define AVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
-#define AVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
-#define AVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
-#define AVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
+#define IAVF_AQ_NS_PROXY_ADD_0	0x0001
+#define IAVF_AQ_NS_PROXY_DEL_0	0x0002
+#define IAVF_AQ_NS_PROXY_ADD_1	0x0004
+#define IAVF_AQ_NS_PROXY_DEL_1	0x0008
+#define IAVF_AQ_NS_PROXY_ADD_IPV6_0	0x0010
+#define IAVF_AQ_NS_PROXY_DEL_IPV6_0	0x0020
+#define IAVF_AQ_NS_PROXY_ADD_IPV6_1	0x0040
+#define IAVF_AQ_NS_PROXY_DEL_IPV6_1	0x0080
+#define IAVF_AQ_NS_PROXY_COMMAND_SEQ	0x0100
+#define IAVF_AQ_NS_PROXY_INIT_IPV6_TBL	0x0200
+#define IAVF_AQ_NS_PROXY_INIT_MAC_TBL	0x0400
+#define IAVF_AQ_NS_PROXY_OFFLOAD_ENABLE	0x0800
+#define IAVF_AQ_NS_PROXY_DIRECTED_OFFLOAD_ENABLE	0x1000
	u8 mac_addr_0[6];
	u8 mac_addr_1[6];
	u8 local_mac_addr[6];
@@ -533,247 +533,247 @@ struct avf_aqc_ns_proxy_data {
	u8 ipv6_addr_1[16];
};

-AVF_CHECK_STRUCT_LEN(0x3c, avf_aqc_ns_proxy_data);
+IAVF_CHECK_STRUCT_LEN(0x3c, iavf_aqc_ns_proxy_data);

/* Manage LAA Command (0x0106) - obsolete */
-struct avf_aqc_mng_laa {
+struct iavf_aqc_mng_laa {
	__le16 command_flags;
-#define AVF_AQ_LAA_FLAG_WR	0x8000
+#define IAVF_AQ_LAA_FLAG_WR	0x8000
	u8 reserved[2];
	__le32 sal;
	__le16 sah;
	u8 reserved2[6];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_mng_laa);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_mng_laa);

/* Manage MAC Address Read Command (indirect 0x0107) */
-struct avf_aqc_mac_address_read {
+struct iavf_aqc_mac_address_read {
	__le16 command_flags;
-#define AVF_AQC_LAN_ADDR_VALID	0x10
-#define AVF_AQC_SAN_ADDR_VALID	0x20
-#define AVF_AQC_PORT_ADDR_VALID	0x40
-#define AVF_AQC_WOL_ADDR_VALID	0x80
-#define AVF_AQC_MC_MAG_EN_VALID	0x100
-#define AVF_AQC_WOL_PRESERVE_STATUS	0x200
-#define AVF_AQC_ADDR_VALID_MASK	0x3F0
+#define IAVF_AQC_LAN_ADDR_VALID	0x10
+#define IAVF_AQC_SAN_ADDR_VALID	0x20
+#define IAVF_AQC_PORT_ADDR_VALID	0x40
+#define IAVF_AQC_WOL_ADDR_VALID	0x80
+#define IAVF_AQC_MC_MAG_EN_VALID	0x100
+#define IAVF_AQC_WOL_PRESERVE_STATUS	0x200
+#define IAVF_AQC_ADDR_VALID_MASK	0x3F0
	u8 reserved[6];
	__le32 addr_high;
	__le32 addr_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_read);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_mac_address_read);

-struct avf_aqc_mac_address_read_data {
+struct iavf_aqc_mac_address_read_data {
	u8 pf_lan_mac[6];
	u8 pf_san_mac[6];
	u8 port_mac[6];
	u8 pf_wol_mac[6];
};
-AVF_CHECK_STRUCT_LEN(24, avf_aqc_mac_address_read_data);
+IAVF_CHECK_STRUCT_LEN(24, iavf_aqc_mac_address_read_data);

/* Manage MAC Address Write Command (0x0108) */
-struct avf_aqc_mac_address_write {
+struct iavf_aqc_mac_address_write {
	__le16 command_flags;
-#define AVF_AQC_MC_MAG_EN	0x0100
-#define AVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
-#define AVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
-#define AVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
-#define AVF_AQC_WRITE_TYPE_PORT	0x8000
-#define AVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
-#define AVF_AQC_WRITE_TYPE_MASK	0xC000
+#define IAVF_AQC_MC_MAG_EN	0x0100
+#define IAVF_AQC_WOL_PRESERVE_ON_PFR	0x0200
+#define IAVF_AQC_WRITE_TYPE_LAA_ONLY	0x0000
+#define IAVF_AQC_WRITE_TYPE_LAA_WOL	0x4000
+#define IAVF_AQC_WRITE_TYPE_PORT	0x8000
+#define IAVF_AQC_WRITE_TYPE_UPDATE_MC_MAG	0xC000
+#define IAVF_AQC_WRITE_TYPE_MASK	0xC000
	__le16 mac_sah;
	__le32 mac_sal;
	u8 reserved[8];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_mac_address_write);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_mac_address_write);

/* PXE commands (0x011x) */

/* Clear PXE Command and response (direct 0x0110) */
-struct avf_aqc_clear_pxe {
+struct iavf_aqc_clear_pxe {
	u8 rx_cnt;
	u8 reserved[15];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_clear_pxe);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_clear_pxe);

/* Set WoL Filter (0x0120) */
-struct avf_aqc_set_wol_filter {
+struct iavf_aqc_set_wol_filter {
	__le16 filter_index;
-#define AVF_AQC_MAX_NUM_WOL_FILTERS	8
-#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
-#define AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
-		AVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
-
-#define AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT	0
-#define AVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
-		AVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
+#define IAVF_AQC_MAX_NUM_WOL_FILTERS	8
+#define IAVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT	15
+#define IAVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_MASK	(0x1 << \
+		IAVF_AQC_SET_WOL_FILTER_TYPE_MAGIC_SHIFT)
+
+#define IAVF_AQC_SET_WOL_FILTER_INDEX_SHIFT	0
+#define IAVF_AQC_SET_WOL_FILTER_INDEX_MASK	(0x7 << \
+		IAVF_AQC_SET_WOL_FILTER_INDEX_SHIFT)
	__le16 cmd_flags;
-#define AVF_AQC_SET_WOL_FILTER	0x8000
-#define AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL	0x4000
-#define AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
-#define AVF_AQC_SET_WOL_FILTER_ACTION_CLEAR	0
-#define AVF_AQC_SET_WOL_FILTER_ACTION_SET	1
+#define IAVF_AQC_SET_WOL_FILTER	0x8000
+#define IAVF_AQC_SET_WOL_FILTER_NO_TCO_WOL	0x4000
+#define IAVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR	0x2000
+#define IAVF_AQC_SET_WOL_FILTER_ACTION_CLEAR	0
+#define IAVF_AQC_SET_WOL_FILTER_ACTION_SET	1
	__le16 valid_flags;
-#define AVF_AQC_SET_WOL_FILTER_ACTION_VALID	0x8000
-#define AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
+#define IAVF_AQC_SET_WOL_FILTER_ACTION_VALID	0x8000
+#define IAVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID	0x4000
	u8 reserved[2];
	__le32 address_high;
	__le32 address_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_set_wol_filter);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_wol_filter);

-struct avf_aqc_set_wol_filter_data {
+struct iavf_aqc_set_wol_filter_data {
	u8 filter[128];
	u8 mask[16];
};

-AVF_CHECK_STRUCT_LEN(0x90, avf_aqc_set_wol_filter_data);
+IAVF_CHECK_STRUCT_LEN(0x90, iavf_aqc_set_wol_filter_data);

/* Get Wake Reason (0x0121) */
-struct avf_aqc_get_wake_reason_completion {
+struct iavf_aqc_get_wake_reason_completion {
	u8 reserved_1[2];
	__le16 wake_reason;
-#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
-#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK	(0xFF << \
-		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
-#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
-#define AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
-		AVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
+#define IAVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT	0
+#define IAVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_MASK	(0xFF << \
+		IAVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_MATCHED_INDEX_SHIFT)
+#define IAVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT	8
+#define IAVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_MASK	(0xFF << \
+		IAVF_AQC_GET_WAKE_UP_REASON_WOL_REASON_RESERVED_SHIFT)
	u8 reserved_2[12];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_get_wake_reason_completion);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_wake_reason_completion);

/* Switch configuration commands (0x02xx) */

/* Used by many indirect commands that only pass an seid and a buffer in the
 * command
 */
-struct avf_aqc_switch_seid {
+struct iavf_aqc_switch_seid {
	__le16 seid;
	u8 reserved[6];
	__le32 addr_high;
	__le32 addr_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_switch_seid);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_switch_seid);

/* Get Switch Configuration command (indirect 0x0200)
- * uses avf_aqc_switch_seid for the descriptor
+ * uses iavf_aqc_switch_seid for the descriptor
 */
-struct avf_aqc_get_switch_config_header_resp {
+struct iavf_aqc_get_switch_config_header_resp {
	__le16 num_reported;
	__le16 num_total;
	u8 reserved[12];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_config_header_resp);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_switch_config_header_resp);

-struct avf_aqc_switch_config_element_resp {
+struct iavf_aqc_switch_config_element_resp {
	u8 element_type;
-#define AVF_AQ_SW_ELEM_TYPE_MAC	1
-#define AVF_AQ_SW_ELEM_TYPE_PF	2
-#define AVF_AQ_SW_ELEM_TYPE_VF	3
-#define AVF_AQ_SW_ELEM_TYPE_EMP	4
-#define AVF_AQ_SW_ELEM_TYPE_BMC	5
-#define AVF_AQ_SW_ELEM_TYPE_PV	16
-#define AVF_AQ_SW_ELEM_TYPE_VEB	17
-#define AVF_AQ_SW_ELEM_TYPE_PA	18
-#define AVF_AQ_SW_ELEM_TYPE_VSI	19
+#define IAVF_AQ_SW_ELEM_TYPE_MAC	1
+#define IAVF_AQ_SW_ELEM_TYPE_PF	2
+#define IAVF_AQ_SW_ELEM_TYPE_VF	3
+#define IAVF_AQ_SW_ELEM_TYPE_EMP	4
+#define IAVF_AQ_SW_ELEM_TYPE_BMC	5
+#define IAVF_AQ_SW_ELEM_TYPE_PV	16
+#define IAVF_AQ_SW_ELEM_TYPE_VEB	17
+#define IAVF_AQ_SW_ELEM_TYPE_PA	18
+#define IAVF_AQ_SW_ELEM_TYPE_VSI	19
	u8 revision;
-#define AVF_AQ_SW_ELEM_REV_1	1
+#define IAVF_AQ_SW_ELEM_REV_1	1
	__le16 seid;
	__le16 uplink_seid;
	__le16 downlink_seid;
	u8 reserved[3];
	u8 connection_type;
-#define AVF_AQ_CONN_TYPE_REGULAR	0x1
-#define AVF_AQ_CONN_TYPE_DEFAULT	0x2
-#define AVF_AQ_CONN_TYPE_CASCADED	0x3
+#define IAVF_AQ_CONN_TYPE_REGULAR	0x1
+#define IAVF_AQ_CONN_TYPE_DEFAULT	0x2
+#define IAVF_AQ_CONN_TYPE_CASCADED	0x3
	__le16 scheduler_id;
	__le16 element_info;
};

-AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_config_element_resp);
+IAVF_CHECK_STRUCT_LEN(0x10, iavf_aqc_switch_config_element_resp);

/* Get Switch Configuration (indirect 0x0200)
 * an array of elements are returned in the response buffer
 * the first in the array is the header, remainder are elements
 */
-struct avf_aqc_get_switch_config_resp {
-	struct avf_aqc_get_switch_config_header_resp	header;
-	struct avf_aqc_switch_config_element_resp	element[1];
+struct iavf_aqc_get_switch_config_resp {
+	struct iavf_aqc_get_switch_config_header_resp	header;
+	struct iavf_aqc_switch_config_element_resp	element[1];
};

-AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_switch_config_resp);
+IAVF_CHECK_STRUCT_LEN(0x20, iavf_aqc_get_switch_config_resp);

/* Add Statistics (direct 0x0201)
 * Remove Statistics (direct 0x0202)
 */
-struct avf_aqc_add_remove_statistics {
+struct iavf_aqc_add_remove_statistics {
	__le16 seid;
	__le16 vlan;
	__le16 stat_index;
	u8 reserved[10];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_statistics);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_statistics);

/* Set Port Parameters command (direct 0x0203) */
-struct avf_aqc_set_port_parameters {
+struct iavf_aqc_set_port_parameters {
	__le16 command_flags;
-#define AVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
-#define AVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
-#define AVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
+#define IAVF_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS	1
+#define IAVF_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS	2 /* must set! */
+#define IAVF_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA	4
	__le16 bad_frame_vsi;
-#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
-#define AVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
+#define IAVF_AQ_SET_P_PARAMS_BFRAME_SEID_SHIFT	0x0
+#define IAVF_AQ_SET_P_PARAMS_BFRAME_SEID_MASK	0x3FF
	__le16 default_seid; /* reserved for command */
	u8 reserved[10];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_set_port_parameters);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_port_parameters);

/* Get Switch Resource Allocation (indirect 0x0204) */
-struct avf_aqc_get_switch_resource_alloc {
+struct iavf_aqc_get_switch_resource_alloc {
	u8 num_entries; /* reserved for command */
	u8 reserved[7];
	__le32 addr_high;
	__le32 addr_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_get_switch_resource_alloc);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_switch_resource_alloc);

/* expect an array of these structs in the response buffer */
-struct avf_aqc_switch_resource_alloc_element_resp {
+struct iavf_aqc_switch_resource_alloc_element_resp {
	u8 resource_type;
-#define AVF_AQ_RESOURCE_TYPE_VEB	0x0
-#define AVF_AQ_RESOURCE_TYPE_VSI	0x1
-#define AVF_AQ_RESOURCE_TYPE_MACADDR	0x2
-#define AVF_AQ_RESOURCE_TYPE_STAG	0x3
-#define AVF_AQ_RESOURCE_TYPE_ETAG	0x4
-#define AVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
-#define AVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
-#define AVF_AQ_RESOURCE_TYPE_VLAN	0x7
-#define AVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
-#define AVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
-#define AVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
-#define AVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
-#define AVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
-#define AVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
-#define AVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
-#define AVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
-#define AVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
-#define AVF_AQ_RESOURCE_TYPE_VN2_KEYS	0x12
-#define AVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
+#define IAVF_AQ_RESOURCE_TYPE_VEB	0x0
+#define IAVF_AQ_RESOURCE_TYPE_VSI	0x1
+#define IAVF_AQ_RESOURCE_TYPE_MACADDR	0x2
+#define IAVF_AQ_RESOURCE_TYPE_STAG	0x3
+#define IAVF_AQ_RESOURCE_TYPE_ETAG	0x4
+#define IAVF_AQ_RESOURCE_TYPE_MULTICAST_HASH	0x5
+#define IAVF_AQ_RESOURCE_TYPE_UNICAST_HASH	0x6
+#define IAVF_AQ_RESOURCE_TYPE_VLAN	0x7
+#define IAVF_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY	0x8
+#define IAVF_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY	0x9
+#define IAVF_AQ_RESOURCE_TYPE_VLAN_STAT_POOL	0xA
+#define IAVF_AQ_RESOURCE_TYPE_MIRROR_RULE	0xB
+#define IAVF_AQ_RESOURCE_TYPE_QUEUE_SETS	0xC
+#define IAVF_AQ_RESOURCE_TYPE_VLAN_FILTERS	0xD
+#define IAVF_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS	0xF
+#define IAVF_AQ_RESOURCE_TYPE_IP_FILTERS	0x10
+#define IAVF_AQ_RESOURCE_TYPE_GRE_VN_KEYS	0x11
+#define IAVF_AQ_RESOURCE_TYPE_VN2_KEYS	0x12
+#define IAVF_AQ_RESOURCE_TYPE_TUNNEL_PORTS	0x13
	u8 reserved1;
	__le16 guaranteed;
	__le16 total;
@@ -782,15 +782,15 @@ struct avf_aqc_switch_resource_alloc_element_resp {
	u8 reserved2[6];
};

-AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_switch_resource_alloc_element_resp);
+IAVF_CHECK_STRUCT_LEN(0x10, iavf_aqc_switch_resource_alloc_element_resp);

/* Set Switch Configuration (direct 0x0205) */
-struct avf_aqc_set_switch_config {
+struct iavf_aqc_set_switch_config {
	__le16 flags;
/* flags used for both fields below */
-#define AVF_AQ_SET_SWITCH_CFG_PROMISC	0x0001
-#define AVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
-#define AVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
+#define IAVF_AQ_SET_SWITCH_CFG_PROMISC	0x0001
+#define IAVF_AQ_SET_SWITCH_CFG_L2_FILTER	0x0002
+#define IAVF_AQ_SET_SWITCH_CFG_HW_ATR_EVICT	0x0004
	__le16 valid_flags;
	/* The ethertype in switch_tag is dropped on ingress and used
	 * internally by the switch. Set this to zero for the default
@@ -810,24 +810,24 @@ struct avf_aqc_set_switch_config {
	u8 reserved[6];
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_set_switch_config);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_switch_config);

/* Read Receive control registers (direct 0x0206)
 * Write Receive control registers (direct 0x0207)
 * used for accessing Rx control registers that can be
 * slow and need special handling when under high Rx load
 */
-struct avf_aqc_rx_ctl_reg_read_write {
+struct iavf_aqc_rx_ctl_reg_read_write {
	__le32 reserved1;
	__le32 address;
	__le32 reserved2;
	__le32 value;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_rx_ctl_reg_read_write);

/* Add VSI (indirect 0x0210)
- * this indirect command uses struct avf_aqc_vsi_properties_data
+ * this indirect command uses struct iavf_aqc_vsi_properties_data
 * as the indirect buffer (128 bytes)
 *
 * Update VSI (indirect 0x211)
@@ -836,30 +836,30 @@ AVF_CHECK_CMD_LENGTH(avf_aqc_rx_ctl_reg_read_write);
 * Get VSI (indirect 0x0212)
 * uses the same completion and data structure as Add VSI
 */
-struct avf_aqc_add_get_update_vsi {
+struct iavf_aqc_add_get_update_vsi {
	__le16 uplink_seid;
	u8 connection_type;
-#define AVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
-#define AVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
-#define AVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
+#define IAVF_AQ_VSI_CONN_TYPE_NORMAL	0x1
+#define IAVF_AQ_VSI_CONN_TYPE_DEFAULT	0x2
+#define IAVF_AQ_VSI_CONN_TYPE_CASCADED	0x3
	u8 reserved1;
	u8 vf_id;
	u8 reserved2;
	__le16 vsi_flags;
-#define AVF_AQ_VSI_TYPE_SHIFT	0x0
-#define AVF_AQ_VSI_TYPE_MASK	(0x3 << AVF_AQ_VSI_TYPE_SHIFT)
-#define AVF_AQ_VSI_TYPE_VF	0x0
-#define AVF_AQ_VSI_TYPE_VMDQ2	0x1
-#define AVF_AQ_VSI_TYPE_PF	0x2
-#define AVF_AQ_VSI_TYPE_EMP_MNG	0x3
-#define AVF_AQ_VSI_FLAG_CASCADED_PV	0x4
+#define IAVF_AQ_VSI_TYPE_SHIFT	0x0
+#define IAVF_AQ_VSI_TYPE_MASK	(0x3 << IAVF_AQ_VSI_TYPE_SHIFT)
+#define IAVF_AQ_VSI_TYPE_VF	0x0
+#define IAVF_AQ_VSI_TYPE_VMDQ2	0x1
+#define IAVF_AQ_VSI_TYPE_PF	0x2
+#define IAVF_AQ_VSI_TYPE_EMP_MNG	0x3
+#define IAVF_AQ_VSI_FLAG_CASCADED_PV	0x4
	__le32 addr_high;
	__le32 addr_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_get_update_vsi);

-struct avf_aqc_add_get_update_vsi_completion {
+struct iavf_aqc_add_get_update_vsi_completion {
	__le16 seid;
	__le16 vsi_number;
	__le16 vsi_used;
@@ -868,116 +868,116 @@ struct avf_aqc_add_get_update_vsi_completion {
	__le32 addr_low;
};

-AVF_CHECK_CMD_LENGTH(avf_aqc_add_get_update_vsi_completion);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_get_update_vsi_completion);

-struct avf_aqc_vsi_properties_data {
+struct iavf_aqc_vsi_properties_data {
	/* first 96 byte are written by SW */
	__le16 valid_sections;
-#define AVF_AQ_VSI_PROP_SWITCH_VALID	0x0001
-#define AVF_AQ_VSI_PROP_SECURITY_VALID	0x0002
-#define AVF_AQ_VSI_PROP_VLAN_VALID	0x0004
-#define AVF_AQ_VSI_PROP_CAS_PV_VALID	0x0008
-#define AVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
-#define AVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
-#define AVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
-#define AVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
-#define AVF_AQ_VSI_PROP_OUTER_UP_VALID	0x0100
-#define AVF_AQ_VSI_PROP_SCHED_VALID	0x0200
+#define IAVF_AQ_VSI_PROP_SWITCH_VALID	0x0001
+#define IAVF_AQ_VSI_PROP_SECURITY_VALID	0x0002
+#define IAVF_AQ_VSI_PROP_VLAN_VALID	0x0004
+#define IAVF_AQ_VSI_PROP_CAS_PV_VALID	0x0008
+#define IAVF_AQ_VSI_PROP_INGRESS_UP_VALID	0x0010
+#define IAVF_AQ_VSI_PROP_EGRESS_UP_VALID	0x0020
+#define IAVF_AQ_VSI_PROP_QUEUE_MAP_VALID	0x0040
+#define IAVF_AQ_VSI_PROP_QUEUE_OPT_VALID	0x0080
+#define IAVF_AQ_VSI_PROP_OUTER_UP_VALID	0x0100
+#define IAVF_AQ_VSI_PROP_SCHED_VALID	0x0200
	/* switch section */
	__le16 switch_id; /* 12bit id combined with flags below */
-#define AVF_AQ_VSI_SW_ID_SHIFT	0x0000
-#define AVF_AQ_VSI_SW_ID_MASK	(0xFFF << AVF_AQ_VSI_SW_ID_SHIFT)
-#define AVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
-#define AVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
-#define AVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
+#define IAVF_AQ_VSI_SW_ID_SHIFT	0x0000
+#define IAVF_AQ_VSI_SW_ID_MASK	(0xFFF << IAVF_AQ_VSI_SW_ID_SHIFT)
+#define IAVF_AQ_VSI_SW_ID_FLAG_NOT_STAG	0x1000
+#define IAVF_AQ_VSI_SW_ID_FLAG_ALLOW_LB	0x2000
+#define IAVF_AQ_VSI_SW_ID_FLAG_LOCAL_LB	0x4000
	u8 sw_reserved[2];
	/* security section */
	u8 sec_flags;
-#define AVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
-#define AVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
-#define AVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
+#define IAVF_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD	0x01
+#define IAVF_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK	0x02
+#define IAVF_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK	0x04
	u8 sec_reserved;
	/* VLAN section */
	__le16 pvid; /* VLANS include priority bits */
	__le16 fcoe_pvid;
	u8 port_vlan_flags;
-#define AVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
-#define AVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
-		AVF_AQ_VSI_PVLAN_MODE_SHIFT)
-#define AVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
-#define AVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
-#define AVF_AQ_VSI_PVLAN_MODE_ALL	0x03
-#define AVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
-#define AVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
-#define AVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
-		AVF_AQ_VSI_PVLAN_EMOD_SHIFT)
-#define AVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
-#define AVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
-#define AVF_AQ_VSI_PVLAN_EMOD_STR	0x10
-#define AVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
+#define IAVF_AQ_VSI_PVLAN_MODE_SHIFT	0x00
+#define IAVF_AQ_VSI_PVLAN_MODE_MASK	(0x03 << \
+		IAVF_AQ_VSI_PVLAN_MODE_SHIFT)
+#define IAVF_AQ_VSI_PVLAN_MODE_TAGGED	0x01
+#define IAVF_AQ_VSI_PVLAN_MODE_UNTAGGED	0x02
+#define IAVF_AQ_VSI_PVLAN_MODE_ALL	0x03
+#define IAVF_AQ_VSI_PVLAN_INSERT_PVID	0x04
+#define IAVF_AQ_VSI_PVLAN_EMOD_SHIFT	0x03
+#define IAVF_AQ_VSI_PVLAN_EMOD_MASK	(0x3 << \
+		IAVF_AQ_VSI_PVLAN_EMOD_SHIFT)
+#define IAVF_AQ_VSI_PVLAN_EMOD_STR_BOTH	0x0
+#define IAVF_AQ_VSI_PVLAN_EMOD_STR_UP	0x08
+#define IAVF_AQ_VSI_PVLAN_EMOD_STR	0x10
+#define IAVF_AQ_VSI_PVLAN_EMOD_NOTHING	0x18
	u8 pvlan_reserved[3];
	/* ingress egress up sections */
	__le32 ingress_table; /* bitmap, 3 bits per up */
-#define AVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
-#define AVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
-#define AVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
-#define AVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
-#define AVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
-#define AVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
-#define AVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
-#define AVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
-#define AVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
-#define AVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
-#define AVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
-#define AVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
-#define AVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
-#define AVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
-#define AVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
-#define AVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
-		AVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP0_SHIFT	0
+#define IAVF_AQ_VSI_UP_TABLE_UP0_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP0_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP1_SHIFT	3
+#define IAVF_AQ_VSI_UP_TABLE_UP1_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP1_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP2_SHIFT	6
+#define IAVF_AQ_VSI_UP_TABLE_UP2_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP2_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP3_SHIFT	9
+#define IAVF_AQ_VSI_UP_TABLE_UP3_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP3_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP4_SHIFT	12
+#define IAVF_AQ_VSI_UP_TABLE_UP4_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP4_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP5_SHIFT	15
+#define IAVF_AQ_VSI_UP_TABLE_UP5_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP5_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP6_SHIFT	18
+#define IAVF_AQ_VSI_UP_TABLE_UP6_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP6_SHIFT)
+#define IAVF_AQ_VSI_UP_TABLE_UP7_SHIFT	21
+#define IAVF_AQ_VSI_UP_TABLE_UP7_MASK	(0x7 << \
+		IAVF_AQ_VSI_UP_TABLE_UP7_SHIFT)
	__le32 egress_table;   /* same defines as for ingress table */
	/* cascaded PV section */
	__le16 cas_pv_tag;
	u8 cas_pv_flags;
-#define AVF_AQ_VSI_CAS_PV_TAGX_SHIFT	0x00
-#define AVF_AQ_VSI_CAS_PV_TAGX_MASK	(0x03 << \
-		AVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
-#define AVF_AQ_VSI_CAS_PV_TAGX_LEAVE	0x00
-#define AVF_AQ_VSI_CAS_PV_TAGX_REMOVE	0x01
-#define AVF_AQ_VSI_CAS_PV_TAGX_COPY	0x02
-#define AVF_AQ_VSI_CAS_PV_INSERT_TAG	0x10
-#define AVF_AQ_VSI_CAS_PV_ETAG_PRUNE	0x20
-#define AVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
+#define IAVF_AQ_VSI_CAS_PV_TAGX_SHIFT	0x00
+#define IAVF_AQ_VSI_CAS_PV_TAGX_MASK	(0x03 << \
+		IAVF_AQ_VSI_CAS_PV_TAGX_SHIFT)
+#define IAVF_AQ_VSI_CAS_PV_TAGX_LEAVE	0x00
+#define IAVF_AQ_VSI_CAS_PV_TAGX_REMOVE	0x01
+#define IAVF_AQ_VSI_CAS_PV_TAGX_COPY	0x02
+#define IAVF_AQ_VSI_CAS_PV_INSERT_TAG	0x10
+#define IAVF_AQ_VSI_CAS_PV_ETAG_PRUNE	0x20
+#define IAVF_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG	0x40
	u8 cas_pv_reserved;
	/* queue mapping section */
	__le16 mapping_flags;
-#define AVF_AQ_VSI_QUE_MAP_CONTIG	0x0
-#define AVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
+#define IAVF_AQ_VSI_QUE_MAP_CONTIG	0x0
+#define IAVF_AQ_VSI_QUE_MAP_NONCONTIG	0x1
	__le16 queue_mapping[16];
-#define AVF_AQ_VSI_QUEUE_SHIFT	0x0
-#define AVF_AQ_VSI_QUEUE_MASK	(0x7FF << AVF_AQ_VSI_QUEUE_SHIFT)
+#define IAVF_AQ_VSI_QUEUE_SHIFT	0x0
+#define IAVF_AQ_VSI_QUEUE_MASK	(0x7FF << IAVF_AQ_VSI_QUEUE_SHIFT)
	__le16 tc_mapping[8];
-#define AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
-#define AVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
-		AVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
-#define AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
-#define AVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
-		AVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
+#define IAVF_AQ_VSI_TC_QUE_OFFSET_SHIFT	0
+#define IAVF_AQ_VSI_TC_QUE_OFFSET_MASK	(0x1FF << \
+		IAVF_AQ_VSI_TC_QUE_OFFSET_SHIFT)
+#define IAVF_AQ_VSI_TC_QUE_NUMBER_SHIFT	9
+#define IAVF_AQ_VSI_TC_QUE_NUMBER_MASK	(0x7 << \
+		IAVF_AQ_VSI_TC_QUE_NUMBER_SHIFT)
	/* queueing option section */
	u8 queueing_opt_flags;
-#define AVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA	0x04
-#define AVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA 0x08 -#define AVF_AQ_VSI_QUE_OPT_TCP_ENA 0x10 -#define AVF_AQ_VSI_QUE_OPT_FCOE_ENA 0x20 -#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_PF 0x00 -#define AVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI 0x40 +#define IAVF_AQ_VSI_QUE_OPT_MULTICAST_UDP_ENA 0x04 +#define IAVF_AQ_VSI_QUE_OPT_UNICAST_UDP_ENA 0x08 +#define IAVF_AQ_VSI_QUE_OPT_TCP_ENA 0x10 +#define IAVF_AQ_VSI_QUE_OPT_FCOE_ENA 0x20 +#define IAVF_AQ_VSI_QUE_OPT_RSS_LUT_PF 0x00 +#define IAVF_AQ_VSI_QUE_OPT_RSS_LUT_VSI 0x40 u8 queueing_opt_reserved[3]; /* scheduler section */ u8 up_enable_bits; @@ -987,99 +987,99 @@ struct avf_aqc_vsi_properties_data { u8 cmd_reserved[8]; /* last 32 bytes are written by FW */ __le16 qs_handle[8]; -#define AVF_AQ_VSI_QS_HANDLE_INVALID 0xFFFF +#define IAVF_AQ_VSI_QS_HANDLE_INVALID 0xFFFF __le16 stat_counter_idx; __le16 sched_id; u8 resp_reserved[12]; }; -AVF_CHECK_STRUCT_LEN(128, avf_aqc_vsi_properties_data); +IAVF_CHECK_STRUCT_LEN(128, iavf_aqc_vsi_properties_data); /* Add Port Virtualizer (direct 0x0220) * also used for update PV (direct 0x0221) but only flags are used * (IS_CTRL_PORT only works on add PV) */ -struct avf_aqc_add_update_pv { +struct iavf_aqc_add_update_pv { __le16 command_flags; -#define AVF_AQC_PV_FLAG_PV_TYPE 0x1 -#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN 0x2 -#define AVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN 0x4 -#define AVF_AQC_PV_FLAG_IS_CTRL_PORT 0x8 +#define IAVF_AQC_PV_FLAG_PV_TYPE 0x1 +#define IAVF_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN 0x2 +#define IAVF_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN 0x4 +#define IAVF_AQC_PV_FLAG_IS_CTRL_PORT 0x8 __le16 uplink_seid; __le16 connected_seid; u8 reserved[10]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_update_pv); -struct avf_aqc_add_update_pv_completion { +struct iavf_aqc_add_update_pv_completion { /* reserved for update; for add also encodes error if rc == ENOSPC */ __le16 pv_seid; -#define AVF_AQC_PV_ERR_FLAG_NO_PV 0x1 -#define AVF_AQC_PV_ERR_FLAG_NO_SCHED 0x2 -#define 
AVF_AQC_PV_ERR_FLAG_NO_COUNTER 0x4 -#define AVF_AQC_PV_ERR_FLAG_NO_ENTRY 0x8 +#define IAVF_AQC_PV_ERR_FLAG_NO_PV 0x1 +#define IAVF_AQC_PV_ERR_FLAG_NO_SCHED 0x2 +#define IAVF_AQC_PV_ERR_FLAG_NO_COUNTER 0x4 +#define IAVF_AQC_PV_ERR_FLAG_NO_ENTRY 0x8 u8 reserved[14]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_update_pv_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_update_pv_completion); /* Get PV Params (direct 0x0222) - * uses avf_aqc_switch_seid for the descriptor + * uses iavf_aqc_switch_seid for the descriptor */ -struct avf_aqc_get_pv_params_completion { +struct iavf_aqc_get_pv_params_completion { __le16 seid; __le16 default_stag; __le16 pv_flags; /* same flags as add_pv */ -#define AVF_AQC_GET_PV_PV_TYPE 0x1 -#define AVF_AQC_GET_PV_FRWD_UNKNOWN_STAG 0x2 -#define AVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG 0x4 +#define IAVF_AQC_GET_PV_PV_TYPE 0x1 +#define IAVF_AQC_GET_PV_FRWD_UNKNOWN_STAG 0x2 +#define IAVF_AQC_GET_PV_FRWD_UNKNOWN_ETAG 0x4 u8 reserved[8]; __le16 default_port_seid; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_get_pv_params_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_pv_params_completion); /* Add VEB (direct 0x0230) */ -struct avf_aqc_add_veb { +struct iavf_aqc_add_veb { __le16 uplink_seid; __le16 downlink_seid; __le16 veb_flags; -#define AVF_AQC_ADD_VEB_FLOATING 0x1 -#define AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT 1 -#define AVF_AQC_ADD_VEB_PORT_TYPE_MASK (0x3 << \ - AVF_AQC_ADD_VEB_PORT_TYPE_SHIFT) -#define AVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT 0x2 -#define AVF_AQC_ADD_VEB_PORT_TYPE_DATA 0x4 -#define AVF_AQC_ADD_VEB_ENABLE_L2_FILTER 0x8 /* deprecated */ -#define AVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS 0x10 +#define IAVF_AQC_ADD_VEB_FLOATING 0x1 +#define IAVF_AQC_ADD_VEB_PORT_TYPE_SHIFT 1 +#define IAVF_AQC_ADD_VEB_PORT_TYPE_MASK (0x3 << \ + IAVF_AQC_ADD_VEB_PORT_TYPE_SHIFT) +#define IAVF_AQC_ADD_VEB_PORT_TYPE_DEFAULT 0x2 +#define IAVF_AQC_ADD_VEB_PORT_TYPE_DATA 0x4 +#define IAVF_AQC_ADD_VEB_ENABLE_L2_FILTER 0x8 /* deprecated */ +#define IAVF_AQC_ADD_VEB_ENABLE_DISABLE_STATS 0x10 u8 
enable_tcs; u8 reserved[9]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_veb); -struct avf_aqc_add_veb_completion { +struct iavf_aqc_add_veb_completion { u8 reserved[6]; __le16 switch_seid; /* also encodes error if rc == ENOSPC; codes are the same as add_pv */ __le16 veb_seid; -#define AVF_AQC_VEB_ERR_FLAG_NO_VEB 0x1 -#define AVF_AQC_VEB_ERR_FLAG_NO_SCHED 0x2 -#define AVF_AQC_VEB_ERR_FLAG_NO_COUNTER 0x4 -#define AVF_AQC_VEB_ERR_FLAG_NO_ENTRY 0x8 +#define IAVF_AQC_VEB_ERR_FLAG_NO_VEB 0x1 +#define IAVF_AQC_VEB_ERR_FLAG_NO_SCHED 0x2 +#define IAVF_AQC_VEB_ERR_FLAG_NO_COUNTER 0x4 +#define IAVF_AQC_VEB_ERR_FLAG_NO_ENTRY 0x8 __le16 statistic_index; __le16 vebs_used; __le16 vebs_free; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_veb_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_veb_completion); /* Get VEB Parameters (direct 0x0232) - * uses avf_aqc_switch_seid for the descriptor + * uses iavf_aqc_switch_seid for the descriptor */ -struct avf_aqc_get_veb_parameters_completion { +struct iavf_aqc_get_veb_parameters_completion { __le16 seid; __le16 switch_id; __le16 veb_flags; /* only the first/last flags from 0x0230 is valid */ @@ -1089,51 +1089,51 @@ struct avf_aqc_get_veb_parameters_completion { u8 reserved[4]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_get_veb_parameters_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_veb_parameters_completion); /* Delete Element (direct 0x0243) - * uses the generic avf_aqc_switch_seid + * uses the generic iavf_aqc_switch_seid */ /* Add MAC-VLAN (indirect 0x0250) */ /* used for the command for most vlan commands */ -struct avf_aqc_macvlan { +struct iavf_aqc_macvlan { __le16 num_addresses; __le16 seid[3]; -#define AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT 0 -#define AVF_AQC_MACVLAN_CMD_SEID_NUM_MASK (0x3FF << \ - AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) -#define AVF_AQC_MACVLAN_CMD_SEID_VALID 0x8000 +#define IAVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT 0 +#define IAVF_AQC_MACVLAN_CMD_SEID_NUM_MASK (0x3FF << \ + 
IAVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) +#define IAVF_AQC_MACVLAN_CMD_SEID_VALID 0x8000 __le32 addr_high; __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_macvlan); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_macvlan); /* indirect data for command and response */ -struct avf_aqc_add_macvlan_element_data { +struct iavf_aqc_add_macvlan_element_data { u8 mac_addr[6]; __le16 vlan_tag; __le16 flags; -#define AVF_AQC_MACVLAN_ADD_PERFECT_MATCH 0x0001 -#define AVF_AQC_MACVLAN_ADD_HASH_MATCH 0x0002 -#define AVF_AQC_MACVLAN_ADD_IGNORE_VLAN 0x0004 -#define AVF_AQC_MACVLAN_ADD_TO_QUEUE 0x0008 -#define AVF_AQC_MACVLAN_ADD_USE_SHARED_MAC 0x0010 +#define IAVF_AQC_MACVLAN_ADD_PERFECT_MATCH 0x0001 +#define IAVF_AQC_MACVLAN_ADD_HASH_MATCH 0x0002 +#define IAVF_AQC_MACVLAN_ADD_IGNORE_VLAN 0x0004 +#define IAVF_AQC_MACVLAN_ADD_TO_QUEUE 0x0008 +#define IAVF_AQC_MACVLAN_ADD_USE_SHARED_MAC 0x0010 __le16 queue_number; -#define AVF_AQC_MACVLAN_CMD_QUEUE_SHIFT 0 -#define AVF_AQC_MACVLAN_CMD_QUEUE_MASK (0x7FF << \ - AVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) +#define IAVF_AQC_MACVLAN_CMD_QUEUE_SHIFT 0 +#define IAVF_AQC_MACVLAN_CMD_QUEUE_MASK (0x7FF << \ + IAVF_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) /* response section */ u8 match_method; -#define AVF_AQC_MM_PERFECT_MATCH 0x01 -#define AVF_AQC_MM_HASH_MATCH 0x02 -#define AVF_AQC_MM_ERR_NO_RES 0xFF +#define IAVF_AQC_MM_PERFECT_MATCH 0x01 +#define IAVF_AQC_MM_HASH_MATCH 0x02 +#define IAVF_AQC_MM_ERR_NO_RES 0xFF u8 reserved1[3]; }; -struct avf_aqc_add_remove_macvlan_completion { +struct iavf_aqc_add_remove_macvlan_completion { __le16 perfect_mac_used; __le16 perfect_mac_free; __le16 unicast_hash_free; @@ -1142,64 +1142,64 @@ struct avf_aqc_add_remove_macvlan_completion { __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_macvlan_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_macvlan_completion); /* Remove MAC-VLAN (indirect 0x0251) - * uses avf_aqc_macvlan for the descriptor + * uses iavf_aqc_macvlan for the descriptor * data points to an array of 
num_addresses of elements */ -struct avf_aqc_remove_macvlan_element_data { +struct iavf_aqc_remove_macvlan_element_data { u8 mac_addr[6]; __le16 vlan_tag; u8 flags; -#define AVF_AQC_MACVLAN_DEL_PERFECT_MATCH 0x01 -#define AVF_AQC_MACVLAN_DEL_HASH_MATCH 0x02 -#define AVF_AQC_MACVLAN_DEL_IGNORE_VLAN 0x08 -#define AVF_AQC_MACVLAN_DEL_ALL_VSIS 0x10 +#define IAVF_AQC_MACVLAN_DEL_PERFECT_MATCH 0x01 +#define IAVF_AQC_MACVLAN_DEL_HASH_MATCH 0x02 +#define IAVF_AQC_MACVLAN_DEL_IGNORE_VLAN 0x08 +#define IAVF_AQC_MACVLAN_DEL_ALL_VSIS 0x10 u8 reserved[3]; /* reply section */ u8 error_code; -#define AVF_AQC_REMOVE_MACVLAN_SUCCESS 0x0 -#define AVF_AQC_REMOVE_MACVLAN_FAIL 0xFF +#define IAVF_AQC_REMOVE_MACVLAN_SUCCESS 0x0 +#define IAVF_AQC_REMOVE_MACVLAN_FAIL 0xFF u8 reply_reserved[3]; }; /* Add VLAN (indirect 0x0252) * Remove VLAN (indirect 0x0253) - * use the generic avf_aqc_macvlan for the command + * use the generic iavf_aqc_macvlan for the command */ -struct avf_aqc_add_remove_vlan_element_data { +struct iavf_aqc_add_remove_vlan_element_data { __le16 vlan_tag; u8 vlan_flags; /* flags for add VLAN */ -#define AVF_AQC_ADD_VLAN_LOCAL 0x1 -#define AVF_AQC_ADD_PVLAN_TYPE_SHIFT 1 -#define AVF_AQC_ADD_PVLAN_TYPE_MASK (0x3 << AVF_AQC_ADD_PVLAN_TYPE_SHIFT) -#define AVF_AQC_ADD_PVLAN_TYPE_REGULAR 0x0 -#define AVF_AQC_ADD_PVLAN_TYPE_PRIMARY 0x2 -#define AVF_AQC_ADD_PVLAN_TYPE_SECONDARY 0x4 -#define AVF_AQC_VLAN_PTYPE_SHIFT 3 -#define AVF_AQC_VLAN_PTYPE_MASK (0x3 << AVF_AQC_VLAN_PTYPE_SHIFT) -#define AVF_AQC_VLAN_PTYPE_REGULAR_VSI 0x0 -#define AVF_AQC_VLAN_PTYPE_PROMISC_VSI 0x8 -#define AVF_AQC_VLAN_PTYPE_COMMUNITY_VSI 0x10 -#define AVF_AQC_VLAN_PTYPE_ISOLATED_VSI 0x18 +#define IAVF_AQC_ADD_VLAN_LOCAL 0x1 +#define IAVF_AQC_ADD_PVLAN_TYPE_SHIFT 1 +#define IAVF_AQC_ADD_PVLAN_TYPE_MASK (0x3 << IAVF_AQC_ADD_PVLAN_TYPE_SHIFT) +#define IAVF_AQC_ADD_PVLAN_TYPE_REGULAR 0x0 +#define IAVF_AQC_ADD_PVLAN_TYPE_PRIMARY 0x2 +#define IAVF_AQC_ADD_PVLAN_TYPE_SECONDARY 0x4 +#define 
IAVF_AQC_VLAN_PTYPE_SHIFT 3 +#define IAVF_AQC_VLAN_PTYPE_MASK (0x3 << IAVF_AQC_VLAN_PTYPE_SHIFT) +#define IAVF_AQC_VLAN_PTYPE_REGULAR_VSI 0x0 +#define IAVF_AQC_VLAN_PTYPE_PROMISC_VSI 0x8 +#define IAVF_AQC_VLAN_PTYPE_COMMUNITY_VSI 0x10 +#define IAVF_AQC_VLAN_PTYPE_ISOLATED_VSI 0x18 /* flags for remove VLAN */ -#define AVF_AQC_REMOVE_VLAN_ALL 0x1 +#define IAVF_AQC_REMOVE_VLAN_ALL 0x1 u8 reserved; u8 result; /* flags for add VLAN */ -#define AVF_AQC_ADD_VLAN_SUCCESS 0x0 -#define AVF_AQC_ADD_VLAN_FAIL_REQUEST 0xFE -#define AVF_AQC_ADD_VLAN_FAIL_RESOURCE 0xFF +#define IAVF_AQC_ADD_VLAN_SUCCESS 0x0 +#define IAVF_AQC_ADD_VLAN_FAIL_REQUEST 0xFE +#define IAVF_AQC_ADD_VLAN_FAIL_RESOURCE 0xFF /* flags for remove VLAN */ -#define AVF_AQC_REMOVE_VLAN_SUCCESS 0x0 -#define AVF_AQC_REMOVE_VLAN_FAIL 0xFF +#define IAVF_AQC_REMOVE_VLAN_SUCCESS 0x0 +#define IAVF_AQC_REMOVE_VLAN_FAIL 0xFF u8 reserved1[3]; }; -struct avf_aqc_add_remove_vlan_completion { +struct iavf_aqc_add_remove_vlan_completion { u8 reserved[4]; __le16 vlans_used; __le16 vlans_free; @@ -1208,70 +1208,70 @@ struct avf_aqc_add_remove_vlan_completion { }; /* Set VSI Promiscuous Modes (direct 0x0254) */ -struct avf_aqc_set_vsi_promiscuous_modes { +struct iavf_aqc_set_vsi_promiscuous_modes { __le16 promiscuous_flags; __le16 valid_flags; /* flags used for both fields above */ -#define AVF_AQC_SET_VSI_PROMISC_UNICAST 0x01 -#define AVF_AQC_SET_VSI_PROMISC_MULTICAST 0x02 -#define AVF_AQC_SET_VSI_PROMISC_BROADCAST 0x04 -#define AVF_AQC_SET_VSI_DEFAULT 0x08 -#define AVF_AQC_SET_VSI_PROMISC_VLAN 0x10 -#define AVF_AQC_SET_VSI_PROMISC_TX 0x8000 +#define IAVF_AQC_SET_VSI_PROMISC_UNICAST 0x01 +#define IAVF_AQC_SET_VSI_PROMISC_MULTICAST 0x02 +#define IAVF_AQC_SET_VSI_PROMISC_BROADCAST 0x04 +#define IAVF_AQC_SET_VSI_DEFAULT 0x08 +#define IAVF_AQC_SET_VSI_PROMISC_VLAN 0x10 +#define IAVF_AQC_SET_VSI_PROMISC_TX 0x8000 __le16 seid; -#define AVF_AQC_VSI_PROM_CMD_SEID_MASK 0x3FF +#define IAVF_AQC_VSI_PROM_CMD_SEID_MASK 0x3FF __le16 vlan_tag; 
-#define AVF_AQC_SET_VSI_VLAN_MASK 0x0FFF -#define AVF_AQC_SET_VSI_VLAN_VALID 0x8000 +#define IAVF_AQC_SET_VSI_VLAN_MASK 0x0FFF +#define IAVF_AQC_SET_VSI_VLAN_VALID 0x8000 u8 reserved[8]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_set_vsi_promiscuous_modes); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_vsi_promiscuous_modes); /* Add S/E-tag command (direct 0x0255) - * Uses generic avf_aqc_add_remove_tag_completion for completion + * Uses generic iavf_aqc_add_remove_tag_completion for completion */ -struct avf_aqc_add_tag { +struct iavf_aqc_add_tag { __le16 flags; -#define AVF_AQC_ADD_TAG_FLAG_TO_QUEUE 0x0001 +#define IAVF_AQC_ADD_TAG_FLAG_TO_QUEUE 0x0001 __le16 seid; -#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT 0 -#define AVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK (0x3FF << \ - AVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT) +#define IAVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT 0 +#define IAVF_AQC_ADD_TAG_CMD_SEID_NUM_MASK (0x3FF << \ + IAVF_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT) __le16 tag; __le16 queue_number; u8 reserved[8]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_tag); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_tag); -struct avf_aqc_add_remove_tag_completion { +struct iavf_aqc_add_remove_tag_completion { u8 reserved[12]; __le16 tags_used; __le16 tags_free; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_tag_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_tag_completion); /* Remove S/E-tag command (direct 0x0256) - * Uses generic avf_aqc_add_remove_tag_completion for completion + * Uses generic iavf_aqc_add_remove_tag_completion for completion */ -struct avf_aqc_remove_tag { +struct iavf_aqc_remove_tag { __le16 seid; -#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT 0 -#define AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ - AVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT) +#define IAVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT 0 +#define IAVF_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ + IAVF_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT) __le16 tag; u8 reserved[12]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_remove_tag); 
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_remove_tag); /* Add multicast E-Tag (direct 0x0257) * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields * and no external data */ -struct avf_aqc_add_remove_mcast_etag { +struct iavf_aqc_add_remove_mcast_etag { __le16 pv_seid; __le16 etag; u8 num_unicast_etags; @@ -1280,9 +1280,9 @@ struct avf_aqc_add_remove_mcast_etag { __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_mcast_etag); -struct avf_aqc_add_remove_mcast_etag_completion { +struct iavf_aqc_add_remove_mcast_etag_completion { u8 reserved[4]; __le16 mcast_etags_used; __le16 mcast_etags_free; @@ -1291,54 +1291,54 @@ struct avf_aqc_add_remove_mcast_etag_completion { }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_mcast_etag_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_mcast_etag_completion); /* Update S/E-Tag (direct 0x0259) */ -struct avf_aqc_update_tag { +struct iavf_aqc_update_tag { __le16 seid; -#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT 0 -#define AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ - AVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT) +#define IAVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT 0 +#define IAVF_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ + IAVF_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT) __le16 old_tag; __le16 new_tag; u8 reserved[10]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_update_tag); -struct avf_aqc_update_tag_completion { +struct iavf_aqc_update_tag_completion { u8 reserved[12]; __le16 tags_used; __le16 tags_free; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_update_tag_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_update_tag_completion); /* Add Control Packet filter (direct 0x025A) * Remove Control Packet filter (direct 0x025B) - * uses the avf_aqc_add_oveb_cloud, + * uses the iavf_aqc_add_oveb_cloud, * and the generic direct completion structure */ -struct avf_aqc_add_remove_control_packet_filter { +struct iavf_aqc_add_remove_control_packet_filter { u8 
mac[6]; __le16 etype; __le16 flags; -#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC 0x0001 -#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP 0x0002 -#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE 0x0004 -#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX 0x0008 -#define AVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX 0x0000 +#define IAVF_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC 0x0001 +#define IAVF_AQC_ADD_CONTROL_PACKET_FLAGS_DROP 0x0002 +#define IAVF_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE 0x0004 +#define IAVF_AQC_ADD_CONTROL_PACKET_FLAGS_TX 0x0008 +#define IAVF_AQC_ADD_CONTROL_PACKET_FLAGS_RX 0x0000 __le16 seid; -#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT 0 -#define AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK (0x3FF << \ - AVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT) +#define IAVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT 0 +#define IAVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK (0x3FF << \ + IAVF_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT) __le16 queue; u8 reserved[2]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_control_packet_filter); -struct avf_aqc_add_remove_control_packet_filter_completion { +struct iavf_aqc_add_remove_control_packet_filter_completion { __le16 mac_etype_used; __le16 etype_used; __le16 mac_etype_free; @@ -1346,30 +1346,30 @@ struct avf_aqc_add_remove_control_packet_filter_completion { u8 reserved[8]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_control_packet_filter_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_control_packet_filter_completion); /* Add Cloud filters (indirect 0x025C) * Remove Cloud filters (indirect 0x025D) - * uses the avf_aqc_add_remove_cloud_filters, + * uses the iavf_aqc_add_remove_cloud_filters, * and the generic indirect completion structure */ -struct avf_aqc_add_remove_cloud_filters { +struct iavf_aqc_add_remove_cloud_filters { u8 num_filters; u8 reserved; __le16 seid; -#define AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT 0 -#define 
AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK (0x3FF << \ - AVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT) +#define IAVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT 0 +#define IAVF_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK (0x3FF << \ + IAVF_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT) u8 big_buffer_flag; -#define AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER 1 +#define IAVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER 1 u8 reserved2[3]; __le32 addr_high; __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_remove_cloud_filters); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_remove_cloud_filters); -struct avf_aqc_add_remove_cloud_filters_element_data { +struct iavf_aqc_add_remove_cloud_filters_element_data { u8 outer_mac[6]; u8 inner_mac[6]; __le16 inner_vlan; @@ -1383,97 +1383,97 @@ struct avf_aqc_add_remove_cloud_filters_element_data { } v6; } ipaddr; __le16 flags; -#define AVF_AQC_ADD_CLOUD_FILTER_SHIFT 0 -#define AVF_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \ - AVF_AQC_ADD_CLOUD_FILTER_SHIFT) +#define IAVF_AQC_ADD_CLOUD_FILTER_SHIFT 0 +#define IAVF_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \ + IAVF_AQC_ADD_CLOUD_FILTER_SHIFT) /* 0x0000 reserved */ -#define AVF_AQC_ADD_CLOUD_FILTER_OIP 0x0001 +#define IAVF_AQC_ADD_CLOUD_FILTER_OIP 0x0001 /* 0x0002 reserved */ -#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN 0x0003 -#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID 0x0004 +#define IAVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN 0x0003 +#define IAVF_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID 0x0004 /* 0x0005 reserved */ -#define AVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID 0x0006 +#define IAVF_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID 0x0006 /* 0x0007 reserved */ /* 0x0008 reserved */ -#define AVF_AQC_ADD_CLOUD_FILTER_OMAC 0x0009 -#define AVF_AQC_ADD_CLOUD_FILTER_IMAC 0x000A -#define AVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC 0x000B -#define AVF_AQC_ADD_CLOUD_FILTER_IIP 0x000C +#define IAVF_AQC_ADD_CLOUD_FILTER_OMAC 0x0009 +#define IAVF_AQC_ADD_CLOUD_FILTER_IMAC 0x000A +#define IAVF_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC 0x000B +#define IAVF_AQC_ADD_CLOUD_FILTER_IIP 0x000C /* 0x0010 to 0x0017 
is for custom filters */ -#define AVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE 0x0080 -#define AVF_AQC_ADD_CLOUD_VNK_SHIFT 6 -#define AVF_AQC_ADD_CLOUD_VNK_MASK 0x00C0 -#define AVF_AQC_ADD_CLOUD_FLAGS_IPV4 0 -#define AVF_AQC_ADD_CLOUD_FLAGS_IPV6 0x0100 - -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT 9 -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_MASK 0x1E00 -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN 0 -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC 1 -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE 2 -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_IP 3 -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED 4 -#define AVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE 5 - -#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC 0x2000 -#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC 0x4000 -#define AVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP 0x8000 +#define IAVF_AQC_ADD_CLOUD_FLAGS_TO_QUEUE 0x0080 +#define IAVF_AQC_ADD_CLOUD_VNK_SHIFT 6 +#define IAVF_AQC_ADD_CLOUD_VNK_MASK 0x00C0 +#define IAVF_AQC_ADD_CLOUD_FLAGS_IPV4 0 +#define IAVF_AQC_ADD_CLOUD_FLAGS_IPV6 0x0100 + +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_SHIFT 9 +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_MASK 0x1E00 +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN 0 +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC 1 +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_GENEVE 2 +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_IP 3 +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_RESERVED 4 +#define IAVF_AQC_ADD_CLOUD_TNL_TYPE_VXLAN_GPE 5 + +#define IAVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_MAC 0x2000 +#define IAVF_AQC_ADD_CLOUD_FLAGS_SHARED_INNER_MAC 0x4000 +#define IAVF_AQC_ADD_CLOUD_FLAGS_SHARED_OUTER_IP 0x8000 __le32 tenant_id; u8 reserved[4]; __le16 queue_number; -#define AVF_AQC_ADD_CLOUD_QUEUE_SHIFT 0 -#define AVF_AQC_ADD_CLOUD_QUEUE_MASK (0x7FF << \ - AVF_AQC_ADD_CLOUD_QUEUE_SHIFT) +#define IAVF_AQC_ADD_CLOUD_QUEUE_SHIFT 0 +#define IAVF_AQC_ADD_CLOUD_QUEUE_MASK (0x7FF << \ + IAVF_AQC_ADD_CLOUD_QUEUE_SHIFT) u8 reserved2[14]; /* response section */ u8 allocation_result; -#define AVF_AQC_ADD_CLOUD_FILTER_SUCCESS 0x0 -#define 
AVF_AQC_ADD_CLOUD_FILTER_FAIL 0xFF +#define IAVF_AQC_ADD_CLOUD_FILTER_SUCCESS 0x0 +#define IAVF_AQC_ADD_CLOUD_FILTER_FAIL 0xFF u8 response_reserved[7]; }; -/* avf_aqc_add_rm_cloud_filt_elem_ext is used when - * AVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. +/* iavf_aqc_add_rm_cloud_filt_elem_ext is used when + * IAVF_AQC_ADD_REM_CLOUD_CMD_BIG_BUFFER flag is set. */ -struct avf_aqc_add_rm_cloud_filt_elem_ext { - struct avf_aqc_add_remove_cloud_filters_element_data element; +struct iavf_aqc_add_rm_cloud_filt_elem_ext { + struct iavf_aqc_add_remove_cloud_filters_element_data element; u16 general_fields[32]; -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0 0 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1 1 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2 2 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0 3 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1 4 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2 5 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0 6 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1 7 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2 8 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0 9 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1 10 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2 11 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0 12 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1 13 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2 14 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0 15 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1 16 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2 17 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3 18 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4 19 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5 20 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6 21 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7 22 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0 23 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1 24 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2 25 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3 26 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4 27 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5 28 
-#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6 29 -#define AVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7 30 -}; - -struct avf_aqc_remove_cloud_filters_completion { +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD0 0 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD1 1 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X10_WORD2 2 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD0 3 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD1 4 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X11_WORD2 5 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD0 6 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD1 7 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X12_WORD2 8 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD0 9 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD1 10 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X13_WORD2 11 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD0 12 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD1 13 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X14_WORD2 14 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD0 15 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD1 16 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD2 17 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD3 18 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD4 19 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD5 20 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD6 21 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X16_WORD7 22 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD0 23 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD1 24 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD2 25 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD3 26 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD4 27 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD5 28 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD6 29 +#define IAVF_AQC_ADD_CLOUD_FV_FLU_0X17_WORD7 30 +}; + +struct iavf_aqc_remove_cloud_filters_completion { __le16 perfect_ovlan_used; __le16 perfect_ovlan_free; __le16 vlan_used; @@ -1482,24 +1482,24 @@ struct avf_aqc_remove_cloud_filters_completion { __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_remove_cloud_filters_completion); 
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_remove_cloud_filters_completion); /* Replace filter Command 0x025F - * uses the avf_aqc_replace_cloud_filters, + * uses the iavf_aqc_replace_cloud_filters, * and the generic indirect completion structure */ -struct avf_filter_data { +struct iavf_filter_data { u8 filter_type; u8 input[3]; }; -struct avf_aqc_replace_cloud_filters_cmd { +struct iavf_aqc_replace_cloud_filters_cmd { u8 valid_flags; -#define AVF_AQC_REPLACE_L1_FILTER 0x0 -#define AVF_AQC_REPLACE_CLOUD_FILTER 0x1 -#define AVF_AQC_GET_CLOUD_FILTERS 0x2 -#define AVF_AQC_MIRROR_CLOUD_FILTER 0x4 -#define AVF_AQC_HIGH_PRIORITY_CLOUD_FILTER 0x8 +#define IAVF_AQC_REPLACE_L1_FILTER 0x0 +#define IAVF_AQC_REPLACE_CLOUD_FILTER 0x1 +#define IAVF_AQC_GET_CLOUD_FILTERS 0x2 +#define IAVF_AQC_MIRROR_CLOUD_FILTER 0x4 +#define IAVF_AQC_HIGH_PRIORITY_CLOUD_FILTER 0x8 u8 old_filter_type; u8 new_filter_type; u8 tr_bit; @@ -1508,28 +1508,28 @@ struct avf_aqc_replace_cloud_filters_cmd { __le32 addr_low; }; -struct avf_aqc_replace_cloud_filters_cmd_buf { +struct iavf_aqc_replace_cloud_filters_cmd_buf { u8 data[32]; /* Filter type INPUT codes*/ -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX 3 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED (1 << 7UL) +#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_ENTRIES_MAX 3 +#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_VALIDATED (1 << 7UL) /* Field Vector offsets */ -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA 0 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH 6 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG 7 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN 8 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN 9 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN 10 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY 11 -#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC 12 +#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_MAC_DA 0 +#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_ETH 6 +#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG 7 +#define 
IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_VLAN 8
+#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_OVLAN 9
+#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_STAG_IVLAN 10
+#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_TUNNLE_KEY 11
+#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IMAC 12
 /* big FLU */
-#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA 14
+#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_IP_DA 14
 /* big FLU */
-#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA 15
+#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_OIP_DA 15
-#define AVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN 37
-	struct avf_filter_data filters[8];
+#define IAVF_AQC_REPLACE_CLOUD_CMD_INPUT_FV_INNER_VLAN 37
+	struct iavf_filter_data filters[8];
 };
 
 /* Add Mirror Rule (indirect or direct 0x0260)
@@ -1537,26 +1537,26 @@ struct avf_aqc_replace_cloud_filters_cmd_buf {
  * note: some rule types (4,5) do not use an external buffer.
  * take care to set the flags correctly.
  */
-struct avf_aqc_add_delete_mirror_rule {
+struct iavf_aqc_add_delete_mirror_rule {
 	__le16 seid;
 	__le16 rule_type;
-#define AVF_AQC_MIRROR_RULE_TYPE_SHIFT 0
-#define AVF_AQC_MIRROR_RULE_TYPE_MASK (0x7 << \
-	AVF_AQC_MIRROR_RULE_TYPE_SHIFT)
-#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS 1
-#define AVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS 2
-#define AVF_AQC_MIRROR_RULE_TYPE_VLAN 3
-#define AVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS 4
-#define AVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS 5
+#define IAVF_AQC_MIRROR_RULE_TYPE_SHIFT 0
+#define IAVF_AQC_MIRROR_RULE_TYPE_MASK (0x7 << \
+	IAVF_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define IAVF_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS 1
+#define IAVF_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS 2
+#define IAVF_AQC_MIRROR_RULE_TYPE_VLAN 3
+#define IAVF_AQC_MIRROR_RULE_TYPE_ALL_INGRESS 4
+#define IAVF_AQC_MIRROR_RULE_TYPE_ALL_EGRESS 5
 	__le16 num_entries;
 	__le16 destination; /* VSI for add, rule id for delete */
 	__le32 addr_high; /* address of array of 2-byte VSI or VLAN ids */
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_delete_mirror_rule);
 
-struct avf_aqc_add_delete_mirror_rule_completion {
+struct iavf_aqc_add_delete_mirror_rule_completion {
 	u8 reserved[2];
 	__le16 rule_id; /* only used on add */
 	__le16 mirror_rules_used;
@@ -1565,10 +1565,10 @@ struct avf_aqc_add_delete_mirror_rule_completion {
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_add_delete_mirror_rule_completion);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_delete_mirror_rule_completion);
 
 /* Dynamic Device Personalization */
-struct avf_aqc_write_personalization_profile {
+struct iavf_aqc_write_personalization_profile {
 	u8 flags;
 	u8 reserved[3];
 	__le32 profile_track_id;
@@ -1576,43 +1576,43 @@ struct avf_aqc_write_personalization_profile {
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_write_personalization_profile);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_write_personalization_profile);
 
-struct avf_aqc_write_ddp_resp {
+struct iavf_aqc_write_ddp_resp {
 	__le32 error_offset;
 	__le32 error_info;
 	__le32 addr_high;
 	__le32 addr_low;
 };
 
-struct avf_aqc_get_applied_profiles {
+struct iavf_aqc_get_applied_profiles {
 	u8 flags;
-#define AVF_AQC_GET_DDP_GET_CONF 0x1
-#define AVF_AQC_GET_DDP_GET_RDPU_CONF 0x2
+#define IAVF_AQC_GET_DDP_GET_CONF 0x1
+#define IAVF_AQC_GET_DDP_GET_RDPU_CONF 0x2
 	u8 rsv[3];
 	__le32 reserved;
 	__le32 addr_high;
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_get_applied_profiles);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_applied_profiles);
 
 /* DCB 0x03xx*/
 
 /* PFC Ignore (direct 0x0301)
  * the command and response use the same descriptor structure
  */
-struct avf_aqc_pfc_ignore {
+struct iavf_aqc_pfc_ignore {
 	u8 tc_bitmap;
 	u8 command_flags; /* unused on response */
-#define AVF_AQC_PFC_IGNORE_SET 0x80
-#define AVF_AQC_PFC_IGNORE_CLEAR 0x0
+#define IAVF_AQC_PFC_IGNORE_SET 0x80
+#define IAVF_AQC_PFC_IGNORE_CLEAR 0x0
 	u8 reserved[14];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_pfc_ignore);
 
-/* DCB Update (direct 0x0302) uses the avf_aq_desc structure
+/* DCB Update (direct 0x0302) uses the iavf_aq_desc structure
  * with no parameters
  */
@@ -1621,22 +1621,22 @@ AVF_CHECK_CMD_LENGTH(avf_aqc_pfc_ignore);
 /* Almost all the indirect commands use
  * this generic struct to pass the SEID in param0
  */
-struct avf_aqc_tx_sched_ind {
+struct iavf_aqc_tx_sched_ind {
 	__le16 vsi_seid;
 	u8 reserved[6];
 	__le32 addr_high;
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_tx_sched_ind);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_tx_sched_ind);
 
 /* Several commands respond with a set of queue set handles */
-struct avf_aqc_qs_handles_resp {
+struct iavf_aqc_qs_handles_resp {
 	__le16 qs_handles[8];
 };
 
 /* Configure VSI BW limits (direct 0x0400) */
-struct avf_aqc_configure_vsi_bw_limit {
+struct iavf_aqc_configure_vsi_bw_limit {
 	__le16 vsi_seid;
 	u8 reserved[2];
 	__le16 credit;
@@ -1645,12 +1645,12 @@ struct avf_aqc_configure_vsi_bw_limit {
 	u8 reserved2[7];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_configure_vsi_bw_limit);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_configure_vsi_bw_limit);
 
 /* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
- * responds with avf_aqc_qs_handles_resp
+ * responds with iavf_aqc_qs_handles_resp
  */
-struct avf_aqc_configure_vsi_ets_sla_bw_data {
+struct iavf_aqc_configure_vsi_ets_sla_bw_data {
 	u8 tc_valid_bits;
 	u8 reserved[15];
 	__le16 tc_bw_credits[8]; /* FW writesback QS handles here */
@@ -1660,12 +1660,12 @@ struct avf_aqc_configure_vsi_ets_sla_bw_data {
 	u8 reserved1[28];
 };
 
-AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_configure_vsi_ets_sla_bw_data);
+IAVF_CHECK_STRUCT_LEN(0x40, iavf_aqc_configure_vsi_ets_sla_bw_data);
 
 /* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
- * responds with avf_aqc_qs_handles_resp
+ * responds with iavf_aqc_qs_handles_resp
  */
-struct avf_aqc_configure_vsi_tc_bw_data {
+struct iavf_aqc_configure_vsi_tc_bw_data {
 	u8 tc_valid_bits;
 	u8 reserved[3];
 	u8 tc_bw_credits[8];
@@ -1673,10 +1673,10 @@ struct avf_aqc_configure_vsi_tc_bw_data {
 	__le16 qs_handles[8];
 };
 
-AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_vsi_tc_bw_data);
+IAVF_CHECK_STRUCT_LEN(0x20, iavf_aqc_configure_vsi_tc_bw_data);
 
 /* Query vsi bw configuration (indirect 0x0408) */
-struct avf_aqc_query_vsi_bw_config_resp {
+struct iavf_aqc_query_vsi_bw_config_resp {
 	u8 tc_valid_bits;
 	u8 tc_suspended_bits;
 	u8 reserved[14];
@@ -1688,10 +1688,10 @@ struct avf_aqc_query_vsi_bw_config_resp {
 	u8 reserved3[23];
 };
 
-AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_vsi_bw_config_resp);
+IAVF_CHECK_STRUCT_LEN(0x40, iavf_aqc_query_vsi_bw_config_resp);
 
 /* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
-struct avf_aqc_query_vsi_ets_sla_config_resp {
+struct iavf_aqc_query_vsi_ets_sla_config_resp {
 	u8 tc_valid_bits;
 	u8 reserved[3];
 	u8 share_credits[8];
@@ -1701,10 +1701,10 @@ struct avf_aqc_query_vsi_ets_sla_config_resp {
 	__le16 tc_bw_max[2];
 };
 
-AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_vsi_ets_sla_config_resp);
+IAVF_CHECK_STRUCT_LEN(0x20, iavf_aqc_query_vsi_ets_sla_config_resp);
 
 /* Configure Switching Component Bandwidth Limit (direct 0x0410) */
-struct avf_aqc_configure_switching_comp_bw_limit {
+struct iavf_aqc_configure_switching_comp_bw_limit {
 	__le16 seid;
 	u8 reserved[2];
 	__le16 credit;
@@ -1713,27 +1713,27 @@ struct avf_aqc_configure_switching_comp_bw_limit {
 	u8 reserved2[7];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_configure_switching_comp_bw_limit);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_configure_switching_comp_bw_limit);
 
 /* Enable Physical Port ETS (indirect 0x0413)
  * Modify Physical Port ETS (indirect 0x0414)
  * Disable Physical Port ETS (indirect 0x0415)
  */
-struct avf_aqc_configure_switching_comp_ets_data {
+struct iavf_aqc_configure_switching_comp_ets_data {
 	u8 reserved[4];
 	u8 tc_valid_bits;
 	u8 seepage;
-#define AVF_AQ_ETS_SEEPAGE_EN_MASK 0x1
+#define IAVF_AQ_ETS_SEEPAGE_EN_MASK 0x1
 	u8 tc_strict_priority_flags;
 	u8 reserved1[17];
 	u8 tc_bw_share_credits[8];
 	u8 reserved2[96];
 };
 
-AVF_CHECK_STRUCT_LEN(0x80, avf_aqc_configure_switching_comp_ets_data);
+IAVF_CHECK_STRUCT_LEN(0x80, iavf_aqc_configure_switching_comp_ets_data);
 
 /* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
-struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
+struct iavf_aqc_configure_switching_comp_ets_bw_limit_data {
 	u8 tc_valid_bits;
 	u8 reserved[15];
 	__le16 tc_bw_credit[8];
@@ -1743,13 +1743,13 @@ struct avf_aqc_configure_switching_comp_ets_bw_limit_data {
 	u8 reserved1[28];
 };
 
-AVF_CHECK_STRUCT_LEN(0x40,
-	avf_aqc_configure_switching_comp_ets_bw_limit_data);
+IAVF_CHECK_STRUCT_LEN(0x40,
+	iavf_aqc_configure_switching_comp_ets_bw_limit_data);
 
 /* Configure Switching Component Bandwidth Allocation per Tc
  * (indirect 0x0417)
  */
-struct avf_aqc_configure_switching_comp_bw_config_data {
+struct iavf_aqc_configure_switching_comp_bw_config_data {
 	u8 tc_valid_bits;
 	u8 reserved[2];
 	u8 absolute_credits; /* bool */
@@ -1757,10 +1757,10 @@ struct avf_aqc_configure_switching_comp_bw_config_data {
 	u8 reserved1[20];
 };
 
-AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_configure_switching_comp_bw_config_data);
+IAVF_CHECK_STRUCT_LEN(0x20, iavf_aqc_configure_switching_comp_bw_config_data);
 
 /* Query Switching Component Configuration (indirect 0x0418) */
-struct avf_aqc_query_switching_comp_ets_config_resp {
+struct iavf_aqc_query_switching_comp_ets_config_resp {
 	u8 tc_valid_bits;
 	u8 reserved[35];
 	__le16 port_bw_limit;
@@ -1769,10 +1769,10 @@ struct avf_aqc_query_switching_comp_ets_config_resp {
 	u8 reserved2[23];
 };
 
-AVF_CHECK_STRUCT_LEN(0x40, avf_aqc_query_switching_comp_ets_config_resp);
+IAVF_CHECK_STRUCT_LEN(0x40, iavf_aqc_query_switching_comp_ets_config_resp);
 
 /* Query PhysicalPort ETS Configuration (indirect 0x0419) */
-struct avf_aqc_query_port_ets_config_resp {
+struct iavf_aqc_query_port_ets_config_resp {
 	u8 reserved[4];
 	u8 tc_valid_bits;
 	u8 reserved1;
@@ -1786,12 +1786,12 @@ struct avf_aqc_query_port_ets_config_resp {
 	u8 reserved3[32];
 };
 
-AVF_CHECK_STRUCT_LEN(0x44, avf_aqc_query_port_ets_config_resp);
+IAVF_CHECK_STRUCT_LEN(0x44, iavf_aqc_query_port_ets_config_resp);
 
 /* Query Switching Component Bandwidth Allocation per Traffic Type
  * (indirect 0x041A)
  */
-struct avf_aqc_query_switching_comp_bw_config_resp {
+struct iavf_aqc_query_switching_comp_bw_config_resp {
 	u8 tc_valid_bits;
 	u8 reserved[2];
 	u8 absolute_credits_enable; /* bool */
@@ -1802,7 +1802,7 @@ struct avf_aqc_query_switching_comp_bw_config_resp {
 	__le16 tc_bw_max[2];
 };
 
-AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
+IAVF_CHECK_STRUCT_LEN(0x20, iavf_aqc_query_switching_comp_bw_config_resp);
 
 /* Suspend/resume port TX traffic
  * (direct 0x041B and 0x041C) uses the generic SEID struct
 */
@@ -1811,99 +1811,99 @@ AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_query_switching_comp_bw_config_resp);
 /* Configure partition BW
  * (indirect 0x041D)
  */
-struct avf_aqc_configure_partition_bw_data {
+struct iavf_aqc_configure_partition_bw_data {
 	__le16 pf_valid_bits;
 	u8 min_bw[16]; /* guaranteed bandwidth */
 	u8 max_bw[16]; /* bandwidth limit */
 };
 
-AVF_CHECK_STRUCT_LEN(0x22, avf_aqc_configure_partition_bw_data);
+IAVF_CHECK_STRUCT_LEN(0x22, iavf_aqc_configure_partition_bw_data);
 
 /* Get and set the active HMC resource profile and status.
  * (direct 0x0500) and (direct 0x0501)
  */
-struct avf_aq_get_set_hmc_resource_profile {
+struct iavf_aq_get_set_hmc_resource_profile {
 	u8 pm_profile;
 	u8 pe_vf_enabled;
 	u8 reserved[14];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aq_get_set_hmc_resource_profile);
+IAVF_CHECK_CMD_LENGTH(iavf_aq_get_set_hmc_resource_profile);
 
-enum avf_aq_hmc_profile {
-	/* AVF_HMC_PROFILE_NO_CHANGE = 0, reserved */
-	AVF_HMC_PROFILE_DEFAULT = 1,
-	AVF_HMC_PROFILE_FAVOR_VF = 2,
-	AVF_HMC_PROFILE_EQUAL = 3,
+enum iavf_aq_hmc_profile {
+	/* IAVF_HMC_PROFILE_NO_CHANGE = 0, reserved */
+	IAVF_HMC_PROFILE_DEFAULT = 1,
+	IAVF_HMC_PROFILE_FAVOR_VF = 2,
+	IAVF_HMC_PROFILE_EQUAL = 3,
 };
 
 /* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
 
 /* set in param0 for get phy abilities to report qualified modules */
-#define AVF_AQ_PHY_REPORT_QUALIFIED_MODULES 0x0001
-#define AVF_AQ_PHY_REPORT_INITIAL_VALUES 0x0002
-
-enum avf_aq_phy_type {
-	AVF_PHY_TYPE_SGMII = 0x0,
-	AVF_PHY_TYPE_1000BASE_KX = 0x1,
-	AVF_PHY_TYPE_10GBASE_KX4 = 0x2,
-	AVF_PHY_TYPE_10GBASE_KR = 0x3,
-	AVF_PHY_TYPE_40GBASE_KR4 = 0x4,
-	AVF_PHY_TYPE_XAUI = 0x5,
-	AVF_PHY_TYPE_XFI = 0x6,
-	AVF_PHY_TYPE_SFI = 0x7,
-	AVF_PHY_TYPE_XLAUI = 0x8,
-	AVF_PHY_TYPE_XLPPI = 0x9,
-	AVF_PHY_TYPE_40GBASE_CR4_CU = 0xA,
-	AVF_PHY_TYPE_10GBASE_CR1_CU = 0xB,
-	AVF_PHY_TYPE_10GBASE_AOC = 0xC,
-	AVF_PHY_TYPE_40GBASE_AOC = 0xD,
-	AVF_PHY_TYPE_UNRECOGNIZED = 0xE,
-	AVF_PHY_TYPE_UNSUPPORTED = 0xF,
-	AVF_PHY_TYPE_100BASE_TX = 0x11,
-	AVF_PHY_TYPE_1000BASE_T = 0x12,
-	AVF_PHY_TYPE_10GBASE_T = 0x13,
-	AVF_PHY_TYPE_10GBASE_SR = 0x14,
-	AVF_PHY_TYPE_10GBASE_LR = 0x15,
-	AVF_PHY_TYPE_10GBASE_SFPP_CU = 0x16,
-	AVF_PHY_TYPE_10GBASE_CR1 = 0x17,
-	AVF_PHY_TYPE_40GBASE_CR4 = 0x18,
-	AVF_PHY_TYPE_40GBASE_SR4 = 0x19,
-	AVF_PHY_TYPE_40GBASE_LR4 = 0x1A,
-	AVF_PHY_TYPE_1000BASE_SX = 0x1B,
-	AVF_PHY_TYPE_1000BASE_LX = 0x1C,
-	AVF_PHY_TYPE_1000BASE_T_OPTICAL = 0x1D,
-	AVF_PHY_TYPE_20GBASE_KR2 = 0x1E,
-	AVF_PHY_TYPE_25GBASE_KR = 0x1F,
-	AVF_PHY_TYPE_25GBASE_CR = 0x20,
-	AVF_PHY_TYPE_25GBASE_SR = 0x21,
-	AVF_PHY_TYPE_25GBASE_LR = 0x22,
-	AVF_PHY_TYPE_25GBASE_AOC = 0x23,
-	AVF_PHY_TYPE_25GBASE_ACC = 0x24,
-	AVF_PHY_TYPE_MAX,
-	AVF_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP = 0xFD,
-	AVF_PHY_TYPE_EMPTY = 0xFE,
-	AVF_PHY_TYPE_DEFAULT = 0xFF,
-};
-
-#define AVF_LINK_SPEED_100MB_SHIFT 0x1
-#define AVF_LINK_SPEED_1000MB_SHIFT 0x2
-#define AVF_LINK_SPEED_10GB_SHIFT 0x3
-#define AVF_LINK_SPEED_40GB_SHIFT 0x4
-#define AVF_LINK_SPEED_20GB_SHIFT 0x5
-#define AVF_LINK_SPEED_25GB_SHIFT 0x6
-
-enum avf_aq_link_speed {
-	AVF_LINK_SPEED_UNKNOWN = 0,
-	AVF_LINK_SPEED_100MB = (1 << AVF_LINK_SPEED_100MB_SHIFT),
-	AVF_LINK_SPEED_1GB = (1 << AVF_LINK_SPEED_1000MB_SHIFT),
-	AVF_LINK_SPEED_10GB = (1 << AVF_LINK_SPEED_10GB_SHIFT),
-	AVF_LINK_SPEED_40GB = (1 << AVF_LINK_SPEED_40GB_SHIFT),
-	AVF_LINK_SPEED_20GB = (1 << AVF_LINK_SPEED_20GB_SHIFT),
-	AVF_LINK_SPEED_25GB = (1 << AVF_LINK_SPEED_25GB_SHIFT),
-};
-
-struct avf_aqc_module_desc {
+#define IAVF_AQ_PHY_REPORT_QUALIFIED_MODULES 0x0001
+#define IAVF_AQ_PHY_REPORT_INITIAL_VALUES 0x0002
+
+enum iavf_aq_phy_type {
+	IAVF_PHY_TYPE_SGMII = 0x0,
+	IAVF_PHY_TYPE_1000BASE_KX = 0x1,
+	IAVF_PHY_TYPE_10GBASE_KX4 = 0x2,
+	IAVF_PHY_TYPE_10GBASE_KR = 0x3,
+	IAVF_PHY_TYPE_40GBASE_KR4 = 0x4,
+	IAVF_PHY_TYPE_XAUI = 0x5,
+	IAVF_PHY_TYPE_XFI = 0x6,
+	IAVF_PHY_TYPE_SFI = 0x7,
+	IAVF_PHY_TYPE_XLAUI = 0x8,
+	IAVF_PHY_TYPE_XLPPI = 0x9,
+	IAVF_PHY_TYPE_40GBASE_CR4_CU = 0xA,
+	IAVF_PHY_TYPE_10GBASE_CR1_CU = 0xB,
+	IAVF_PHY_TYPE_10GBASE_AOC = 0xC,
+	IAVF_PHY_TYPE_40GBASE_AOC = 0xD,
+	IAVF_PHY_TYPE_UNRECOGNIZED = 0xE,
+	IAVF_PHY_TYPE_UNSUPPORTED = 0xF,
+	IAVF_PHY_TYPE_100BASE_TX = 0x11,
+	IAVF_PHY_TYPE_1000BASE_T = 0x12,
+	IAVF_PHY_TYPE_10GBASE_T = 0x13,
+	IAVF_PHY_TYPE_10GBASE_SR = 0x14,
+	IAVF_PHY_TYPE_10GBASE_LR = 0x15,
+	IAVF_PHY_TYPE_10GBASE_SFPP_CU = 0x16,
+	IAVF_PHY_TYPE_10GBASE_CR1 = 0x17,
+	IAVF_PHY_TYPE_40GBASE_CR4 = 0x18,
+	IAVF_PHY_TYPE_40GBASE_SR4 = 0x19,
+	IAVF_PHY_TYPE_40GBASE_LR4 = 0x1A,
+	IAVF_PHY_TYPE_1000BASE_SX = 0x1B,
+	IAVF_PHY_TYPE_1000BASE_LX = 0x1C,
+	IAVF_PHY_TYPE_1000BASE_T_OPTICAL = 0x1D,
+	IAVF_PHY_TYPE_20GBASE_KR2 = 0x1E,
+	IAVF_PHY_TYPE_25GBASE_KR = 0x1F,
+	IAVF_PHY_TYPE_25GBASE_CR = 0x20,
+	IAVF_PHY_TYPE_25GBASE_SR = 0x21,
+	IAVF_PHY_TYPE_25GBASE_LR = 0x22,
+	IAVF_PHY_TYPE_25GBASE_AOC = 0x23,
+	IAVF_PHY_TYPE_25GBASE_ACC = 0x24,
+	IAVF_PHY_TYPE_MAX,
+	IAVF_PHY_TYPE_NOT_SUPPORTED_HIGH_TEMP = 0xFD,
+	IAVF_PHY_TYPE_EMPTY = 0xFE,
+	IAVF_PHY_TYPE_DEFAULT = 0xFF,
+};
+
+#define IAVF_LINK_SPEED_100MB_SHIFT 0x1
+#define IAVF_LINK_SPEED_1000MB_SHIFT 0x2
+#define IAVF_LINK_SPEED_10GB_SHIFT 0x3
+#define IAVF_LINK_SPEED_40GB_SHIFT 0x4
+#define IAVF_LINK_SPEED_20GB_SHIFT 0x5
+#define IAVF_LINK_SPEED_25GB_SHIFT 0x6
+
+enum iavf_aq_link_speed {
+	IAVF_LINK_SPEED_UNKNOWN = 0,
+	IAVF_LINK_SPEED_100MB = (1 << IAVF_LINK_SPEED_100MB_SHIFT),
+	IAVF_LINK_SPEED_1GB = (1 << IAVF_LINK_SPEED_1000MB_SHIFT),
+	IAVF_LINK_SPEED_10GB = (1 << IAVF_LINK_SPEED_10GB_SHIFT),
+	IAVF_LINK_SPEED_40GB = (1 << IAVF_LINK_SPEED_40GB_SHIFT),
+	IAVF_LINK_SPEED_20GB = (1 << IAVF_LINK_SPEED_20GB_SHIFT),
+	IAVF_LINK_SPEED_25GB = (1 << IAVF_LINK_SPEED_25GB_SHIFT),
+};
+
+struct iavf_aqc_module_desc {
 	u8 oui[3];
 	u8 reserved1;
 	u8 part_number[16];
@@ -1911,184 +1911,184 @@ struct avf_aqc_module_desc {
 	u8 reserved2[8];
 };
 
-AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_module_desc);
+IAVF_CHECK_STRUCT_LEN(0x20, iavf_aqc_module_desc);
 
-struct avf_aq_get_phy_abilities_resp {
+struct iavf_aq_get_phy_abilities_resp {
 	__le32 phy_type; /* bitmap using the above enum for offsets */
 	u8 link_speed; /* bitmap using the above enum bit patterns */
 	u8 abilities;
-#define AVF_AQ_PHY_FLAG_PAUSE_TX 0x01
-#define AVF_AQ_PHY_FLAG_PAUSE_RX 0x02
-#define AVF_AQ_PHY_FLAG_LOW_POWER 0x04
-#define AVF_AQ_PHY_LINK_ENABLED 0x08
-#define AVF_AQ_PHY_AN_ENABLED 0x10
-#define AVF_AQ_PHY_FLAG_MODULE_QUAL 0x20
-#define AVF_AQ_PHY_FEC_ABILITY_KR 0x40
-#define AVF_AQ_PHY_FEC_ABILITY_RS 0x80
+#define IAVF_AQ_PHY_FLAG_PAUSE_TX 0x01
+#define IAVF_AQ_PHY_FLAG_PAUSE_RX 0x02
+#define IAVF_AQ_PHY_FLAG_LOW_POWER 0x04
+#define IAVF_AQ_PHY_LINK_ENABLED 0x08
+#define IAVF_AQ_PHY_AN_ENABLED 0x10
+#define IAVF_AQ_PHY_FLAG_MODULE_QUAL 0x20
+#define IAVF_AQ_PHY_FEC_ABILITY_KR 0x40
+#define IAVF_AQ_PHY_FEC_ABILITY_RS 0x80
 	__le16 eee_capability;
-#define AVF_AQ_EEE_100BASE_TX 0x0002
-#define AVF_AQ_EEE_1000BASE_T 0x0004
-#define AVF_AQ_EEE_10GBASE_T 0x0008
-#define AVF_AQ_EEE_1000BASE_KX 0x0010
-#define AVF_AQ_EEE_10GBASE_KX4 0x0020
-#define AVF_AQ_EEE_10GBASE_KR 0x0040
+#define IAVF_AQ_EEE_100BASE_TX 0x0002
+#define IAVF_AQ_EEE_1000BASE_T 0x0004
+#define IAVF_AQ_EEE_10GBASE_T 0x0008
+#define IAVF_AQ_EEE_1000BASE_KX 0x0010
+#define IAVF_AQ_EEE_10GBASE_KX4 0x0020
+#define IAVF_AQ_EEE_10GBASE_KR 0x0040
 	__le32 eeer_val;
 	u8 d3_lpan;
-#define AVF_AQ_SET_PHY_D3_LPAN_ENA 0x01
+#define IAVF_AQ_SET_PHY_D3_LPAN_ENA 0x01
 	u8 phy_type_ext;
-#define AVF_AQ_PHY_TYPE_EXT_25G_KR 0x01
-#define AVF_AQ_PHY_TYPE_EXT_25G_CR 0x02
-#define AVF_AQ_PHY_TYPE_EXT_25G_SR 0x04
-#define AVF_AQ_PHY_TYPE_EXT_25G_LR 0x08
-#define AVF_AQ_PHY_TYPE_EXT_25G_AOC 0x10
-#define AVF_AQ_PHY_TYPE_EXT_25G_ACC 0x20
+#define IAVF_AQ_PHY_TYPE_EXT_25G_KR 0x01
+#define IAVF_AQ_PHY_TYPE_EXT_25G_CR 0x02
+#define IAVF_AQ_PHY_TYPE_EXT_25G_SR 0x04
+#define IAVF_AQ_PHY_TYPE_EXT_25G_LR 0x08
+#define IAVF_AQ_PHY_TYPE_EXT_25G_AOC 0x10
+#define IAVF_AQ_PHY_TYPE_EXT_25G_ACC 0x20
 	u8 fec_cfg_curr_mod_ext_info;
-#define AVF_AQ_ENABLE_FEC_KR 0x01
-#define AVF_AQ_ENABLE_FEC_RS 0x02
-#define AVF_AQ_REQUEST_FEC_KR 0x04
-#define AVF_AQ_REQUEST_FEC_RS 0x08
-#define AVF_AQ_ENABLE_FEC_AUTO 0x10
-#define AVF_AQ_FEC
-#define AVF_AQ_MODULE_TYPE_EXT_MASK 0xE0
-#define AVF_AQ_MODULE_TYPE_EXT_SHIFT 5
+#define IAVF_AQ_ENABLE_FEC_KR 0x01
+#define IAVF_AQ_ENABLE_FEC_RS 0x02
+#define IAVF_AQ_REQUEST_FEC_KR 0x04
+#define IAVF_AQ_REQUEST_FEC_RS 0x08
+#define IAVF_AQ_ENABLE_FEC_AUTO 0x10
+#define IAVF_AQ_FEC
+#define IAVF_AQ_MODULE_TYPE_EXT_MASK 0xE0
+#define IAVF_AQ_MODULE_TYPE_EXT_SHIFT 5
 	u8 ext_comp_code;
 	u8 phy_id[4];
 	u8 module_type[3];
 	u8 qualified_module_count;
-#define AVF_AQ_PHY_MAX_QMS 16
-	struct avf_aqc_module_desc qualified_module[AVF_AQ_PHY_MAX_QMS];
+#define IAVF_AQ_PHY_MAX_QMS 16
+	struct iavf_aqc_module_desc qualified_module[IAVF_AQ_PHY_MAX_QMS];
 };
 
-AVF_CHECK_STRUCT_LEN(0x218, avf_aq_get_phy_abilities_resp);
+IAVF_CHECK_STRUCT_LEN(0x218, iavf_aq_get_phy_abilities_resp);
 
 /* Set PHY Config (direct 0x0601) */
-struct avf_aq_set_phy_config { /* same bits as above in all */
+struct iavf_aq_set_phy_config { /* same bits as above in all */
 	__le32 phy_type;
 	u8 link_speed;
 	u8 abilities;
/* bits 0-2 use the values from get_phy_abilities_resp */
-#define AVF_AQ_PHY_ENABLE_LINK 0x08
-#define AVF_AQ_PHY_ENABLE_AN 0x10
-#define AVF_AQ_PHY_ENABLE_ATOMIC_LINK 0x20
+#define IAVF_AQ_PHY_ENABLE_LINK 0x08
+#define IAVF_AQ_PHY_ENABLE_AN 0x10
+#define IAVF_AQ_PHY_ENABLE_ATOMIC_LINK 0x20
 	__le16 eee_capability;
 	__le32 eeer;
 	u8 low_power_ctrl;
 	u8 phy_type_ext;
 	u8 fec_config;
-#define AVF_AQ_SET_FEC_ABILITY_KR BIT(0)
-#define AVF_AQ_SET_FEC_ABILITY_RS BIT(1)
-#define AVF_AQ_SET_FEC_REQUEST_KR BIT(2)
-#define AVF_AQ_SET_FEC_REQUEST_RS BIT(3)
-#define AVF_AQ_SET_FEC_AUTO BIT(4)
-#define AVF_AQ_PHY_FEC_CONFIG_SHIFT 0x0
-#define AVF_AQ_PHY_FEC_CONFIG_MASK (0x1F << AVF_AQ_PHY_FEC_CONFIG_SHIFT)
+#define IAVF_AQ_SET_FEC_ABILITY_KR BIT(0)
+#define IAVF_AQ_SET_FEC_ABILITY_RS BIT(1)
+#define IAVF_AQ_SET_FEC_REQUEST_KR BIT(2)
+#define IAVF_AQ_SET_FEC_REQUEST_RS BIT(3)
+#define IAVF_AQ_SET_FEC_AUTO BIT(4)
+#define IAVF_AQ_PHY_FEC_CONFIG_SHIFT 0x0
+#define IAVF_AQ_PHY_FEC_CONFIG_MASK (0x1F << IAVF_AQ_PHY_FEC_CONFIG_SHIFT)
 	u8 reserved;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aq_set_phy_config);
+IAVF_CHECK_CMD_LENGTH(iavf_aq_set_phy_config);
 
 /* Set MAC Config command data structure (direct 0x0603) */
-struct avf_aq_set_mac_config {
+struct iavf_aq_set_mac_config {
 	__le16 max_frame_size;
 	u8 params;
-#define AVF_AQ_SET_MAC_CONFIG_CRC_EN 0x04
-#define AVF_AQ_SET_MAC_CONFIG_PACING_MASK 0x78
-#define AVF_AQ_SET_MAC_CONFIG_PACING_SHIFT 3
-#define AVF_AQ_SET_MAC_CONFIG_PACING_NONE 0x0
-#define AVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX 0xF
-#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX 0x9
-#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX 0x8
-#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX 0x7
-#define AVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX 0x6
-#define AVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX 0x5
-#define AVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX 0x4
-#define AVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX 0x3
-#define AVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX 0x2
-#define AVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX 0x1
+#define IAVF_AQ_SET_MAC_CONFIG_CRC_EN 0x04
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_MASK 0x78
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_SHIFT 3
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_NONE 0x0
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_1B_13TX 0xF
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_1DW_9TX 0x9
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_1DW_4TX 0x8
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_3DW_7TX 0x7
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_2DW_3TX 0x6
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_1DW_1TX 0x5
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_3DW_2TX 0x4
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_7DW_3TX 0x3
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_4DW_1TX 0x2
+#define IAVF_AQ_SET_MAC_CONFIG_PACING_9DW_1TX 0x1
 	u8 tx_timer_priority; /* bitmap */
 	__le16 tx_timer_value;
 	__le16 fc_refresh_threshold;
 	u8 reserved[8];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aq_set_mac_config);
+IAVF_CHECK_CMD_LENGTH(iavf_aq_set_mac_config);
 
 /* Restart Auto-Negotiation (direct 0x605) */
-struct avf_aqc_set_link_restart_an {
+struct iavf_aqc_set_link_restart_an {
 	u8 command;
-#define AVF_AQ_PHY_RESTART_AN 0x02
-#define AVF_AQ_PHY_LINK_ENABLE 0x04
+#define IAVF_AQ_PHY_RESTART_AN 0x02
+#define IAVF_AQ_PHY_LINK_ENABLE 0x04
 	u8 reserved[15];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_set_link_restart_an);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_link_restart_an);
 
 /* Get Link Status cmd & response data structure (direct 0x0607) */
-struct avf_aqc_get_link_status {
+struct iavf_aqc_get_link_status {
 	__le16 command_flags; /* only field set on command */
-#define AVF_AQ_LSE_MASK 0x3
-#define AVF_AQ_LSE_NOP 0x0
-#define AVF_AQ_LSE_DISABLE 0x2
-#define AVF_AQ_LSE_ENABLE 0x3
+#define IAVF_AQ_LSE_MASK 0x3
+#define IAVF_AQ_LSE_NOP 0x0
+#define IAVF_AQ_LSE_DISABLE 0x2
+#define IAVF_AQ_LSE_ENABLE 0x3
 /* only response uses this flag */
-#define AVF_AQ_LSE_IS_ENABLED 0x1
-	u8 phy_type; /* avf_aq_phy_type */
-	u8 link_speed; /* avf_aq_link_speed */
+#define IAVF_AQ_LSE_IS_ENABLED 0x1
+	u8 phy_type; /* iavf_aq_phy_type */
+	u8 link_speed; /* iavf_aq_link_speed */
 	u8 link_info;
-#define AVF_AQ_LINK_UP 0x01 /* obsolete */
-#define AVF_AQ_LINK_UP_FUNCTION 0x01
-#define AVF_AQ_LINK_FAULT 0x02
-#define AVF_AQ_LINK_FAULT_TX 0x04
-#define AVF_AQ_LINK_FAULT_RX 0x08
-#define AVF_AQ_LINK_FAULT_REMOTE 0x10
-#define AVF_AQ_LINK_UP_PORT 0x20
-#define AVF_AQ_MEDIA_AVAILABLE 0x40
-#define AVF_AQ_SIGNAL_DETECT 0x80
+#define IAVF_AQ_LINK_UP 0x01 /* obsolete */
+#define IAVF_AQ_LINK_UP_FUNCTION 0x01
+#define IAVF_AQ_LINK_FAULT 0x02
+#define IAVF_AQ_LINK_FAULT_TX 0x04
+#define IAVF_AQ_LINK_FAULT_RX 0x08
+#define IAVF_AQ_LINK_FAULT_REMOTE 0x10
+#define IAVF_AQ_LINK_UP_PORT 0x20
+#define IAVF_AQ_MEDIA_AVAILABLE 0x40
+#define IAVF_AQ_SIGNAL_DETECT 0x80
 	u8 an_info;
-#define AVF_AQ_AN_COMPLETED 0x01
-#define AVF_AQ_LP_AN_ABILITY 0x02
-#define AVF_AQ_PD_FAULT 0x04
-#define AVF_AQ_FEC_EN 0x08
-#define AVF_AQ_PHY_LOW_POWER 0x10
-#define AVF_AQ_LINK_PAUSE_TX 0x20
-#define AVF_AQ_LINK_PAUSE_RX 0x40
-#define AVF_AQ_QUALIFIED_MODULE 0x80
+#define IAVF_AQ_AN_COMPLETED 0x01
+#define IAVF_AQ_LP_AN_ABILITY 0x02
+#define IAVF_AQ_PD_FAULT 0x04
+#define IAVF_AQ_FEC_EN 0x08
+#define IAVF_AQ_PHY_LOW_POWER 0x10
+#define IAVF_AQ_LINK_PAUSE_TX 0x20
+#define IAVF_AQ_LINK_PAUSE_RX 0x40
+#define IAVF_AQ_QUALIFIED_MODULE 0x80
 	u8 ext_info;
-#define AVF_AQ_LINK_PHY_TEMP_ALARM 0x01
-#define AVF_AQ_LINK_XCESSIVE_ERRORS 0x02
-#define AVF_AQ_LINK_TX_SHIFT 0x02
-#define AVF_AQ_LINK_TX_MASK (0x03 << AVF_AQ_LINK_TX_SHIFT)
-#define AVF_AQ_LINK_TX_ACTIVE 0x00
-#define AVF_AQ_LINK_TX_DRAINED 0x01
-#define AVF_AQ_LINK_TX_FLUSHED 0x03
-#define AVF_AQ_LINK_FORCED_40G 0x10
+#define IAVF_AQ_LINK_PHY_TEMP_ALARM 0x01
+#define IAVF_AQ_LINK_XCESSIVE_ERRORS 0x02
+#define IAVF_AQ_LINK_TX_SHIFT 0x02
+#define IAVF_AQ_LINK_TX_MASK (0x03 << IAVF_AQ_LINK_TX_SHIFT)
+#define IAVF_AQ_LINK_TX_ACTIVE 0x00
+#define IAVF_AQ_LINK_TX_DRAINED 0x01
+#define IAVF_AQ_LINK_TX_FLUSHED 0x03
+#define IAVF_AQ_LINK_FORCED_40G 0x10
 /* 25G Error Codes */
-#define AVF_AQ_25G_NO_ERR 0X00
-#define AVF_AQ_25G_NOT_PRESENT 0X01
-#define AVF_AQ_25G_NVM_CRC_ERR 0X02
-#define AVF_AQ_25G_SBUS_UCODE_ERR 0X03
-#define AVF_AQ_25G_SERDES_UCODE_ERR 0X04
-#define AVF_AQ_25G_NIMB_UCODE_ERR 0X05
-	u8 loopback; /* use defines from avf_aqc_set_lb_mode */
+#define IAVF_AQ_25G_NO_ERR 0X00
+#define IAVF_AQ_25G_NOT_PRESENT 0X01
+#define IAVF_AQ_25G_NVM_CRC_ERR 0X02
+#define IAVF_AQ_25G_SBUS_UCODE_ERR 0X03
+#define IAVF_AQ_25G_SERDES_UCODE_ERR 0X04
+#define IAVF_AQ_25G_NIMB_UCODE_ERR 0X05
+	u8 loopback; /* use defines from iavf_aqc_set_lb_mode */
 /* Since firmware API 1.7 loopback field keeps power class info as well */
-#define AVF_AQ_LOOPBACK_MASK 0x07
-#define AVF_AQ_PWR_CLASS_SHIFT_LB 6
-#define AVF_AQ_PWR_CLASS_MASK_LB (0x03 << AVF_AQ_PWR_CLASS_SHIFT_LB)
+#define IAVF_AQ_LOOPBACK_MASK 0x07
+#define IAVF_AQ_PWR_CLASS_SHIFT_LB 6
+#define IAVF_AQ_PWR_CLASS_MASK_LB (0x03 << IAVF_AQ_PWR_CLASS_SHIFT_LB)
 	__le16 max_frame_size;
 	u8 config;
-#define AVF_AQ_CONFIG_FEC_KR_ENA 0x01
-#define AVF_AQ_CONFIG_FEC_RS_ENA 0x02
-#define AVF_AQ_CONFIG_CRC_ENA 0x04
-#define AVF_AQ_CONFIG_PACING_MASK 0x78
+#define IAVF_AQ_CONFIG_FEC_KR_ENA 0x01
+#define IAVF_AQ_CONFIG_FEC_RS_ENA 0x02
+#define IAVF_AQ_CONFIG_CRC_ENA 0x04
+#define IAVF_AQ_CONFIG_PACING_MASK 0x78
 	union {
 		struct {
 			u8 power_desc;
-#define AVF_AQ_LINK_POWER_CLASS_1 0x00
-#define AVF_AQ_LINK_POWER_CLASS_2 0x01
-#define AVF_AQ_LINK_POWER_CLASS_3 0x02
-#define AVF_AQ_LINK_POWER_CLASS_4 0x03
-#define AVF_AQ_PWR_CLASS_MASK 0x03
+#define IAVF_AQ_LINK_POWER_CLASS_1 0x00
+#define IAVF_AQ_LINK_POWER_CLASS_2 0x01
+#define IAVF_AQ_LINK_POWER_CLASS_3 0x02
+#define IAVF_AQ_LINK_POWER_CLASS_4 0x03
+#define IAVF_AQ_PWR_CLASS_MASK 0x03
 			u8 reserved[4];
 		};
 		struct {
@@ -2098,93 +2098,93 @@ struct avf_aqc_get_link_status {
 	};
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_get_link_status);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_link_status);
 
 /* Set event mask command (direct 0x613) */
-struct avf_aqc_set_phy_int_mask {
+struct iavf_aqc_set_phy_int_mask {
 	u8 reserved[8];
 	__le16 event_mask;
-#define AVF_AQ_EVENT_LINK_UPDOWN 0x0002
-#define AVF_AQ_EVENT_MEDIA_NA 0x0004
-#define AVF_AQ_EVENT_LINK_FAULT 0x0008
-#define AVF_AQ_EVENT_PHY_TEMP_ALARM 0x0010
-#define AVF_AQ_EVENT_EXCESSIVE_ERRORS 0x0020
-#define AVF_AQ_EVENT_SIGNAL_DETECT 0x0040
-#define AVF_AQ_EVENT_AN_COMPLETED 0x0080
-#define AVF_AQ_EVENT_MODULE_QUAL_FAIL 0x0100
-#define AVF_AQ_EVENT_PORT_TX_SUSPENDED 0x0200
+#define IAVF_AQ_EVENT_LINK_UPDOWN 0x0002
+#define IAVF_AQ_EVENT_MEDIA_NA 0x0004
+#define IAVF_AQ_EVENT_LINK_FAULT 0x0008
+#define IAVF_AQ_EVENT_PHY_TEMP_ALARM 0x0010
+#define IAVF_AQ_EVENT_EXCESSIVE_ERRORS 0x0020
+#define IAVF_AQ_EVENT_SIGNAL_DETECT 0x0040
+#define IAVF_AQ_EVENT_AN_COMPLETED 0x0080
+#define IAVF_AQ_EVENT_MODULE_QUAL_FAIL 0x0100
+#define IAVF_AQ_EVENT_PORT_TX_SUSPENDED 0x0200
 	u8 reserved1[6];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_int_mask);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_phy_int_mask);
 
 /* Get Local AN advt register (direct 0x0614)
  * Set Local AN advt register (direct 0x0615)
  * Get Link Partner AN advt register (direct 0x0616)
  */
-struct avf_aqc_an_advt_reg {
+struct iavf_aqc_an_advt_reg {
 	__le32 local_an_reg0;
 	__le16 local_an_reg1;
 	u8 reserved[10];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_an_advt_reg);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_an_advt_reg);
 
 /* Set Loopback mode (0x0618) */
-struct avf_aqc_set_lb_mode {
+struct iavf_aqc_set_lb_mode {
 	u8 lb_level;
-#define AVF_AQ_LB_NONE 0
-#define AVF_AQ_LB_MAC 1
-#define AVF_AQ_LB_SERDES 2
-#define AVF_AQ_LB_PHY_INT 3
-#define AVF_AQ_LB_PHY_EXT 4
-#define AVF_AQ_LB_CPVL_PCS 5
-#define AVF_AQ_LB_CPVL_EXT 6
-#define AVF_AQ_LB_PHY_LOCAL 0x01
-#define AVF_AQ_LB_PHY_REMOTE 0x02
-#define AVF_AQ_LB_MAC_LOCAL 0x04
+#define IAVF_AQ_LB_NONE 0
+#define IAVF_AQ_LB_MAC 1
+#define IAVF_AQ_LB_SERDES 2
+#define IAVF_AQ_LB_PHY_INT 3
+#define IAVF_AQ_LB_PHY_EXT 4
+#define IAVF_AQ_LB_CPVL_PCS 5
+#define IAVF_AQ_LB_CPVL_EXT 6
+#define IAVF_AQ_LB_PHY_LOCAL 0x01
+#define IAVF_AQ_LB_PHY_REMOTE 0x02
+#define IAVF_AQ_LB_MAC_LOCAL 0x04
 	u8 lb_type;
-#define AVF_AQ_LB_LOCAL 0
-#define AVF_AQ_LB_FAR 0x01
+#define IAVF_AQ_LB_LOCAL 0
+#define IAVF_AQ_LB_FAR 0x01
 	u8 speed;
-#define AVF_AQ_LB_SPEED_NONE 0
-#define AVF_AQ_LB_SPEED_1G 1
-#define AVF_AQ_LB_SPEED_10G 2
-#define AVF_AQ_LB_SPEED_40G 3
-#define AVF_AQ_LB_SPEED_20G 4
+#define IAVF_AQ_LB_SPEED_NONE 0
+#define IAVF_AQ_LB_SPEED_1G 1
+#define IAVF_AQ_LB_SPEED_10G 2
+#define IAVF_AQ_LB_SPEED_40G 3
+#define IAVF_AQ_LB_SPEED_20G 4
 	u8 force_speed;
 	u8 reserved[12];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_set_lb_mode);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_lb_mode);
 
 /* Set PHY Debug command (0x0622) */
-struct avf_aqc_set_phy_debug {
+struct iavf_aqc_set_phy_debug {
 	u8 command_flags;
-#define AVF_AQ_PHY_DEBUG_RESET_INTERNAL 0x02
-#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT 2
-#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK (0x03 << \
-	AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
-#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE 0x00
-#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD 0x01
-#define AVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT 0x02
+#define IAVF_AQ_PHY_DEBUG_RESET_INTERNAL 0x02
+#define IAVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT 2
+#define IAVF_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK (0x03 << \
+	IAVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT)
+#define IAVF_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE 0x00
+#define IAVF_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD 0x01
+#define IAVF_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT 0x02
 /* Disable link manageability on a single port */
-#define AVF_AQ_PHY_DEBUG_DISABLE_LINK_FW 0x10
+#define IAVF_AQ_PHY_DEBUG_DISABLE_LINK_FW 0x10
 /* Disable link manageability on all ports needs both bits 4 and 5 */
-#define AVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW 0x20
+#define IAVF_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW 0x20
 	u8 reserved[15];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_set_phy_debug);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_phy_debug);
 
-enum avf_aq_phy_reg_type {
-	AVF_AQC_PHY_REG_INTERNAL = 0x1,
-	AVF_AQC_PHY_REG_EXERNAL_BASET = 0x2,
-	AVF_AQC_PHY_REG_EXERNAL_MODULE = 0x3
+enum iavf_aq_phy_reg_type {
+	IAVF_AQC_PHY_REG_INTERNAL = 0x1,
+	IAVF_AQC_PHY_REG_EXERNAL_BASET = 0x2,
+	IAVF_AQC_PHY_REG_EXERNAL_MODULE = 0x3
 };
 
 /* Run PHY Activity (0x0626) */
-struct avf_aqc_run_phy_activity {
+struct iavf_aqc_run_phy_activity {
 	__le16 activity_id;
 	u8 flags;
 	u8 reserved1;
@@ -2193,15 +2193,15 @@ struct avf_aqc_run_phy_activity {
 	u8 reserved2[4];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_run_phy_activity);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_run_phy_activity);
 
 /* Set PHY Register command (0x0628) */
 /* Get PHY Register command (0x0629) */
-struct avf_aqc_phy_register_access {
+struct iavf_aqc_phy_register_access {
 	u8 phy_interface;
-#define AVF_AQ_PHY_REG_ACCESS_INTERNAL 0
-#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL 1
-#define AVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE 2
+#define IAVF_AQ_PHY_REG_ACCESS_INTERNAL 0
+#define IAVF_AQ_PHY_REG_ACCESS_EXTERNAL 1
+#define IAVF_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE 2
 	u8 dev_addres;
 	u8 reserved1[2];
 	__le32 reg_address;
@@ -2209,20 +2209,20 @@ struct avf_aqc_phy_register_access {
 	u8 reserved2[4];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_phy_register_access);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_phy_register_access);
 
 /* NVM Read command (indirect 0x0701)
 * NVM Erase commands (direct 0x0702)
 * NVM Update commands (indirect 0x0703)
 */
-struct avf_aqc_nvm_update {
+struct iavf_aqc_nvm_update {
 	u8 command_flags;
-#define AVF_AQ_NVM_LAST_CMD 0x01
-#define AVF_AQ_NVM_FLASH_ONLY 0x80
-#define AVF_AQ_NVM_PRESERVATION_FLAGS_SHIFT 1
-#define AVF_AQ_NVM_PRESERVATION_FLAGS_MASK 0x03
-#define AVF_AQ_NVM_PRESERVATION_FLAGS_SELECTED 0x03
-#define AVF_AQ_NVM_PRESERVATION_FLAGS_ALL 0x01
+#define IAVF_AQ_NVM_LAST_CMD 0x01
+#define IAVF_AQ_NVM_FLASH_ONLY 0x80
+#define IAVF_AQ_NVM_PRESERVATION_FLAGS_SHIFT 1
+#define IAVF_AQ_NVM_PRESERVATION_FLAGS_MASK 0x03
+#define IAVF_AQ_NVM_PRESERVATION_FLAGS_SELECTED 0x03
+#define IAVF_AQ_NVM_PRESERVATION_FLAGS_ALL 0x01
 	u8 module_pointer;
 	__le16 length;
 	__le32 offset;
@@ -2230,14 +2230,14 @@ struct avf_aqc_nvm_update {
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_update);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_nvm_update);
 
 /* NVM Config Read (indirect 0x0704) */
-struct avf_aqc_nvm_config_read {
+struct iavf_aqc_nvm_config_read {
 	__le16 cmd_flags;
-#define AVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1
-#define AVF_AQ_ANVM_READ_SINGLE_FEATURE 0
-#define AVF_AQ_ANVM_READ_MULTIPLE_FEATURES 1
+#define IAVF_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1
+#define IAVF_AQ_ANVM_READ_SINGLE_FEATURE 0
+#define IAVF_AQ_ANVM_READ_MULTIPLE_FEATURES 1
 	__le16 element_count;
 	__le16 element_id; /* Feature/field ID */
 	__le16 element_id_msw; /* MSWord of field ID */
@@ -2245,10 +2245,10 @@ struct avf_aqc_nvm_config_read {
 	__le32 address_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_read);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_nvm_config_read);
 
 /* NVM Config Write (indirect 0x0705) */
-struct avf_aqc_nvm_config_write {
+struct iavf_aqc_nvm_config_write {
 	__le16 cmd_flags;
 	__le16 element_count;
 	u8 reserved[4];
@@ -2256,162 +2256,162 @@ struct avf_aqc_nvm_config_write {
 	__le32 address_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_nvm_config_write);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_nvm_config_write);
 
 /* Used for 0x0704 as well as for 0x0705 commands */
-#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT 1
-#define AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
-	(1 << AVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
-#define AVF_AQ_ANVM_FEATURE 0
-#define AVF_AQ_ANVM_IMMEDIATE_FIELD (1 << FEATURE_OR_IMMEDIATE_SHIFT)
-struct avf_aqc_nvm_config_data_feature {
+#define IAVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT 1
+#define IAVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+	(1 << IAVF_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define IAVF_AQ_ANVM_FEATURE 0
+#define IAVF_AQ_ANVM_IMMEDIATE_FIELD (1 << FEATURE_OR_IMMEDIATE_SHIFT)
+struct iavf_aqc_nvm_config_data_feature {
 	__le16 feature_id;
-#define AVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY 0x01
-#define AVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP 0x08
-#define AVF_AQ_ANVM_FEATURE_OPTION_POR_CSR 0x10
+#define IAVF_AQ_ANVM_FEATURE_OPTION_OEM_ONLY 0x01
+#define IAVF_AQ_ANVM_FEATURE_OPTION_DWORD_MAP 0x08
+#define IAVF_AQ_ANVM_FEATURE_OPTION_POR_CSR 0x10
 	__le16 feature_options;
 	__le16 feature_selection;
 };
 
-AVF_CHECK_STRUCT_LEN(0x6, avf_aqc_nvm_config_data_feature);
+IAVF_CHECK_STRUCT_LEN(0x6, iavf_aqc_nvm_config_data_feature);
 
-struct avf_aqc_nvm_config_data_immediate_field {
+struct iavf_aqc_nvm_config_data_immediate_field {
 	__le32 field_id;
 	__le32 field_value;
 	__le16 field_options;
 	__le16 reserved;
 };
 
-AVF_CHECK_STRUCT_LEN(0xc, avf_aqc_nvm_config_data_immediate_field);
+IAVF_CHECK_STRUCT_LEN(0xc, iavf_aqc_nvm_config_data_immediate_field);
 
 /* OEM Post Update (indirect 0x0720)
 * no command data struct used
 */
-struct avf_aqc_nvm_oem_post_update {
-#define AVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA 0x01
+struct iavf_aqc_nvm_oem_post_update {
+#define IAVF_AQ_NVM_OEM_POST_UPDATE_EXTERNAL_DATA 0x01
 	u8 sel_data;
 	u8 reserved[7];
 };
 
-AVF_CHECK_STRUCT_LEN(0x8, avf_aqc_nvm_oem_post_update);
+IAVF_CHECK_STRUCT_LEN(0x8, iavf_aqc_nvm_oem_post_update);
 
-struct avf_aqc_nvm_oem_post_update_buffer {
+struct iavf_aqc_nvm_oem_post_update_buffer {
 	u8 str_len;
 	u8 dev_addr;
 	__le16 eeprom_addr;
 	u8 data[36];
};
 
-AVF_CHECK_STRUCT_LEN(0x28, avf_aqc_nvm_oem_post_update_buffer);
+IAVF_CHECK_STRUCT_LEN(0x28, iavf_aqc_nvm_oem_post_update_buffer);
 
 /* Thermal Sensor (indirect 0x0721)
 * read or set thermal sensor configs and values
 * takes a sensor and command specific data buffer, not detailed here
*/
-struct avf_aqc_thermal_sensor {
+struct iavf_aqc_thermal_sensor {
 	u8 sensor_action;
-#define AVF_AQ_THERMAL_SENSOR_READ_CONFIG 0
-#define AVF_AQ_THERMAL_SENSOR_SET_CONFIG 1
-#define AVF_AQ_THERMAL_SENSOR_READ_TEMP 2
+#define IAVF_AQ_THERMAL_SENSOR_READ_CONFIG 0
+#define IAVF_AQ_THERMAL_SENSOR_SET_CONFIG 1
+#define IAVF_AQ_THERMAL_SENSOR_READ_TEMP 2
 	u8 reserved[7];
 	__le32 addr_high;
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_thermal_sensor);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_thermal_sensor);
 
 /* Send to PF command (indirect 0x0801) id is only used by PF
 * Send to VF command (indirect 0x0802) id is only used by PF
* Send to Peer PF command (indirect 0x0803)
*/
-struct avf_aqc_pf_vf_message {
+struct iavf_aqc_pf_vf_message {
 	__le32 id;
 	u8 reserved[4];
 	__le32 addr_high;
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_pf_vf_message);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_pf_vf_message);
 
 /* Alternate structure */
 
 /* Direct write (direct 0x0900)
 * Direct read (direct 0x0902)
*/
-struct avf_aqc_alternate_write {
+struct iavf_aqc_alternate_write {
 	__le32 address0;
 	__le32 data0;
 	__le32 address1;
 	__le32 data1;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_alternate_write);
 
 /* Indirect write (indirect 0x0901)
 * Indirect read (indirect 0x0903)
*/
-struct avf_aqc_alternate_ind_write {
+struct iavf_aqc_alternate_ind_write {
 	__le32 address;
 	__le32 length;
 	__le32 addr_high;
 	__le32 addr_low;
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_ind_write);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_alternate_ind_write);
 
 /* Done alternate write (direct 0x0904)
- * uses avf_aq_desc
+ * uses iavf_aq_desc
 */
-struct avf_aqc_alternate_write_done {
+struct iavf_aqc_alternate_write_done {
 	__le16 cmd_flags;
-#define AVF_AQ_ALTERNATE_MODE_BIOS_MASK 1
-#define AVF_AQ_ALTERNATE_MODE_BIOS_LEGACY 0
-#define AVF_AQ_ALTERNATE_MODE_BIOS_UEFI 1
-#define AVF_AQ_ALTERNATE_RESET_NEEDED 2
+#define IAVF_AQ_ALTERNATE_MODE_BIOS_MASK 1
+#define IAVF_AQ_ALTERNATE_MODE_BIOS_LEGACY 0
+#define IAVF_AQ_ALTERNATE_MODE_BIOS_UEFI 1
+#define IAVF_AQ_ALTERNATE_RESET_NEEDED 2
 	u8 reserved[14];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_write_done);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_alternate_write_done);
 
 /* Set OEM mode (direct 0x0905) */
-struct avf_aqc_alternate_set_mode {
+struct iavf_aqc_alternate_set_mode {
 	__le32 mode;
-#define AVF_AQ_ALTERNATE_MODE_NONE 0
-#define AVF_AQ_ALTERNATE_MODE_OEM 1
+#define IAVF_AQ_ALTERNATE_MODE_NONE 0
+#define IAVF_AQ_ALTERNATE_MODE_OEM 1
 	u8 reserved[12];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_alternate_set_mode);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_alternate_set_mode);
 
-/* Clear port Alternate RAM (direct 0x0906) uses avf_aq_desc */
+/* Clear port Alternate RAM (direct 0x0906) uses iavf_aq_desc */
 
 /* async events 0x10xx */
 
 /* Lan Queue Overflow Event (direct, 0x1001) */
-struct avf_aqc_lan_overflow {
+struct iavf_aqc_lan_overflow {
 	__le32 prtdcb_rupto;
 	__le32 otx_ctl;
 	u8 reserved[8];
 };
 
-AVF_CHECK_CMD_LENGTH(avf_aqc_lan_overflow);
+IAVF_CHECK_CMD_LENGTH(iavf_aqc_lan_overflow);
 
 /* Get LLDP MIB (indirect 0x0A00) */
-struct avf_aqc_lldp_get_mib {
+struct iavf_aqc_lldp_get_mib {
 	u8 type;
 	u8 reserved1;
-#define AVF_AQ_LLDP_MIB_TYPE_MASK 0x3
-#define AVF_AQ_LLDP_MIB_LOCAL 0x0
-#define AVF_AQ_LLDP_MIB_REMOTE 0x1
-#define AVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE 0x2
-#define AVF_AQ_LLDP_BRIDGE_TYPE_MASK 0xC
-#define AVF_AQ_LLDP_BRIDGE_TYPE_SHIFT 0x2
-#define AVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE 0x0
-#define AVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR 0x1
-#define AVF_AQ_LLDP_TX_SHIFT 0x4
-#define AVF_AQ_LLDP_TX_MASK (0x03 << AVF_AQ_LLDP_TX_SHIFT)
-/* TX pause flags use AVF_AQ_LINK_TX_* above */
+#define IAVF_AQ_LLDP_MIB_TYPE_MASK 0x3
+#define IAVF_AQ_LLDP_MIB_LOCAL 0x0
+#define IAVF_AQ_LLDP_MIB_REMOTE 0x1
+#define IAVF_AQ_LLDP_MIB_LOCAL_AND_REMOTE 0x2
+#define IAVF_AQ_LLDP_BRIDGE_TYPE_MASK 0xC
+#define IAVF_AQ_LLDP_BRIDGE_TYPE_SHIFT 0x2
+#define IAVF_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE 0x0
+#define
IAVF_AQ_LLDP_BRIDGE_TYPE_NON_TPMR 0x1 +#define IAVF_AQ_LLDP_TX_SHIFT 0x4 +#define IAVF_AQ_LLDP_TX_MASK (0x03 << IAVF_AQ_LLDP_TX_SHIFT) +/* TX pause flags use IAVF_AQ_LINK_TX_* above */ __le16 local_len; __le16 remote_len; u8 reserved2[2]; @@ -2419,26 +2419,26 @@ struct avf_aqc_lldp_get_mib { __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_get_mib); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_get_mib); /* Configure LLDP MIB Change Event (direct 0x0A01) * also used for the event (with type in the command field) */ -struct avf_aqc_lldp_update_mib { +struct iavf_aqc_lldp_update_mib { u8 command; -#define AVF_AQ_LLDP_MIB_UPDATE_ENABLE 0x0 -#define AVF_AQ_LLDP_MIB_UPDATE_DISABLE 0x1 +#define IAVF_AQ_LLDP_MIB_UPDATE_ENABLE 0x0 +#define IAVF_AQ_LLDP_MIB_UPDATE_DISABLE 0x1 u8 reserved[7]; __le32 addr_high; __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_mib); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_update_mib); /* Add LLDP TLV (indirect 0x0A02) * Delete LLDP TLV (indirect 0x0A04) */ -struct avf_aqc_lldp_add_tlv { +struct iavf_aqc_lldp_add_tlv { u8 type; /* only nearest bridge and non-TPMR from 0x0A00 */ u8 reserved1[1]; __le16 len; @@ -2447,10 +2447,10 @@ struct avf_aqc_lldp_add_tlv { __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_add_tlv); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_add_tlv); /* Update LLDP TLV (indirect 0x0A03) */ -struct avf_aqc_lldp_update_tlv { +struct iavf_aqc_lldp_update_tlv { u8 type; /* only nearest bridge and non-TPMR from 0x0A00 */ u8 reserved; __le16 old_len; @@ -2460,65 +2460,65 @@ struct avf_aqc_lldp_update_tlv { __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_update_tlv); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_update_tlv); /* Stop LLDP (direct 0x0A05) */ -struct avf_aqc_lldp_stop { +struct iavf_aqc_lldp_stop { u8 command; -#define AVF_AQ_LLDP_AGENT_STOP 0x0 -#define AVF_AQ_LLDP_AGENT_SHUTDOWN 0x1 +#define IAVF_AQ_LLDP_AGENT_STOP 0x0 +#define IAVF_AQ_LLDP_AGENT_SHUTDOWN 0x1 u8 reserved[15]; }; 
-AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_stop); /* Start LLDP (direct 0x0A06) */ -struct avf_aqc_lldp_start { +struct iavf_aqc_lldp_start { u8 command; -#define AVF_AQ_LLDP_AGENT_START 0x1 +#define IAVF_AQ_LLDP_AGENT_START 0x1 u8 reserved[15]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_start); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_start); /* Set DCB (direct 0x0303) */ -struct avf_aqc_set_dcb_parameters { +struct iavf_aqc_set_dcb_parameters { u8 command; -#define AVF_AQ_DCB_SET_AGENT 0x1 -#define AVF_DCB_VALID 0x1 +#define IAVF_AQ_DCB_SET_AGENT 0x1 +#define IAVF_DCB_VALID 0x1 u8 valid_flags; u8 reserved[14]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_set_dcb_parameters); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_set_dcb_parameters); /* Get CEE DCBX Oper Config (0x0A07) * uses the generic descriptor struct * returns below as indirect response */ -#define AVF_AQC_CEE_APP_FCOE_SHIFT 0x0 -#define AVF_AQC_CEE_APP_FCOE_MASK (0x7 << AVF_AQC_CEE_APP_FCOE_SHIFT) -#define AVF_AQC_CEE_APP_ISCSI_SHIFT 0x3 -#define AVF_AQC_CEE_APP_ISCSI_MASK (0x7 << AVF_AQC_CEE_APP_ISCSI_SHIFT) -#define AVF_AQC_CEE_APP_FIP_SHIFT 0x8 -#define AVF_AQC_CEE_APP_FIP_MASK (0x7 << AVF_AQC_CEE_APP_FIP_SHIFT) - -#define AVF_AQC_CEE_PG_STATUS_SHIFT 0x0 -#define AVF_AQC_CEE_PG_STATUS_MASK (0x7 << AVF_AQC_CEE_PG_STATUS_SHIFT) -#define AVF_AQC_CEE_PFC_STATUS_SHIFT 0x3 -#define AVF_AQC_CEE_PFC_STATUS_MASK (0x7 << AVF_AQC_CEE_PFC_STATUS_SHIFT) -#define AVF_AQC_CEE_APP_STATUS_SHIFT 0x8 -#define AVF_AQC_CEE_APP_STATUS_MASK (0x7 << AVF_AQC_CEE_APP_STATUS_SHIFT) -#define AVF_AQC_CEE_FCOE_STATUS_SHIFT 0x8 -#define AVF_AQC_CEE_FCOE_STATUS_MASK (0x7 << AVF_AQC_CEE_FCOE_STATUS_SHIFT) -#define AVF_AQC_CEE_ISCSI_STATUS_SHIFT 0xB -#define AVF_AQC_CEE_ISCSI_STATUS_MASK (0x7 << AVF_AQC_CEE_ISCSI_STATUS_SHIFT) -#define AVF_AQC_CEE_FIP_STATUS_SHIFT 0x10 -#define AVF_AQC_CEE_FIP_STATUS_MASK (0x7 << AVF_AQC_CEE_FIP_STATUS_SHIFT) - -/* struct avf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with +#define 
IAVF_AQC_CEE_APP_FCOE_SHIFT 0x0 +#define IAVF_AQC_CEE_APP_FCOE_MASK (0x7 << IAVF_AQC_CEE_APP_FCOE_SHIFT) +#define IAVF_AQC_CEE_APP_ISCSI_SHIFT 0x3 +#define IAVF_AQC_CEE_APP_ISCSI_MASK (0x7 << IAVF_AQC_CEE_APP_ISCSI_SHIFT) +#define IAVF_AQC_CEE_APP_FIP_SHIFT 0x8 +#define IAVF_AQC_CEE_APP_FIP_MASK (0x7 << IAVF_AQC_CEE_APP_FIP_SHIFT) + +#define IAVF_AQC_CEE_PG_STATUS_SHIFT 0x0 +#define IAVF_AQC_CEE_PG_STATUS_MASK (0x7 << IAVF_AQC_CEE_PG_STATUS_SHIFT) +#define IAVF_AQC_CEE_PFC_STATUS_SHIFT 0x3 +#define IAVF_AQC_CEE_PFC_STATUS_MASK (0x7 << IAVF_AQC_CEE_PFC_STATUS_SHIFT) +#define IAVF_AQC_CEE_APP_STATUS_SHIFT 0x8 +#define IAVF_AQC_CEE_APP_STATUS_MASK (0x7 << IAVF_AQC_CEE_APP_STATUS_SHIFT) +#define IAVF_AQC_CEE_FCOE_STATUS_SHIFT 0x8 +#define IAVF_AQC_CEE_FCOE_STATUS_MASK (0x7 << IAVF_AQC_CEE_FCOE_STATUS_SHIFT) +#define IAVF_AQC_CEE_ISCSI_STATUS_SHIFT 0xB +#define IAVF_AQC_CEE_ISCSI_STATUS_MASK (0x7 << IAVF_AQC_CEE_ISCSI_STATUS_SHIFT) +#define IAVF_AQC_CEE_FIP_STATUS_SHIFT 0x10 +#define IAVF_AQC_CEE_FIP_STATUS_MASK (0x7 << IAVF_AQC_CEE_FIP_STATUS_SHIFT) + +/* struct iavf_aqc_get_cee_dcb_cfg_v1_resp was originally defined with * word boundary layout issues, which the Linux compilers silently deal * with by adding padding, making the actual struct larger than designed. * However, the FW compiler for the NIC is less lenient and complains @@ -2526,7 +2526,7 @@ AVF_CHECK_CMD_LENGTH(avf_aqc_set_dcb_parameters); * fields reserved3 and reserved4 to directly acknowledge that padding, * and the new length is used in the length check macro. 
*/ -struct avf_aqc_get_cee_dcb_cfg_v1_resp { +struct iavf_aqc_get_cee_dcb_cfg_v1_resp { u8 reserved1; u8 oper_num_tc; u8 oper_prio_tc[4]; @@ -2539,9 +2539,9 @@ struct avf_aqc_get_cee_dcb_cfg_v1_resp { __le16 tlv_status; }; -AVF_CHECK_STRUCT_LEN(0x18, avf_aqc_get_cee_dcb_cfg_v1_resp); +IAVF_CHECK_STRUCT_LEN(0x18, iavf_aqc_get_cee_dcb_cfg_v1_resp); -struct avf_aqc_get_cee_dcb_cfg_resp { +struct iavf_aqc_get_cee_dcb_cfg_resp { u8 oper_num_tc; u8 oper_prio_tc[4]; u8 oper_tc_bw[8]; @@ -2551,12 +2551,12 @@ struct avf_aqc_get_cee_dcb_cfg_resp { u8 reserved[12]; }; -AVF_CHECK_STRUCT_LEN(0x20, avf_aqc_get_cee_dcb_cfg_resp); +IAVF_CHECK_STRUCT_LEN(0x20, iavf_aqc_get_cee_dcb_cfg_resp); /* Set Local LLDP MIB (indirect 0x0A08) * Used to replace the local MIB of a given LLDP agent. e.g. DCBx */ -struct avf_aqc_lldp_set_local_mib { +struct iavf_aqc_lldp_set_local_mib { #define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT 0 #define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK (1 << \ SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT) @@ -2573,65 +2573,65 @@ struct avf_aqc_lldp_set_local_mib { __le32 address_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_set_local_mib); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_set_local_mib); -struct avf_aqc_lldp_set_local_mib_resp { +struct iavf_aqc_lldp_set_local_mib_resp { #define SET_LOCAL_MIB_RESP_EVENT_TRIGGERED_MASK 0x01 u8 status; u8 reserved[15]; }; -AVF_CHECK_STRUCT_LEN(0x10, avf_aqc_lldp_set_local_mib_resp); +IAVF_CHECK_STRUCT_LEN(0x10, iavf_aqc_lldp_set_local_mib_resp); /* Stop/Start LLDP Agent (direct 0x0A09) * Used for stopping/starting specific LLDP agent. e.g. 
DCBx */ -struct avf_aqc_lldp_stop_start_specific_agent { -#define AVF_AQC_START_SPECIFIC_AGENT_SHIFT 0 -#define AVF_AQC_START_SPECIFIC_AGENT_MASK \ - (1 << AVF_AQC_START_SPECIFIC_AGENT_SHIFT) +struct iavf_aqc_lldp_stop_start_specific_agent { +#define IAVF_AQC_START_SPECIFIC_AGENT_SHIFT 0 +#define IAVF_AQC_START_SPECIFIC_AGENT_MASK \ + (1 << IAVF_AQC_START_SPECIFIC_AGENT_SHIFT) u8 command; u8 reserved[15]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_lldp_stop_start_specific_agent); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_lldp_stop_start_specific_agent); /* Add Udp Tunnel command and completion (direct 0x0B00) */ -struct avf_aqc_add_udp_tunnel { +struct iavf_aqc_add_udp_tunnel { __le16 udp_port; u8 reserved0[3]; u8 protocol_type; -#define AVF_AQC_TUNNEL_TYPE_VXLAN 0x00 -#define AVF_AQC_TUNNEL_TYPE_NGE 0x01 -#define AVF_AQC_TUNNEL_TYPE_TEREDO 0x10 -#define AVF_AQC_TUNNEL_TYPE_VXLAN_GPE 0x11 +#define IAVF_AQC_TUNNEL_TYPE_VXLAN 0x00 +#define IAVF_AQC_TUNNEL_TYPE_NGE 0x01 +#define IAVF_AQC_TUNNEL_TYPE_TEREDO 0x10 +#define IAVF_AQC_TUNNEL_TYPE_VXLAN_GPE 0x11 u8 reserved1[10]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_udp_tunnel); -struct avf_aqc_add_udp_tunnel_completion { +struct iavf_aqc_add_udp_tunnel_completion { __le16 udp_port; u8 filter_entry_index; u8 multiple_pfs; -#define AVF_AQC_SINGLE_PF 0x0 -#define AVF_AQC_MULTIPLE_PFS 0x1 +#define IAVF_AQC_SINGLE_PF 0x0 +#define IAVF_AQC_MULTIPLE_PFS 0x1 u8 total_filters; u8 reserved[11]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_add_udp_tunnel_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_add_udp_tunnel_completion); /* remove UDP Tunnel command (0x0B01) */ -struct avf_aqc_remove_udp_tunnel { +struct iavf_aqc_remove_udp_tunnel { u8 reserved[2]; u8 index; /* 0 to 15 */ u8 reserved2[13]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_remove_udp_tunnel); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_remove_udp_tunnel); -struct avf_aqc_del_udp_tunnel_completion { +struct iavf_aqc_del_udp_tunnel_completion { __le16 udp_port; u8 index; /* 0 
to 15 */ u8 multiple_pfs; @@ -2639,95 +2639,95 @@ struct avf_aqc_del_udp_tunnel_completion { u8 reserved1[11]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_del_udp_tunnel_completion); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_del_udp_tunnel_completion); -struct avf_aqc_get_set_rss_key { -#define AVF_AQC_SET_RSS_KEY_VSI_VALID (0x1 << 15) -#define AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT 0 -#define AVF_AQC_SET_RSS_KEY_VSI_ID_MASK (0x3FF << \ - AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) +struct iavf_aqc_get_set_rss_key { +#define IAVF_AQC_SET_RSS_KEY_VSI_VALID (0x1 << 15) +#define IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT 0 +#define IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK (0x3FF << \ + IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) __le16 vsi_id; u8 reserved[6]; __le32 addr_high; __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_key); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_set_rss_key); -struct avf_aqc_get_set_rss_key_data { +struct iavf_aqc_get_set_rss_key_data { u8 standard_rss_key[0x28]; u8 extended_hash_key[0xc]; }; -AVF_CHECK_STRUCT_LEN(0x34, avf_aqc_get_set_rss_key_data); +IAVF_CHECK_STRUCT_LEN(0x34, iavf_aqc_get_set_rss_key_data); -struct avf_aqc_get_set_rss_lut { -#define AVF_AQC_SET_RSS_LUT_VSI_VALID (0x1 << 15) -#define AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT 0 -#define AVF_AQC_SET_RSS_LUT_VSI_ID_MASK (0x3FF << \ - AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) +struct iavf_aqc_get_set_rss_lut { +#define IAVF_AQC_SET_RSS_LUT_VSI_VALID (0x1 << 15) +#define IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT 0 +#define IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK (0x3FF << \ + IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) __le16 vsi_id; -#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT 0 -#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK (0x1 << \ - AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) +#define IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT 0 +#define IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK (0x1 << \ + IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) -#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI 0 -#define AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF 1 +#define IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI 0 +#define 
IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF 1 __le16 flags; u8 reserved[4]; __le32 addr_high; __le32 addr_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_get_set_rss_lut); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_get_set_rss_lut); /* tunnel key structure 0x0B10 */ -struct avf_aqc_tunnel_key_structure { +struct iavf_aqc_tunnel_key_structure { u8 key1_off; u8 key2_off; u8 key1_len; /* 0 to 15 */ u8 key2_len; /* 0 to 15 */ u8 flags; -#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE 0x01 +#define IAVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDE 0x01 /* response flags */ -#define AVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS 0x01 -#define AVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED 0x02 -#define AVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN 0x03 +#define IAVF_AQC_TUNNEL_KEY_STRUCT_SUCCESS 0x01 +#define IAVF_AQC_TUNNEL_KEY_STRUCT_MODIFIED 0x02 +#define IAVF_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN 0x03 u8 network_key_index; -#define AVF_AQC_NETWORK_KEY_INDEX_VXLAN 0x0 -#define AVF_AQC_NETWORK_KEY_INDEX_NGE 0x1 -#define AVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP 0x2 -#define AVF_AQC_NETWORK_KEY_INDEX_GRE 0x3 +#define IAVF_AQC_NETWORK_KEY_INDEX_VXLAN 0x0 +#define IAVF_AQC_NETWORK_KEY_INDEX_NGE 0x1 +#define IAVF_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP 0x2 +#define IAVF_AQC_NETWORK_KEY_INDEX_GRE 0x3 u8 reserved[10]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_tunnel_key_structure); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_tunnel_key_structure); /* OEM mode commands (direct 0xFE0x) */ -struct avf_aqc_oem_param_change { +struct iavf_aqc_oem_param_change { __le32 param_type; -#define AVF_AQ_OEM_PARAM_TYPE_PF_CTL 0 -#define AVF_AQ_OEM_PARAM_TYPE_BW_CTL 1 -#define AVF_AQ_OEM_PARAM_MAC 2 +#define IAVF_AQ_OEM_PARAM_TYPE_PF_CTL 0 +#define IAVF_AQ_OEM_PARAM_TYPE_BW_CTL 1 +#define IAVF_AQ_OEM_PARAM_MAC 2 __le32 param_value1; __le16 param_value2; u8 reserved[6]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_oem_param_change); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_oem_param_change); -struct avf_aqc_oem_state_change { +struct iavf_aqc_oem_state_change { __le32 state; -#define AVF_AQ_OEM_STATE_LINK_DOWN 0x0 
-#define AVF_AQ_OEM_STATE_LINK_UP 0x1 +#define IAVF_AQ_OEM_STATE_LINK_DOWN 0x0 +#define IAVF_AQ_OEM_STATE_LINK_UP 0x1 u8 reserved[12]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_oem_state_change); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_oem_state_change); /* Initialize OCSD (0xFE02, direct) */ -struct avf_aqc_opc_oem_ocsd_initialize { +struct iavf_aqc_opc_oem_ocsd_initialize { u8 type_status; u8 reserved1[3]; __le32 ocsd_memory_block_addr_high; @@ -2735,10 +2735,10 @@ struct avf_aqc_opc_oem_ocsd_initialize { __le32 requested_update_interval; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocsd_initialize); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_opc_oem_ocsd_initialize); /* Initialize OCBB (0xFE03, direct) */ -struct avf_aqc_opc_oem_ocbb_initialize { +struct iavf_aqc_opc_oem_ocbb_initialize { u8 type_status; u8 reserved1[3]; __le32 ocbb_memory_block_addr_high; @@ -2746,7 +2746,7 @@ struct avf_aqc_opc_oem_ocbb_initialize { u8 reserved2[4]; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_opc_oem_ocbb_initialize); /* debug commands */ @@ -2754,71 +2754,71 @@ AVF_CHECK_CMD_LENGTH(avf_aqc_opc_oem_ocbb_initialize); /* set test more (0xFF01, internal) */ -struct avf_acq_set_test_mode { +struct iavf_acq_set_test_mode { u8 mode; -#define AVF_AQ_TEST_PARTIAL 0 -#define AVF_AQ_TEST_FULL 1 -#define AVF_AQ_TEST_NVM 2 +#define IAVF_AQ_TEST_PARTIAL 0 +#define IAVF_AQ_TEST_FULL 1 +#define IAVF_AQ_TEST_NVM 2 u8 reserved[3]; u8 command; -#define AVF_AQ_TEST_OPEN 0 -#define AVF_AQ_TEST_CLOSE 1 -#define AVF_AQ_TEST_INC 2 +#define IAVF_AQ_TEST_OPEN 0 +#define IAVF_AQ_TEST_CLOSE 1 +#define IAVF_AQ_TEST_INC 2 u8 reserved2[3]; __le32 address_high; __le32 address_low; }; -AVF_CHECK_CMD_LENGTH(avf_acq_set_test_mode); +IAVF_CHECK_CMD_LENGTH(iavf_acq_set_test_mode); /* Debug Read Register command (0xFF03) * Debug Write Register command (0xFF04) */ -struct avf_aqc_debug_reg_read_write { +struct iavf_aqc_debug_reg_read_write { __le32 reserved; __le32 address; __le32 value_high; __le32 
value_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_debug_reg_read_write); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_debug_reg_read_write); /* Scatter/gather Reg Read (indirect 0xFF05) * Scatter/gather Reg Write (indirect 0xFF06) */ -/* avf_aq_desc is used for the command */ -struct avf_aqc_debug_reg_sg_element_data { +/* iavf_aq_desc is used for the command */ +struct iavf_aqc_debug_reg_sg_element_data { __le32 address; __le32 value; }; /* Debug Modify register (direct 0xFF07) */ -struct avf_aqc_debug_modify_reg { +struct iavf_aqc_debug_modify_reg { __le32 address; __le32 value; __le32 clear_mask; __le32 set_mask; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_reg); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_debug_modify_reg); /* dump internal data (0xFF08, indirect) */ -#define AVF_AQ_CLUSTER_ID_AUX 0 -#define AVF_AQ_CLUSTER_ID_SWITCH_FLU 1 -#define AVF_AQ_CLUSTER_ID_TXSCHED 2 -#define AVF_AQ_CLUSTER_ID_HMC 3 -#define AVF_AQ_CLUSTER_ID_MAC0 4 -#define AVF_AQ_CLUSTER_ID_MAC1 5 -#define AVF_AQ_CLUSTER_ID_MAC2 6 -#define AVF_AQ_CLUSTER_ID_MAC3 7 -#define AVF_AQ_CLUSTER_ID_DCB 8 -#define AVF_AQ_CLUSTER_ID_EMP_MEM 9 -#define AVF_AQ_CLUSTER_ID_PKT_BUF 10 -#define AVF_AQ_CLUSTER_ID_ALTRAM 11 - -struct avf_aqc_debug_dump_internals { +#define IAVF_AQ_CLUSTER_ID_AUX 0 +#define IAVF_AQ_CLUSTER_ID_SWITCH_FLU 1 +#define IAVF_AQ_CLUSTER_ID_TXSCHED 2 +#define IAVF_AQ_CLUSTER_ID_HMC 3 +#define IAVF_AQ_CLUSTER_ID_MAC0 4 +#define IAVF_AQ_CLUSTER_ID_MAC1 5 +#define IAVF_AQ_CLUSTER_ID_MAC2 6 +#define IAVF_AQ_CLUSTER_ID_MAC3 7 +#define IAVF_AQ_CLUSTER_ID_DCB 8 +#define IAVF_AQ_CLUSTER_ID_EMP_MEM 9 +#define IAVF_AQ_CLUSTER_ID_PKT_BUF 10 +#define IAVF_AQ_CLUSTER_ID_ALTRAM 11 + +struct iavf_aqc_debug_dump_internals { u8 cluster_id; u8 table_id; __le16 data_size; @@ -2827,15 +2827,15 @@ struct avf_aqc_debug_dump_internals { __le32 address_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_debug_dump_internals); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_debug_dump_internals); -struct avf_aqc_debug_modify_internals { +struct 
iavf_aqc_debug_modify_internals { u8 cluster_id; u8 cluster_specific_params[7]; __le32 address_high; __le32 address_low; }; -AVF_CHECK_CMD_LENGTH(avf_aqc_debug_modify_internals); +IAVF_CHECK_CMD_LENGTH(iavf_aqc_debug_modify_internals); -#endif /* _AVF_ADMINQ_CMD_H_ */ +#endif /* _IAVF_ADMINQ_CMD_H_ */ diff --git a/drivers/net/iavf/base/iavf_alloc.h b/drivers/net/iavf/base/iavf_alloc.h index 21e29bd0e..be61b3313 100644 --- a/drivers/net/iavf/base/iavf_alloc.h +++ b/drivers/net/iavf/base/iavf_alloc.h @@ -31,35 +31,35 @@ POSSIBILITY OF SUCH DAMAGE. ***************************************************************************/ -#ifndef _AVF_ALLOC_H_ -#define _AVF_ALLOC_H_ +#ifndef _IAVF_ALLOC_H_ +#define _IAVF_ALLOC_H_ -struct avf_hw; +struct iavf_hw; /* Memory allocation types */ -enum avf_memory_type { - avf_mem_arq_buf = 0, /* ARQ indirect command buffer */ - avf_mem_asq_buf = 1, - avf_mem_atq_buf = 2, /* ATQ indirect command buffer */ - avf_mem_arq_ring = 3, /* ARQ descriptor ring */ - avf_mem_atq_ring = 4, /* ATQ descriptor ring */ - avf_mem_pd = 5, /* Page Descriptor */ - avf_mem_bp = 6, /* Backing Page - 4KB */ - avf_mem_bp_jumbo = 7, /* Backing Page - > 4KB */ - avf_mem_reserved +enum iavf_memory_type { + iavf_mem_arq_buf = 0, /* ARQ indirect command buffer */ + iavf_mem_asq_buf = 1, + iavf_mem_atq_buf = 2, /* ATQ indirect command buffer */ + iavf_mem_arq_ring = 3, /* ARQ descriptor ring */ + iavf_mem_atq_ring = 4, /* ATQ descriptor ring */ + iavf_mem_pd = 5, /* Page Descriptor */ + iavf_mem_bp = 6, /* Backing Page - 4KB */ + iavf_mem_bp_jumbo = 7, /* Backing Page - > 4KB */ + iavf_mem_reserved }; /* prototype for functions used for dynamic memory allocation */ -enum avf_status_code avf_allocate_dma_mem(struct avf_hw *hw, - struct avf_dma_mem *mem, - enum avf_memory_type type, +enum iavf_status_code iavf_allocate_dma_mem(struct iavf_hw *hw, + struct iavf_dma_mem *mem, + enum iavf_memory_type type, u64 size, u32 alignment); -enum avf_status_code 
avf_free_dma_mem(struct avf_hw *hw, - struct avf_dma_mem *mem); -enum avf_status_code avf_allocate_virt_mem(struct avf_hw *hw, - struct avf_virt_mem *mem, +enum iavf_status_code iavf_free_dma_mem(struct iavf_hw *hw, + struct iavf_dma_mem *mem); +enum iavf_status_code iavf_allocate_virt_mem(struct iavf_hw *hw, + struct iavf_virt_mem *mem, u32 size); -enum avf_status_code avf_free_virt_mem(struct avf_hw *hw, - struct avf_virt_mem *mem); +enum iavf_status_code iavf_free_virt_mem(struct iavf_hw *hw, + struct iavf_virt_mem *mem); -#endif /* _AVF_ALLOC_H_ */ +#endif /* _IAVF_ALLOC_H_ */ diff --git a/drivers/net/iavf/base/iavf_common.c b/drivers/net/iavf/base/iavf_common.c index 5b7f09128..be700dde2 100644 --- a/drivers/net/iavf/base/iavf_common.c +++ b/drivers/net/iavf/base/iavf_common.c @@ -38,93 +38,93 @@ POSSIBILITY OF SUCH DAMAGE. /** - * avf_set_mac_type - Sets MAC type + * iavf_set_mac_type - Sets MAC type * @hw: pointer to the HW structure * * This function sets the mac type of the adapter based on the * vendor ID and device ID stored in the hw structure. 
**/ -enum avf_status_code avf_set_mac_type(struct avf_hw *hw) +enum iavf_status_code iavf_set_mac_type(struct iavf_hw *hw) { - enum avf_status_code status = AVF_SUCCESS; + enum iavf_status_code status = IAVF_SUCCESS; - DEBUGFUNC("avf_set_mac_type\n"); + DEBUGFUNC("iavf_set_mac_type\n"); - if (hw->vendor_id == AVF_INTEL_VENDOR_ID) { + if (hw->vendor_id == IAVF_INTEL_VENDOR_ID) { switch (hw->device_id) { /* TODO: remove undefined device ID now, need to think how to * remove them in share code */ - case AVF_DEV_ID_ADAPTIVE_VF: - hw->mac.type = AVF_MAC_VF; + case IAVF_DEV_ID_ADAPTIVE_VF: + hw->mac.type = IAVF_MAC_VF; break; default: - hw->mac.type = AVF_MAC_GENERIC; + hw->mac.type = IAVF_MAC_GENERIC; break; } } else { - status = AVF_ERR_DEVICE_NOT_SUPPORTED; + status = IAVF_ERR_DEVICE_NOT_SUPPORTED; } - DEBUGOUT2("avf_set_mac_type found mac: %d, returns: %d\n", + DEBUGOUT2("iavf_set_mac_type found mac: %d, returns: %d\n", hw->mac.type, status); return status; } /** - * avf_aq_str - convert AQ err code to a string + * iavf_aq_str - convert AQ err code to a string * @hw: pointer to the HW structure * @aq_err: the AQ error code to convert **/ -const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err) +const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err) { switch (aq_err) { - case AVF_AQ_RC_OK: + case IAVF_AQ_RC_OK: return "OK"; - case AVF_AQ_RC_EPERM: - return "AVF_AQ_RC_EPERM"; - case AVF_AQ_RC_ENOENT: - return "AVF_AQ_RC_ENOENT"; - case AVF_AQ_RC_ESRCH: - return "AVF_AQ_RC_ESRCH"; - case AVF_AQ_RC_EINTR: - return "AVF_AQ_RC_EINTR"; - case AVF_AQ_RC_EIO: - return "AVF_AQ_RC_EIO"; - case AVF_AQ_RC_ENXIO: - return "AVF_AQ_RC_ENXIO"; - case AVF_AQ_RC_E2BIG: - return "AVF_AQ_RC_E2BIG"; - case AVF_AQ_RC_EAGAIN: - return "AVF_AQ_RC_EAGAIN"; - case AVF_AQ_RC_ENOMEM: - return "AVF_AQ_RC_ENOMEM"; - case AVF_AQ_RC_EACCES: - return "AVF_AQ_RC_EACCES"; - case AVF_AQ_RC_EFAULT: - return "AVF_AQ_RC_EFAULT"; - case AVF_AQ_RC_EBUSY: - return 
"AVF_AQ_RC_EBUSY"; - case AVF_AQ_RC_EEXIST: - return "AVF_AQ_RC_EEXIST"; - case AVF_AQ_RC_EINVAL: - return "AVF_AQ_RC_EINVAL"; - case AVF_AQ_RC_ENOTTY: - return "AVF_AQ_RC_ENOTTY"; - case AVF_AQ_RC_ENOSPC: - return "AVF_AQ_RC_ENOSPC"; - case AVF_AQ_RC_ENOSYS: - return "AVF_AQ_RC_ENOSYS"; - case AVF_AQ_RC_ERANGE: - return "AVF_AQ_RC_ERANGE"; - case AVF_AQ_RC_EFLUSHED: - return "AVF_AQ_RC_EFLUSHED"; - case AVF_AQ_RC_BAD_ADDR: - return "AVF_AQ_RC_BAD_ADDR"; - case AVF_AQ_RC_EMODE: - return "AVF_AQ_RC_EMODE"; - case AVF_AQ_RC_EFBIG: - return "AVF_AQ_RC_EFBIG"; + case IAVF_AQ_RC_EPERM: + return "IAVF_AQ_RC_EPERM"; + case IAVF_AQ_RC_ENOENT: + return "IAVF_AQ_RC_ENOENT"; + case IAVF_AQ_RC_ESRCH: + return "IAVF_AQ_RC_ESRCH"; + case IAVF_AQ_RC_EINTR: + return "IAVF_AQ_RC_EINTR"; + case IAVF_AQ_RC_EIO: + return "IAVF_AQ_RC_EIO"; + case IAVF_AQ_RC_ENXIO: + return "IAVF_AQ_RC_ENXIO"; + case IAVF_AQ_RC_E2BIG: + return "IAVF_AQ_RC_E2BIG"; + case IAVF_AQ_RC_EAGAIN: + return "IAVF_AQ_RC_EAGAIN"; + case IAVF_AQ_RC_ENOMEM: + return "IAVF_AQ_RC_ENOMEM"; + case IAVF_AQ_RC_EACCES: + return "IAVF_AQ_RC_EACCES"; + case IAVF_AQ_RC_EFAULT: + return "IAVF_AQ_RC_EFAULT"; + case IAVF_AQ_RC_EBUSY: + return "IAVF_AQ_RC_EBUSY"; + case IAVF_AQ_RC_EEXIST: + return "IAVF_AQ_RC_EEXIST"; + case IAVF_AQ_RC_EINVAL: + return "IAVF_AQ_RC_EINVAL"; + case IAVF_AQ_RC_ENOTTY: + return "IAVF_AQ_RC_ENOTTY"; + case IAVF_AQ_RC_ENOSPC: + return "IAVF_AQ_RC_ENOSPC"; + case IAVF_AQ_RC_ENOSYS: + return "IAVF_AQ_RC_ENOSYS"; + case IAVF_AQ_RC_ERANGE: + return "IAVF_AQ_RC_ERANGE"; + case IAVF_AQ_RC_EFLUSHED: + return "IAVF_AQ_RC_EFLUSHED"; + case IAVF_AQ_RC_BAD_ADDR: + return "IAVF_AQ_RC_BAD_ADDR"; + case IAVF_AQ_RC_EMODE: + return "IAVF_AQ_RC_EMODE"; + case IAVF_AQ_RC_EFBIG: + return "IAVF_AQ_RC_EFBIG"; } snprintf(hw->err_str, sizeof(hw->err_str), "%d", aq_err); @@ -132,147 +132,147 @@ const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err) } /** - * avf_stat_str - convert status err code to a 
string + * iavf_stat_str - convert status err code to a string * @hw: pointer to the HW structure * @stat_err: the status error code to convert **/ -const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err) +const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status_code stat_err) { switch (stat_err) { - case AVF_SUCCESS: + case IAVF_SUCCESS: return "OK"; - case AVF_ERR_NVM: - return "AVF_ERR_NVM"; - case AVF_ERR_NVM_CHECKSUM: - return "AVF_ERR_NVM_CHECKSUM"; - case AVF_ERR_PHY: - return "AVF_ERR_PHY"; - case AVF_ERR_CONFIG: - return "AVF_ERR_CONFIG"; - case AVF_ERR_PARAM: - return "AVF_ERR_PARAM"; - case AVF_ERR_MAC_TYPE: - return "AVF_ERR_MAC_TYPE"; - case AVF_ERR_UNKNOWN_PHY: - return "AVF_ERR_UNKNOWN_PHY"; - case AVF_ERR_LINK_SETUP: - return "AVF_ERR_LINK_SETUP"; - case AVF_ERR_ADAPTER_STOPPED: - return "AVF_ERR_ADAPTER_STOPPED"; - case AVF_ERR_INVALID_MAC_ADDR: - return "AVF_ERR_INVALID_MAC_ADDR"; - case AVF_ERR_DEVICE_NOT_SUPPORTED: - return "AVF_ERR_DEVICE_NOT_SUPPORTED"; - case AVF_ERR_MASTER_REQUESTS_PENDING: - return "AVF_ERR_MASTER_REQUESTS_PENDING"; - case AVF_ERR_INVALID_LINK_SETTINGS: - return "AVF_ERR_INVALID_LINK_SETTINGS"; - case AVF_ERR_AUTONEG_NOT_COMPLETE: - return "AVF_ERR_AUTONEG_NOT_COMPLETE"; - case AVF_ERR_RESET_FAILED: - return "AVF_ERR_RESET_FAILED"; - case AVF_ERR_SWFW_SYNC: - return "AVF_ERR_SWFW_SYNC"; - case AVF_ERR_NO_AVAILABLE_VSI: - return "AVF_ERR_NO_AVAILABLE_VSI"; - case AVF_ERR_NO_MEMORY: - return "AVF_ERR_NO_MEMORY"; - case AVF_ERR_BAD_PTR: - return "AVF_ERR_BAD_PTR"; - case AVF_ERR_RING_FULL: - return "AVF_ERR_RING_FULL"; - case AVF_ERR_INVALID_PD_ID: - return "AVF_ERR_INVALID_PD_ID"; - case AVF_ERR_INVALID_QP_ID: - return "AVF_ERR_INVALID_QP_ID"; - case AVF_ERR_INVALID_CQ_ID: - return "AVF_ERR_INVALID_CQ_ID"; - case AVF_ERR_INVALID_CEQ_ID: - return "AVF_ERR_INVALID_CEQ_ID"; - case AVF_ERR_INVALID_AEQ_ID: - return "AVF_ERR_INVALID_AEQ_ID"; - case AVF_ERR_INVALID_SIZE: - return "AVF_ERR_INVALID_SIZE"; - 
case AVF_ERR_INVALID_ARP_INDEX: - return "AVF_ERR_INVALID_ARP_INDEX"; - case AVF_ERR_INVALID_FPM_FUNC_ID: - return "AVF_ERR_INVALID_FPM_FUNC_ID"; - case AVF_ERR_QP_INVALID_MSG_SIZE: - return "AVF_ERR_QP_INVALID_MSG_SIZE"; - case AVF_ERR_QP_TOOMANY_WRS_POSTED: - return "AVF_ERR_QP_TOOMANY_WRS_POSTED"; - case AVF_ERR_INVALID_FRAG_COUNT: - return "AVF_ERR_INVALID_FRAG_COUNT"; - case AVF_ERR_QUEUE_EMPTY: - return "AVF_ERR_QUEUE_EMPTY"; - case AVF_ERR_INVALID_ALIGNMENT: - return "AVF_ERR_INVALID_ALIGNMENT"; - case AVF_ERR_FLUSHED_QUEUE: - return "AVF_ERR_FLUSHED_QUEUE"; - case AVF_ERR_INVALID_PUSH_PAGE_INDEX: - return "AVF_ERR_INVALID_PUSH_PAGE_INDEX"; - case AVF_ERR_INVALID_IMM_DATA_SIZE: - return "AVF_ERR_INVALID_IMM_DATA_SIZE"; - case AVF_ERR_TIMEOUT: - return "AVF_ERR_TIMEOUT"; - case AVF_ERR_OPCODE_MISMATCH: - return "AVF_ERR_OPCODE_MISMATCH"; - case AVF_ERR_CQP_COMPL_ERROR: - return "AVF_ERR_CQP_COMPL_ERROR"; - case AVF_ERR_INVALID_VF_ID: - return "AVF_ERR_INVALID_VF_ID"; - case AVF_ERR_INVALID_HMCFN_ID: - return "AVF_ERR_INVALID_HMCFN_ID"; - case AVF_ERR_BACKING_PAGE_ERROR: - return "AVF_ERR_BACKING_PAGE_ERROR"; - case AVF_ERR_NO_PBLCHUNKS_AVAILABLE: - return "AVF_ERR_NO_PBLCHUNKS_AVAILABLE"; - case AVF_ERR_INVALID_PBLE_INDEX: - return "AVF_ERR_INVALID_PBLE_INDEX"; - case AVF_ERR_INVALID_SD_INDEX: - return "AVF_ERR_INVALID_SD_INDEX"; - case AVF_ERR_INVALID_PAGE_DESC_INDEX: - return "AVF_ERR_INVALID_PAGE_DESC_INDEX"; - case AVF_ERR_INVALID_SD_TYPE: - return "AVF_ERR_INVALID_SD_TYPE"; - case AVF_ERR_MEMCPY_FAILED: - return "AVF_ERR_MEMCPY_FAILED"; - case AVF_ERR_INVALID_HMC_OBJ_INDEX: - return "AVF_ERR_INVALID_HMC_OBJ_INDEX"; - case AVF_ERR_INVALID_HMC_OBJ_COUNT: - return "AVF_ERR_INVALID_HMC_OBJ_COUNT"; - case AVF_ERR_INVALID_SRQ_ARM_LIMIT: - return "AVF_ERR_INVALID_SRQ_ARM_LIMIT"; - case AVF_ERR_SRQ_ENABLED: - return "AVF_ERR_SRQ_ENABLED"; - case AVF_ERR_ADMIN_QUEUE_ERROR: - return "AVF_ERR_ADMIN_QUEUE_ERROR"; - case AVF_ERR_ADMIN_QUEUE_TIMEOUT: - return 
"AVF_ERR_ADMIN_QUEUE_TIMEOUT"; - case AVF_ERR_BUF_TOO_SHORT: - return "AVF_ERR_BUF_TOO_SHORT"; - case AVF_ERR_ADMIN_QUEUE_FULL: - return "AVF_ERR_ADMIN_QUEUE_FULL"; - case AVF_ERR_ADMIN_QUEUE_NO_WORK: - return "AVF_ERR_ADMIN_QUEUE_NO_WORK"; - case AVF_ERR_BAD_IWARP_CQE: - return "AVF_ERR_BAD_IWARP_CQE"; - case AVF_ERR_NVM_BLANK_MODE: - return "AVF_ERR_NVM_BLANK_MODE"; - case AVF_ERR_NOT_IMPLEMENTED: - return "AVF_ERR_NOT_IMPLEMENTED"; - case AVF_ERR_PE_DOORBELL_NOT_ENABLED: - return "AVF_ERR_PE_DOORBELL_NOT_ENABLED"; - case AVF_ERR_DIAG_TEST_FAILED: - return "AVF_ERR_DIAG_TEST_FAILED"; - case AVF_ERR_NOT_READY: - return "AVF_ERR_NOT_READY"; - case AVF_NOT_SUPPORTED: - return "AVF_NOT_SUPPORTED"; - case AVF_ERR_FIRMWARE_API_VERSION: - return "AVF_ERR_FIRMWARE_API_VERSION"; - case AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR: - return "AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR"; + case IAVF_ERR_NVM: + return "IAVF_ERR_NVM"; + case IAVF_ERR_NVM_CHECKSUM: + return "IAVF_ERR_NVM_CHECKSUM"; + case IAVF_ERR_PHY: + return "IAVF_ERR_PHY"; + case IAVF_ERR_CONFIG: + return "IAVF_ERR_CONFIG"; + case IAVF_ERR_PARAM: + return "IAVF_ERR_PARAM"; + case IAVF_ERR_MAC_TYPE: + return "IAVF_ERR_MAC_TYPE"; + case IAVF_ERR_UNKNOWN_PHY: + return "IAVF_ERR_UNKNOWN_PHY"; + case IAVF_ERR_LINK_SETUP: + return "IAVF_ERR_LINK_SETUP"; + case IAVF_ERR_ADAPTER_STOPPED: + return "IAVF_ERR_ADAPTER_STOPPED"; + case IAVF_ERR_INVALID_MAC_ADDR: + return "IAVF_ERR_INVALID_MAC_ADDR"; + case IAVF_ERR_DEVICE_NOT_SUPPORTED: + return "IAVF_ERR_DEVICE_NOT_SUPPORTED"; + case IAVF_ERR_MASTER_REQUESTS_PENDING: + return "IAVF_ERR_MASTER_REQUESTS_PENDING"; + case IAVF_ERR_INVALID_LINK_SETTINGS: + return "IAVF_ERR_INVALID_LINK_SETTINGS"; + case IAVF_ERR_AUTONEG_NOT_COMPLETE: + return "IAVF_ERR_AUTONEG_NOT_COMPLETE"; + case IAVF_ERR_RESET_FAILED: + return "IAVF_ERR_RESET_FAILED"; + case IAVF_ERR_SWFW_SYNC: + return "IAVF_ERR_SWFW_SYNC"; + case IAVF_ERR_NO_AVAILABLE_VSI: + return "IAVF_ERR_NO_AVAILABLE_VSI"; + case IAVF_ERR_NO_MEMORY: 
+ return "IAVF_ERR_NO_MEMORY"; + case IAVF_ERR_BAD_PTR: + return "IAVF_ERR_BAD_PTR"; + case IAVF_ERR_RING_FULL: + return "IAVF_ERR_RING_FULL"; + case IAVF_ERR_INVALID_PD_ID: + return "IAVF_ERR_INVALID_PD_ID"; + case IAVF_ERR_INVALID_QP_ID: + return "IAVF_ERR_INVALID_QP_ID"; + case IAVF_ERR_INVALID_CQ_ID: + return "IAVF_ERR_INVALID_CQ_ID"; + case IAVF_ERR_INVALID_CEQ_ID: + return "IAVF_ERR_INVALID_CEQ_ID"; + case IAVF_ERR_INVALID_AEQ_ID: + return "IAVF_ERR_INVALID_AEQ_ID"; + case IAVF_ERR_INVALID_SIZE: + return "IAVF_ERR_INVALID_SIZE"; + case IAVF_ERR_INVALID_ARP_INDEX: + return "IAVF_ERR_INVALID_ARP_INDEX"; + case IAVF_ERR_INVALID_FPM_FUNC_ID: + return "IAVF_ERR_INVALID_FPM_FUNC_ID"; + case IAVF_ERR_QP_INVALID_MSG_SIZE: + return "IAVF_ERR_QP_INVALID_MSG_SIZE"; + case IAVF_ERR_QP_TOOMANY_WRS_POSTED: + return "IAVF_ERR_QP_TOOMANY_WRS_POSTED"; + case IAVF_ERR_INVALID_FRAG_COUNT: + return "IAVF_ERR_INVALID_FRAG_COUNT"; + case IAVF_ERR_QUEUE_EMPTY: + return "IAVF_ERR_QUEUE_EMPTY"; + case IAVF_ERR_INVALID_ALIGNMENT: + return "IAVF_ERR_INVALID_ALIGNMENT"; + case IAVF_ERR_FLUSHED_QUEUE: + return "IAVF_ERR_FLUSHED_QUEUE"; + case IAVF_ERR_INVALID_PUSH_PAGE_INDEX: + return "IAVF_ERR_INVALID_PUSH_PAGE_INDEX"; + case IAVF_ERR_INVALID_IMM_DATA_SIZE: + return "IAVF_ERR_INVALID_IMM_DATA_SIZE"; + case IAVF_ERR_TIMEOUT: + return "IAVF_ERR_TIMEOUT"; + case IAVF_ERR_OPCODE_MISMATCH: + return "IAVF_ERR_OPCODE_MISMATCH"; + case IAVF_ERR_CQP_COMPL_ERROR: + return "IAVF_ERR_CQP_COMPL_ERROR"; + case IAVF_ERR_INVALID_VF_ID: + return "IAVF_ERR_INVALID_VF_ID"; + case IAVF_ERR_INVALID_HMCFN_ID: + return "IAVF_ERR_INVALID_HMCFN_ID"; + case IAVF_ERR_BACKING_PAGE_ERROR: + return "IAVF_ERR_BACKING_PAGE_ERROR"; + case IAVF_ERR_NO_PBLCHUNKS_AVAILABLE: + return "IAVF_ERR_NO_PBLCHUNKS_AVAILABLE"; + case IAVF_ERR_INVALID_PBLE_INDEX: + return "IAVF_ERR_INVALID_PBLE_INDEX"; + case IAVF_ERR_INVALID_SD_INDEX: + return "IAVF_ERR_INVALID_SD_INDEX"; + case IAVF_ERR_INVALID_PAGE_DESC_INDEX: + return 
"IAVF_ERR_INVALID_PAGE_DESC_INDEX"; + case IAVF_ERR_INVALID_SD_TYPE: + return "IAVF_ERR_INVALID_SD_TYPE"; + case IAVF_ERR_MEMCPY_FAILED: + return "IAVF_ERR_MEMCPY_FAILED"; + case IAVF_ERR_INVALID_HMC_OBJ_INDEX: + return "IAVF_ERR_INVALID_HMC_OBJ_INDEX"; + case IAVF_ERR_INVALID_HMC_OBJ_COUNT: + return "IAVF_ERR_INVALID_HMC_OBJ_COUNT"; + case IAVF_ERR_INVALID_SRQ_ARM_LIMIT: + return "IAVF_ERR_INVALID_SRQ_ARM_LIMIT"; + case IAVF_ERR_SRQ_ENABLED: + return "IAVF_ERR_SRQ_ENABLED"; + case IAVF_ERR_ADMIN_QUEUE_ERROR: + return "IAVF_ERR_ADMIN_QUEUE_ERROR"; + case IAVF_ERR_ADMIN_QUEUE_TIMEOUT: + return "IAVF_ERR_ADMIN_QUEUE_TIMEOUT"; + case IAVF_ERR_BUF_TOO_SHORT: + return "IAVF_ERR_BUF_TOO_SHORT"; + case IAVF_ERR_ADMIN_QUEUE_FULL: + return "IAVF_ERR_ADMIN_QUEUE_FULL"; + case IAVF_ERR_ADMIN_QUEUE_NO_WORK: + return "IAVF_ERR_ADMIN_QUEUE_NO_WORK"; + case IAVF_ERR_BAD_IWARP_CQE: + return "IAVF_ERR_BAD_IWARP_CQE"; + case IAVF_ERR_NVM_BLANK_MODE: + return "IAVF_ERR_NVM_BLANK_MODE"; + case IAVF_ERR_NOT_IMPLEMENTED: + return "IAVF_ERR_NOT_IMPLEMENTED"; + case IAVF_ERR_PE_DOORBELL_NOT_ENABLED: + return "IAVF_ERR_PE_DOORBELL_NOT_ENABLED"; + case IAVF_ERR_DIAG_TEST_FAILED: + return "IAVF_ERR_DIAG_TEST_FAILED"; + case IAVF_ERR_NOT_READY: + return "IAVF_ERR_NOT_READY"; + case IAVF_NOT_SUPPORTED: + return "IAVF_NOT_SUPPORTED"; + case IAVF_ERR_FIRMWARE_API_VERSION: + return "IAVF_ERR_FIRMWARE_API_VERSION"; + case IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR: + return "IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR"; } snprintf(hw->err_str, sizeof(hw->err_str), "%d", stat_err); @@ -280,7 +280,7 @@ const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err) } /** - * avf_debug_aq + * iavf_debug_aq * @hw: debug mask related to admin queue * @mask: debug mask * @desc: pointer to admin queue descriptor @@ -289,10 +289,10 @@ const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err) * * Dumps debug log about adminq command with descriptor contents. 
 **/
-void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
+void iavf_debug_aq(struct iavf_hw *hw, enum iavf_debug_mask mask, void *desc,
 		  void *buffer, u16 buf_len)
 {
-	struct avf_aq_desc *aq_desc = (struct avf_aq_desc *)desc;
+	struct iavf_aq_desc *aq_desc = (struct iavf_aq_desc *)desc;
 	u8 *buf = (u8 *)buffer;
 	u16 len;
 	u16 i = 0;
@@ -302,29 +302,29 @@ void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
 	len = LE16_TO_CPU(aq_desc->datalen);
 
-	avf_debug(hw, mask,
+	iavf_debug(hw, mask,
 		  "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
 		  LE16_TO_CPU(aq_desc->opcode), LE16_TO_CPU(aq_desc->flags),
 		  LE16_TO_CPU(aq_desc->datalen), LE16_TO_CPU(aq_desc->retval));
-	avf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
+	iavf_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
 		  LE32_TO_CPU(aq_desc->cookie_high),
 		  LE32_TO_CPU(aq_desc->cookie_low));
-	avf_debug(hw, mask, "\tparam (0,1) 0x%08X 0x%08X\n",
+	iavf_debug(hw, mask, "\tparam (0,1) 0x%08X 0x%08X\n",
 		  LE32_TO_CPU(aq_desc->params.internal.param0),
 		  LE32_TO_CPU(aq_desc->params.internal.param1));
-	avf_debug(hw, mask, "\taddr (h,l) 0x%08X 0x%08X\n",
+	iavf_debug(hw, mask, "\taddr (h,l) 0x%08X 0x%08X\n",
 		  LE32_TO_CPU(aq_desc->params.external.addr_high),
 		  LE32_TO_CPU(aq_desc->params.external.addr_low));
 
 	if ((buffer != NULL) && (aq_desc->datalen != 0)) {
-		avf_debug(hw, mask, "AQ CMD Buffer:\n");
+		iavf_debug(hw, mask, "AQ CMD Buffer:\n");
 		if (buf_len < len)
 			len = buf_len;
 		/* write the full 16-byte chunks */
 		for (i = 0; i < (len - 16); i += 16)
-			avf_debug(hw, mask,
+			iavf_debug(hw, mask,
 				  "\t0x%04X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
 				  i, buf[i], buf[i+1], buf[i+2], buf[i+3],
 				  buf[i+4], buf[i+5], buf[i+6], buf[i+7],
@@ -339,7 +339,7 @@ void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
 		memset(d_buf, 0, sizeof(d_buf));
 		for (j = 0; i < len; j++, i++)
 			d_buf[j] = buf[i];
-		avf_debug(hw, mask,
+		iavf_debug(hw, mask,
 			  "\t0x%04X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
 			  i_sav, d_buf[0], d_buf[1], d_buf[2], d_buf[3],
 			  d_buf[4], d_buf[5], d_buf[6], d_buf[7],
@@ -350,53 +350,53 @@ void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, void *desc,
 }
 
 /**
- * avf_check_asq_alive
+ * iavf_check_asq_alive
  * @hw: pointer to the hw struct
  *
  * Returns true if Queue is enabled else false.
  **/
-bool avf_check_asq_alive(struct avf_hw *hw)
+bool iavf_check_asq_alive(struct iavf_hw *hw)
 {
 	if (hw->aq.asq.len)
 #ifdef INTEGRATED_VF
-		if (avf_is_vf(hw))
+		if (iavf_is_vf(hw))
 			return !!(rd32(hw, hw->aq.asq.len) &
-				AVF_ATQLEN1_ATQENABLE_MASK);
+				IAVF_ATQLEN1_ATQENABLE_MASK);
 #else
 		return !!(rd32(hw, hw->aq.asq.len) &
-			AVF_ATQLEN1_ATQENABLE_MASK);
+			IAVF_ATQLEN1_ATQENABLE_MASK);
 #endif /* INTEGRATED_VF */
 	return false;
 }
 
 /**
- * avf_aq_queue_shutdown
+ * iavf_aq_queue_shutdown
  * @hw: pointer to the hw struct
  * @unloading: is the driver unloading itself
  *
  * Tell the Firmware that we're shutting down the AdminQ and whether
 * or not the driver is unloading as well.
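The chunked dump in iavf_debug_aq above prints full 16-byte rows, then copies the tail into a zeroed d_buf so the final row prints zero-padded. The same logic can be sketched standalone; this is an illustrative equivalent, not the driver's code (`hex_dump_rows` is a made-up name, it uses printf instead of the masked iavf_debug call, and the loop bound is written `i + 16 <= len` rather than the driver's `i < (len - 16)`, so row counts can differ even though all bytes are covered):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Print `len` bytes as 16-byte rows; the last, partial row is first copied
 * into a zeroed scratch buffer so it prints zero-padded to full width.
 * Returns the number of rows emitted. */
unsigned int hex_dump_rows(const uint8_t *buf, uint16_t len)
{
	unsigned int rows = 0;
	uint16_t i;
	int j;

	/* full 16-byte chunks */
	for (i = 0; i + 16 <= len; i += 16) {
		printf("\t0x%04X ", i);
		for (j = 0; j < 16; j++)
			printf(" %02X", buf[i + j]);
		printf("\n");
		rows++;
	}
	/* zero-padded remainder */
	if (i < len) {
		uint8_t d_buf[16];

		memset(d_buf, 0, sizeof(d_buf));
		memcpy(d_buf, buf + i, len - i);
		printf("\t0x%04X ", i);
		for (j = 0; j < 16; j++)
			printf(" %02X", d_buf[j]);
		printf("\n");
		rows++;
	}
	return rows;
}
```

In the driver the output goes through iavf_debug() so it can be gated by the debug mask; only the identifier spellings change in this patch, not the dump logic.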
**/ -enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, +enum iavf_status_code iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading) { - struct avf_aq_desc desc; - struct avf_aqc_queue_shutdown *cmd = - (struct avf_aqc_queue_shutdown *)&desc.params.raw; - enum avf_status_code status; + struct iavf_aq_desc desc; + struct iavf_aqc_queue_shutdown *cmd = + (struct iavf_aqc_queue_shutdown *)&desc.params.raw; + enum iavf_status_code status; - avf_fill_default_direct_cmd_desc(&desc, - avf_aqc_opc_queue_shutdown); + iavf_fill_default_direct_cmd_desc(&desc, + iavf_aqc_opc_queue_shutdown); if (unloading) - cmd->driver_unloading = CPU_TO_LE32(AVF_AQ_DRIVER_UNLOADING); - status = avf_asq_send_command(hw, &desc, NULL, 0, NULL); + cmd->driver_unloading = CPU_TO_LE32(IAVF_AQ_DRIVER_UNLOADING); + status = iavf_asq_send_command(hw, &desc, NULL, 0, NULL); return status; } /** - * avf_aq_get_set_rss_lut + * iavf_aq_get_set_rss_lut * @hw: pointer to the hardware structure * @vsi_id: vsi fw index * @pf_lut: for PF table set true, for VSI table set false @@ -406,51 +406,51 @@ enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, * * Internal function to get or set RSS look up table **/ -STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw, +STATIC enum iavf_status_code iavf_aq_get_set_rss_lut(struct iavf_hw *hw, u16 vsi_id, bool pf_lut, u8 *lut, u16 lut_size, bool set) { - enum avf_status_code status; - struct avf_aq_desc desc; - struct avf_aqc_get_set_rss_lut *cmd_resp = - (struct avf_aqc_get_set_rss_lut *)&desc.params.raw; + enum iavf_status_code status; + struct iavf_aq_desc desc; + struct iavf_aqc_get_set_rss_lut *cmd_resp = + (struct iavf_aqc_get_set_rss_lut *)&desc.params.raw; if (set) - avf_fill_default_direct_cmd_desc(&desc, - avf_aqc_opc_set_rss_lut); + iavf_fill_default_direct_cmd_desc(&desc, + iavf_aqc_opc_set_rss_lut); else - avf_fill_default_direct_cmd_desc(&desc, - avf_aqc_opc_get_rss_lut); + iavf_fill_default_direct_cmd_desc(&desc, 
+ iavf_aqc_opc_get_rss_lut); /* Indirect command */ - desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF); - desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD); + desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF); + desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD); cmd_resp->vsi_id = CPU_TO_LE16((u16)((vsi_id << - AVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) & - AVF_AQC_SET_RSS_LUT_VSI_ID_MASK)); - cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_LUT_VSI_VALID); + IAVF_AQC_SET_RSS_LUT_VSI_ID_SHIFT) & + IAVF_AQC_SET_RSS_LUT_VSI_ID_MASK)); + cmd_resp->vsi_id |= CPU_TO_LE16((u16)IAVF_AQC_SET_RSS_LUT_VSI_VALID); if (pf_lut) cmd_resp->flags |= CPU_TO_LE16((u16) - ((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF << - AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & - AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); + ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_PF << + IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & + IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); else cmd_resp->flags |= CPU_TO_LE16((u16) - ((AVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI << - AVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & - AVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); + ((IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_VSI << + IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT) & + IAVF_AQC_SET_RSS_LUT_TABLE_TYPE_MASK)); - status = avf_asq_send_command(hw, &desc, lut, lut_size, NULL); + status = iavf_asq_send_command(hw, &desc, lut, lut_size, NULL); return status; } /** - * avf_aq_get_rss_lut + * iavf_aq_get_rss_lut * @hw: pointer to the hardware structure * @vsi_id: vsi fw index * @pf_lut: for PF table set true, for VSI table set false @@ -459,15 +459,15 @@ STATIC enum avf_status_code avf_aq_get_set_rss_lut(struct avf_hw *hw, * * get the RSS lookup table, PF or VSI type **/ -enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id, +enum iavf_status_code iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 vsi_id, bool pf_lut, u8 *lut, u16 lut_size) { - return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, + return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false); } /** - * 
avf_aq_set_rss_lut + * iavf_aq_set_rss_lut * @hw: pointer to the hardware structure * @vsi_id: vsi fw index * @pf_lut: for PF table set true, for VSI table set false @@ -476,14 +476,14 @@ enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 vsi_id, * * set the RSS lookup table, PF or VSI type **/ -enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id, +enum iavf_status_code iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 vsi_id, bool pf_lut, u8 *lut, u16 lut_size) { - return avf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true); + return iavf_aq_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true); } /** - * avf_aq_get_set_rss_key + * iavf_aq_get_set_rss_key * @hw: pointer to the hw struct * @vsi_id: vsi fw index * @key: pointer to key info struct @@ -491,69 +491,69 @@ enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 vsi_id, * * get the RSS key per VSI **/ -STATIC enum avf_status_code avf_aq_get_set_rss_key(struct avf_hw *hw, +STATIC enum iavf_status_code iavf_aq_get_set_rss_key(struct iavf_hw *hw, u16 vsi_id, - struct avf_aqc_get_set_rss_key_data *key, + struct iavf_aqc_get_set_rss_key_data *key, bool set) { - enum avf_status_code status; - struct avf_aq_desc desc; - struct avf_aqc_get_set_rss_key *cmd_resp = - (struct avf_aqc_get_set_rss_key *)&desc.params.raw; - u16 key_size = sizeof(struct avf_aqc_get_set_rss_key_data); + enum iavf_status_code status; + struct iavf_aq_desc desc; + struct iavf_aqc_get_set_rss_key *cmd_resp = + (struct iavf_aqc_get_set_rss_key *)&desc.params.raw; + u16 key_size = sizeof(struct iavf_aqc_get_set_rss_key_data); if (set) - avf_fill_default_direct_cmd_desc(&desc, - avf_aqc_opc_set_rss_key); + iavf_fill_default_direct_cmd_desc(&desc, + iavf_aqc_opc_set_rss_key); else - avf_fill_default_direct_cmd_desc(&desc, - avf_aqc_opc_get_rss_key); + iavf_fill_default_direct_cmd_desc(&desc, + iavf_aqc_opc_get_rss_key); /* Indirect command */ - desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF); - 
desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD); + desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF); + desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD); cmd_resp->vsi_id = CPU_TO_LE16((u16)((vsi_id << - AVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) & - AVF_AQC_SET_RSS_KEY_VSI_ID_MASK)); - cmd_resp->vsi_id |= CPU_TO_LE16((u16)AVF_AQC_SET_RSS_KEY_VSI_VALID); + IAVF_AQC_SET_RSS_KEY_VSI_ID_SHIFT) & + IAVF_AQC_SET_RSS_KEY_VSI_ID_MASK)); + cmd_resp->vsi_id |= CPU_TO_LE16((u16)IAVF_AQC_SET_RSS_KEY_VSI_VALID); - status = avf_asq_send_command(hw, &desc, key, key_size, NULL); + status = iavf_asq_send_command(hw, &desc, key, key_size, NULL); return status; } /** - * avf_aq_get_rss_key + * iavf_aq_get_rss_key * @hw: pointer to the hw struct * @vsi_id: vsi fw index * @key: pointer to key info struct * **/ -enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw, +enum iavf_status_code iavf_aq_get_rss_key(struct iavf_hw *hw, u16 vsi_id, - struct avf_aqc_get_set_rss_key_data *key) + struct iavf_aqc_get_set_rss_key_data *key) { - return avf_aq_get_set_rss_key(hw, vsi_id, key, false); + return iavf_aq_get_set_rss_key(hw, vsi_id, key, false); } /** - * avf_aq_set_rss_key + * iavf_aq_set_rss_key * @hw: pointer to the hw struct * @vsi_id: vsi fw index * @key: pointer to key info struct * * set the RSS key per VSI **/ -enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw, +enum iavf_status_code iavf_aq_set_rss_key(struct iavf_hw *hw, u16 vsi_id, - struct avf_aqc_get_set_rss_key_data *key) + struct iavf_aqc_get_set_rss_key_data *key) { - return avf_aq_get_set_rss_key(hw, vsi_id, key, true); + return iavf_aq_get_set_rss_key(hw, vsi_id, key, true); } -/* The avf_ptype_lookup table is used to convert from the 8-bit ptype in the +/* The iavf_ptype_lookup table is used to convert from the 8-bit ptype in the * hardware to a bit-field that can be used by SW to more easily determine the * packet type. 
* @@ -566,385 +566,385 @@ enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw, * * Typical work flow: * - * IF NOT avf_ptype_lookup[ptype].known + * IF NOT iavf_ptype_lookup[ptype].known * THEN * Packet is unknown - * ELSE IF avf_ptype_lookup[ptype].outer_ip == AVF_RX_PTYPE_OUTER_IP + * ELSE IF iavf_ptype_lookup[ptype].outer_ip == IAVF_RX_PTYPE_OUTER_IP * Use the rest of the fields to look at the tunnels, inner protocols, etc * ELSE - * Use the enum avf_rx_l2_ptype to decode the packet type + * Use the enum iavf_rx_l2_ptype to decode the packet type * ENDIF */ /* macro to make the table lines short */ -#define AVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ +#define IAVF_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ { PTYPE, \ 1, \ - AVF_RX_PTYPE_OUTER_##OUTER_IP, \ - AVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - AVF_RX_PTYPE_##OUTER_FRAG, \ - AVF_RX_PTYPE_TUNNEL_##T, \ - AVF_RX_PTYPE_TUNNEL_END_##TE, \ - AVF_RX_PTYPE_##TEF, \ - AVF_RX_PTYPE_INNER_PROT_##I, \ - AVF_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define AVF_PTT_UNUSED_ENTRY(PTYPE) \ + IAVF_RX_PTYPE_OUTER_##OUTER_IP, \ + IAVF_RX_PTYPE_OUTER_##OUTER_IP_VER, \ + IAVF_RX_PTYPE_##OUTER_FRAG, \ + IAVF_RX_PTYPE_TUNNEL_##T, \ + IAVF_RX_PTYPE_TUNNEL_END_##TE, \ + IAVF_RX_PTYPE_##TEF, \ + IAVF_RX_PTYPE_INNER_PROT_##I, \ + IAVF_RX_PTYPE_PAYLOAD_LAYER_##PL } + +#define IAVF_PTT_UNUSED_ENTRY(PTYPE) \ { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 } /* shorter macros makes the table fit but are terse */ -#define AVF_RX_PTYPE_NOF AVF_RX_PTYPE_NOT_FRAG -#define AVF_RX_PTYPE_FRG AVF_RX_PTYPE_FRAG -#define AVF_RX_PTYPE_INNER_PROT_TS AVF_RX_PTYPE_INNER_PROT_TIMESYNC +#define IAVF_RX_PTYPE_NOF IAVF_RX_PTYPE_NOT_FRAG +#define IAVF_RX_PTYPE_FRG IAVF_RX_PTYPE_FRAG +#define IAVF_RX_PTYPE_INNER_PROT_TS IAVF_RX_PTYPE_INNER_PROT_TIMESYNC /* Lookup table mapping the HW PTYPE to the bit field for decoding */ -struct avf_rx_ptype_decoded avf_ptype_lookup[] = { +struct iavf_rx_ptype_decoded iavf_ptype_lookup[] = 
{ /* L2 Packet types */ - AVF_PTT_UNUSED_ENTRY(0), - AVF_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - AVF_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), - AVF_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - AVF_PTT_UNUSED_ENTRY(4), - AVF_PTT_UNUSED_ENTRY(5), - AVF_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - AVF_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - AVF_PTT_UNUSED_ENTRY(8), - AVF_PTT_UNUSED_ENTRY(9), - AVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - AVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - AVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT_UNUSED_ENTRY(0), + IAVF_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + IAVF_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), + IAVF_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + IAVF_PTT_UNUSED_ENTRY(4), + IAVF_PTT_UNUSED_ENTRY(5), + IAVF_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + IAVF_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + IAVF_PTT_UNUSED_ENTRY(8), + IAVF_PTT_UNUSED_ENTRY(9), + IAVF_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + IAVF_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), + IAVF_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, 
NONE, PAY3), + IAVF_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), /* Non Tunneled IPv4 */ - AVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), - AVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(25), - AVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), - AVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), - AVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), + IAVF_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), + IAVF_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(25), + IAVF_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), + IAVF_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), + IAVF_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), /* IPv4 --> IPv4 */ - AVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - AVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - AVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(32), - AVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - AVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - AVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), + IAVF_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), + IAVF_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), + IAVF_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(32), + IAVF_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), + IAVF_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), + IAVF_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), /* IPv4 --> IPv6 */ - AVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - AVF_PTT(37, 
IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - AVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(39), - AVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - AVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - AVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), + IAVF_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), + IAVF_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), + IAVF_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(39), + IAVF_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), + IAVF_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), + IAVF_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), /* IPv4 --> GRE/NAT */ - AVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), + IAVF_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), /* IPv4 --> GRE/NAT --> IPv4 */ - AVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - AVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - AVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(47), - AVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - AVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - AVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), + IAVF_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), + IAVF_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), + IAVF_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(47), + IAVF_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), + IAVF_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), + IAVF_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), /* IPv4 --> GRE/NAT --> IPv6 */ - AVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - AVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - AVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(54), - AVF_PTT(55, IP, IPV4, NOF, 
IP_GRENAT, IPV6, NOF, TCP, PAY4), - AVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - AVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), + IAVF_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), + IAVF_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), + IAVF_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(54), + IAVF_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), + IAVF_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), + IAVF_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), /* IPv4 --> GRE/NAT --> MAC */ - AVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), + IAVF_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ - AVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - AVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - AVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(62), - AVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - AVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - AVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), + IAVF_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), + IAVF_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), + IAVF_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(62), + IAVF_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), + IAVF_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), + IAVF_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ - AVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - AVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - AVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(69), - AVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), 
- AVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - AVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), + IAVF_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), + IAVF_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), + IAVF_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(69), + IAVF_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), + IAVF_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), + IAVF_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), /* IPv4 --> GRE/NAT --> MAC/VLAN */ - AVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), + IAVF_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ - AVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - AVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - AVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - AVF_PTT_UNUSED_ENTRY(77), - AVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - AVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - AVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), + IAVF_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), + IAVF_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), + IAVF_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), + IAVF_PTT_UNUSED_ENTRY(77), + IAVF_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), + IAVF_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), + IAVF_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ - AVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - AVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - AVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, 
						PAY4),
-	AVF_PTT_UNUSED_ENTRY(84),
-	AVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
-	AVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
-	AVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+	IAVF_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	IAVF_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	IAVF_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(84),
+	IAVF_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
+	IAVF_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	IAVF_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
 
 	/* Non Tunneled IPv6 */
-	AVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
-	AVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
-	AVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(91),
-	AVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
-	AVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
-	AVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
+	IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
+	IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
+	IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(91),
+	IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
+	IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
+	IAVF_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4),
 
 	/* IPv6 --> IPv4 */
-	AVF_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
-	AVF_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
-	AVF_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(98),
-	AVF_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
-	AVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
-	AVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
+	IAVF_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3),
+	IAVF_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3),
+	IAVF_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(98),
+	IAVF_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4),
+	IAVF_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4),
+	IAVF_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4),
 
 	/* IPv6 --> IPv6 */
-	AVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
-	AVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
-	AVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(105),
-	AVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
-	AVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
-	AVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
+	IAVF_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3),
+	IAVF_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3),
+	IAVF_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(105),
+	IAVF_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4),
+	IAVF_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4),
+	IAVF_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4),
 
 	/* IPv6 --> GRE/NAT */
-	AVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
+	IAVF_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3),
 
 	/* IPv6 --> GRE/NAT -> IPv4 */
-	AVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
-	AVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
-	AVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(113),
-	AVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
-	AVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
-	AVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
+	IAVF_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3),
+	IAVF_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3),
+	IAVF_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(113),
+	IAVF_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4),
+	IAVF_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4),
+	IAVF_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4),
 
 	/* IPv6 --> GRE/NAT -> IPv6 */
-	AVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
-	AVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
-	AVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(120),
-	AVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
-	AVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
-	AVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
+	IAVF_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3),
+	IAVF_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3),
+	IAVF_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(120),
+	IAVF_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4),
+	IAVF_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4),
+	IAVF_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4),
 
 	/* IPv6 --> GRE/NAT -> MAC */
-	AVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
+	IAVF_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3),
 
 	/* IPv6 --> GRE/NAT -> MAC -> IPv4 */
-	AVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
-	AVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
-	AVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(128),
-	AVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
-	AVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
-	AVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
+	IAVF_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3),
+	IAVF_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3),
+	IAVF_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(128),
+	IAVF_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4),
+	IAVF_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4),
+	IAVF_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4),
 
 	/* IPv6 --> GRE/NAT -> MAC -> IPv6 */
-	AVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
-	AVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
-	AVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(135),
-	AVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
-	AVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
-	AVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
+	IAVF_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3),
+	IAVF_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3),
+	IAVF_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(135),
+	IAVF_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4),
+	IAVF_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4),
+	IAVF_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4),
 
 	/* IPv6 --> GRE/NAT -> MAC/VLAN */
-	AVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
+	IAVF_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3),
 
 	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */
-	AVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
-	AVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
-	AVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(143),
-	AVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
-	AVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
-	AVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
+	IAVF_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3),
+	IAVF_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3),
+	IAVF_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(143),
+	IAVF_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4),
+	IAVF_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4),
+	IAVF_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4),
 
 	/* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */
-	AVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
-	AVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
-	AVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
-	AVF_PTT_UNUSED_ENTRY(150),
-	AVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
-	AVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
-	AVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
+	IAVF_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3),
+	IAVF_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3),
+	IAVF_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4),
+	IAVF_PTT_UNUSED_ENTRY(150),
+	IAVF_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4),
+	IAVF_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4),
+	IAVF_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4),
 
 	/* unused entries */
-	AVF_PTT_UNUSED_ENTRY(154),
-	AVF_PTT_UNUSED_ENTRY(155),
-	AVF_PTT_UNUSED_ENTRY(156),
-	AVF_PTT_UNUSED_ENTRY(157),
-	AVF_PTT_UNUSED_ENTRY(158),
-	AVF_PTT_UNUSED_ENTRY(159),
-
-	AVF_PTT_UNUSED_ENTRY(160),
-	AVF_PTT_UNUSED_ENTRY(161),
-	AVF_PTT_UNUSED_ENTRY(162),
-	AVF_PTT_UNUSED_ENTRY(163),
-	AVF_PTT_UNUSED_ENTRY(164),
-	AVF_PTT_UNUSED_ENTRY(165),
-	AVF_PTT_UNUSED_ENTRY(166),
-	AVF_PTT_UNUSED_ENTRY(167),
-	AVF_PTT_UNUSED_ENTRY(168),
-	AVF_PTT_UNUSED_ENTRY(169),
-
-	AVF_PTT_UNUSED_ENTRY(170),
-	AVF_PTT_UNUSED_ENTRY(171),
-	AVF_PTT_UNUSED_ENTRY(172),
-	AVF_PTT_UNUSED_ENTRY(173),
-	AVF_PTT_UNUSED_ENTRY(174),
-	AVF_PTT_UNUSED_ENTRY(175),
-	AVF_PTT_UNUSED_ENTRY(176),
-	AVF_PTT_UNUSED_ENTRY(177),
-	AVF_PTT_UNUSED_ENTRY(178),
-	AVF_PTT_UNUSED_ENTRY(179),
-
-	AVF_PTT_UNUSED_ENTRY(180),
-	AVF_PTT_UNUSED_ENTRY(181),
-	AVF_PTT_UNUSED_ENTRY(182),
-	AVF_PTT_UNUSED_ENTRY(183),
-	AVF_PTT_UNUSED_ENTRY(184),
-	AVF_PTT_UNUSED_ENTRY(185),
-	AVF_PTT_UNUSED_ENTRY(186),
-	AVF_PTT_UNUSED_ENTRY(187),
-	AVF_PTT_UNUSED_ENTRY(188),
-	AVF_PTT_UNUSED_ENTRY(189),
-
-	AVF_PTT_UNUSED_ENTRY(190),
-	AVF_PTT_UNUSED_ENTRY(191),
-	AVF_PTT_UNUSED_ENTRY(192),
-	AVF_PTT_UNUSED_ENTRY(193),
-	AVF_PTT_UNUSED_ENTRY(194),
-	AVF_PTT_UNUSED_ENTRY(195),
-	AVF_PTT_UNUSED_ENTRY(196),
-	AVF_PTT_UNUSED_ENTRY(197),
-	AVF_PTT_UNUSED_ENTRY(198),
-	AVF_PTT_UNUSED_ENTRY(199),
-
-	AVF_PTT_UNUSED_ENTRY(200),
-	AVF_PTT_UNUSED_ENTRY(201),
-	AVF_PTT_UNUSED_ENTRY(202),
-	AVF_PTT_UNUSED_ENTRY(203),
-	AVF_PTT_UNUSED_ENTRY(204),
-	AVF_PTT_UNUSED_ENTRY(205),
-	AVF_PTT_UNUSED_ENTRY(206),
-	AVF_PTT_UNUSED_ENTRY(207),
-	AVF_PTT_UNUSED_ENTRY(208),
-	AVF_PTT_UNUSED_ENTRY(209),
-
-	AVF_PTT_UNUSED_ENTRY(210),
-	AVF_PTT_UNUSED_ENTRY(211),
-	AVF_PTT_UNUSED_ENTRY(212),
-	AVF_PTT_UNUSED_ENTRY(213),
-	AVF_PTT_UNUSED_ENTRY(214),
-	AVF_PTT_UNUSED_ENTRY(215),
-	AVF_PTT_UNUSED_ENTRY(216),
-	AVF_PTT_UNUSED_ENTRY(217),
-	AVF_PTT_UNUSED_ENTRY(218),
-	AVF_PTT_UNUSED_ENTRY(219),
-
-	AVF_PTT_UNUSED_ENTRY(220),
-	AVF_PTT_UNUSED_ENTRY(221),
-	AVF_PTT_UNUSED_ENTRY(222),
-	AVF_PTT_UNUSED_ENTRY(223),
-	AVF_PTT_UNUSED_ENTRY(224),
-	AVF_PTT_UNUSED_ENTRY(225),
-	AVF_PTT_UNUSED_ENTRY(226),
-	AVF_PTT_UNUSED_ENTRY(227),
-	AVF_PTT_UNUSED_ENTRY(228),
-	AVF_PTT_UNUSED_ENTRY(229),
-
-	AVF_PTT_UNUSED_ENTRY(230),
-	AVF_PTT_UNUSED_ENTRY(231),
-	AVF_PTT_UNUSED_ENTRY(232),
-	AVF_PTT_UNUSED_ENTRY(233),
-	AVF_PTT_UNUSED_ENTRY(234),
-	AVF_PTT_UNUSED_ENTRY(235),
-	AVF_PTT_UNUSED_ENTRY(236),
-	AVF_PTT_UNUSED_ENTRY(237),
-	AVF_PTT_UNUSED_ENTRY(238),
-	AVF_PTT_UNUSED_ENTRY(239),
-
-	AVF_PTT_UNUSED_ENTRY(240),
-	AVF_PTT_UNUSED_ENTRY(241),
-	AVF_PTT_UNUSED_ENTRY(242),
-	AVF_PTT_UNUSED_ENTRY(243),
-	AVF_PTT_UNUSED_ENTRY(244),
-	AVF_PTT_UNUSED_ENTRY(245),
-	AVF_PTT_UNUSED_ENTRY(246),
-	AVF_PTT_UNUSED_ENTRY(247),
-	AVF_PTT_UNUSED_ENTRY(248),
-	AVF_PTT_UNUSED_ENTRY(249),
-
-	AVF_PTT_UNUSED_ENTRY(250),
-	AVF_PTT_UNUSED_ENTRY(251),
-	AVF_PTT_UNUSED_ENTRY(252),
-	AVF_PTT_UNUSED_ENTRY(253),
-	AVF_PTT_UNUSED_ENTRY(254),
-	AVF_PTT_UNUSED_ENTRY(255)
+	IAVF_PTT_UNUSED_ENTRY(154),
+	IAVF_PTT_UNUSED_ENTRY(155),
+	IAVF_PTT_UNUSED_ENTRY(156),
+	IAVF_PTT_UNUSED_ENTRY(157),
+	IAVF_PTT_UNUSED_ENTRY(158),
+	IAVF_PTT_UNUSED_ENTRY(159),
+
+	IAVF_PTT_UNUSED_ENTRY(160),
+	IAVF_PTT_UNUSED_ENTRY(161),
+	IAVF_PTT_UNUSED_ENTRY(162),
+	IAVF_PTT_UNUSED_ENTRY(163),
+	IAVF_PTT_UNUSED_ENTRY(164),
+	IAVF_PTT_UNUSED_ENTRY(165),
+	IAVF_PTT_UNUSED_ENTRY(166),
+	IAVF_PTT_UNUSED_ENTRY(167),
+	IAVF_PTT_UNUSED_ENTRY(168),
+	IAVF_PTT_UNUSED_ENTRY(169),
+
+	IAVF_PTT_UNUSED_ENTRY(170),
+	IAVF_PTT_UNUSED_ENTRY(171),
+	IAVF_PTT_UNUSED_ENTRY(172),
+	IAVF_PTT_UNUSED_ENTRY(173),
+	IAVF_PTT_UNUSED_ENTRY(174),
+	IAVF_PTT_UNUSED_ENTRY(175),
+	IAVF_PTT_UNUSED_ENTRY(176),
+	IAVF_PTT_UNUSED_ENTRY(177),
+	IAVF_PTT_UNUSED_ENTRY(178),
+	IAVF_PTT_UNUSED_ENTRY(179),
+
+	IAVF_PTT_UNUSED_ENTRY(180),
+	IAVF_PTT_UNUSED_ENTRY(181),
+	IAVF_PTT_UNUSED_ENTRY(182),
+	IAVF_PTT_UNUSED_ENTRY(183),
+	IAVF_PTT_UNUSED_ENTRY(184),
+	IAVF_PTT_UNUSED_ENTRY(185),
+	IAVF_PTT_UNUSED_ENTRY(186),
+	IAVF_PTT_UNUSED_ENTRY(187),
+	IAVF_PTT_UNUSED_ENTRY(188),
+	IAVF_PTT_UNUSED_ENTRY(189),
+
+	IAVF_PTT_UNUSED_ENTRY(190),
+	IAVF_PTT_UNUSED_ENTRY(191),
+	IAVF_PTT_UNUSED_ENTRY(192),
+	IAVF_PTT_UNUSED_ENTRY(193),
+	IAVF_PTT_UNUSED_ENTRY(194),
+	IAVF_PTT_UNUSED_ENTRY(195),
+	IAVF_PTT_UNUSED_ENTRY(196),
+	IAVF_PTT_UNUSED_ENTRY(197),
+	IAVF_PTT_UNUSED_ENTRY(198),
+	IAVF_PTT_UNUSED_ENTRY(199),
+
+	IAVF_PTT_UNUSED_ENTRY(200),
+	IAVF_PTT_UNUSED_ENTRY(201),
+	IAVF_PTT_UNUSED_ENTRY(202),
+	IAVF_PTT_UNUSED_ENTRY(203),
+	IAVF_PTT_UNUSED_ENTRY(204),
+	IAVF_PTT_UNUSED_ENTRY(205),
+	IAVF_PTT_UNUSED_ENTRY(206),
+	IAVF_PTT_UNUSED_ENTRY(207),
+	IAVF_PTT_UNUSED_ENTRY(208),
+	IAVF_PTT_UNUSED_ENTRY(209),
+
+	IAVF_PTT_UNUSED_ENTRY(210),
+	IAVF_PTT_UNUSED_ENTRY(211),
+	IAVF_PTT_UNUSED_ENTRY(212),
+	IAVF_PTT_UNUSED_ENTRY(213),
+	IAVF_PTT_UNUSED_ENTRY(214),
+	IAVF_PTT_UNUSED_ENTRY(215),
+	IAVF_PTT_UNUSED_ENTRY(216),
+	IAVF_PTT_UNUSED_ENTRY(217),
+	IAVF_PTT_UNUSED_ENTRY(218),
+	IAVF_PTT_UNUSED_ENTRY(219),
+
+	IAVF_PTT_UNUSED_ENTRY(220),
+	IAVF_PTT_UNUSED_ENTRY(221),
+	IAVF_PTT_UNUSED_ENTRY(222),
+	IAVF_PTT_UNUSED_ENTRY(223),
+	IAVF_PTT_UNUSED_ENTRY(224),
+	IAVF_PTT_UNUSED_ENTRY(225),
+	IAVF_PTT_UNUSED_ENTRY(226),
+	IAVF_PTT_UNUSED_ENTRY(227),
+	IAVF_PTT_UNUSED_ENTRY(228),
+	IAVF_PTT_UNUSED_ENTRY(229),
+
+	IAVF_PTT_UNUSED_ENTRY(230),
+	IAVF_PTT_UNUSED_ENTRY(231),
+	IAVF_PTT_UNUSED_ENTRY(232),
+	IAVF_PTT_UNUSED_ENTRY(233),
+	IAVF_PTT_UNUSED_ENTRY(234),
+	IAVF_PTT_UNUSED_ENTRY(235),
+	IAVF_PTT_UNUSED_ENTRY(236),
+	IAVF_PTT_UNUSED_ENTRY(237),
+	IAVF_PTT_UNUSED_ENTRY(238),
+	IAVF_PTT_UNUSED_ENTRY(239),
+
+	IAVF_PTT_UNUSED_ENTRY(240),
+	IAVF_PTT_UNUSED_ENTRY(241),
+	IAVF_PTT_UNUSED_ENTRY(242),
+	IAVF_PTT_UNUSED_ENTRY(243),
+	IAVF_PTT_UNUSED_ENTRY(244),
+	IAVF_PTT_UNUSED_ENTRY(245),
+	IAVF_PTT_UNUSED_ENTRY(246),
+	IAVF_PTT_UNUSED_ENTRY(247),
+	IAVF_PTT_UNUSED_ENTRY(248),
+	IAVF_PTT_UNUSED_ENTRY(249),
+
+	IAVF_PTT_UNUSED_ENTRY(250),
+	IAVF_PTT_UNUSED_ENTRY(251),
+	IAVF_PTT_UNUSED_ENTRY(252),
+	IAVF_PTT_UNUSED_ENTRY(253),
+	IAVF_PTT_UNUSED_ENTRY(254),
+	IAVF_PTT_UNUSED_ENTRY(255)
 };
 
 /**
- * avf_validate_mac_addr - Validate unicast MAC address
+ * iavf_validate_mac_addr - Validate unicast MAC address
  * @mac_addr: pointer to MAC address
  *
  * Tests a MAC address to ensure it is a valid Individual Address
  **/
-enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
+enum iavf_status_code iavf_validate_mac_addr(u8 *mac_addr)
 {
-	enum avf_status_code status = AVF_SUCCESS;
+	enum iavf_status_code status = IAVF_SUCCESS;
 
-	DEBUGFUNC("avf_validate_mac_addr");
+	DEBUGFUNC("iavf_validate_mac_addr");
 
 	/* Broadcast addresses ARE multicast addresses
	 * Make sure it is not a multicast address
	 * Reject the zero address
	 */
-	if (AVF_IS_MULTICAST(mac_addr) ||
+	if (IAVF_IS_MULTICAST(mac_addr) ||
	    (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
	      mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0))
-		status = AVF_ERR_INVALID_MAC_ADDR;
+		status = IAVF_ERR_INVALID_MAC_ADDR;
 
 	return status;
 }
 
 /**
- * avf_aq_rx_ctl_read_register - use FW to read from an Rx control register
+ * iavf_aq_rx_ctl_read_register - use FW to read from an Rx control register
  * @hw: pointer to the hw struct
  * @reg_addr: register address
  * @reg_val: ptr to register value
@@ -953,50 +953,50 @@ enum avf_status_code avf_validate_mac_addr(u8 *mac_addr)
  * Use the firmware to read the Rx control register,
  * especially useful if the Rx unit is under heavy pressure
  **/
-enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw,
+enum iavf_status_code iavf_aq_rx_ctl_read_register(struct iavf_hw *hw,
				u32 reg_addr, u32 *reg_val,
-				struct avf_asq_cmd_details *cmd_details)
+				struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_rx_ctl_reg_read_write *cmd_resp =
-		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_rx_ctl_reg_read_write *cmd_resp =
+		(struct iavf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum iavf_status_code status;
 
 	if (reg_val == NULL)
-		return AVF_ERR_PARAM;
+		return IAVF_ERR_PARAM;
 
-	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_read);
+	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_rx_ctl_reg_read);
 
 	cmd_resp->address = CPU_TO_LE32(reg_addr);
 
-	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
-	if (status == AVF_SUCCESS)
+	if (status == IAVF_SUCCESS)
		*reg_val = LE32_TO_CPU(cmd_resp->value);
 
 	return status;
 }
 
 /**
- * avf_read_rx_ctl - read from an Rx control register
+ * iavf_read_rx_ctl - read from an Rx control register
  * @hw: pointer to the hw struct
  * @reg_addr: register address
  **/
-u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
+u32 iavf_read_rx_ctl(struct iavf_hw *hw, u32 reg_addr)
 {
-	enum avf_status_code status = AVF_SUCCESS;
+	enum iavf_status_code status = IAVF_SUCCESS;
 	bool use_register;
 	int retry = 5;
 	u32 val = 0;
 
 	use_register = (((hw->aq.api_maj_ver == 1) &&
			 (hw->aq.api_min_ver < 5)) ||
-			(hw->mac.type == AVF_MAC_X722));
+			(hw->mac.type == IAVF_MAC_X722));
 	if (!use_register) {
 do_retry:
-		status = avf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
-		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
-			avf_msec_delay(1);
+		status = iavf_aq_rx_ctl_read_register(hw, reg_addr, &val, NULL);
+		if (hw->aq.asq_last_status == IAVF_AQ_RC_EAGAIN && retry) {
+			iavf_msec_delay(1);
			retry--;
			goto do_retry;
		}
@@ -1010,7 +1010,7 @@ u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
 }
 
 /**
- * avf_aq_rx_ctl_write_register
+ * iavf_aq_rx_ctl_write_register
  * @hw: pointer to the hw struct
  * @reg_addr: register address
  * @reg_val: register value
@@ -1019,46 +1019,46 @@ u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr)
  * Use the firmware to write to an Rx control register,
  * especially useful if the Rx unit is under heavy pressure
  **/
-enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw,
+enum iavf_status_code iavf_aq_rx_ctl_write_register(struct iavf_hw *hw,
				u32 reg_addr, u32 reg_val,
-				struct avf_asq_cmd_details *cmd_details)
+				struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_rx_ctl_reg_read_write *cmd =
-		(struct avf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_rx_ctl_reg_read_write *cmd =
+		(struct iavf_aqc_rx_ctl_reg_read_write *)&desc.params.raw;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_rx_ctl_reg_write);
+	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_rx_ctl_reg_write);
 
 	cmd->address = CPU_TO_LE32(reg_addr);
 	cmd->value = CPU_TO_LE32(reg_val);
 
-	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
 	return status;
 }
 
 /**
- * avf_write_rx_ctl - write to an Rx control register
+ * iavf_write_rx_ctl - write to an Rx control register
  * @hw: pointer to the hw struct
  * @reg_addr: register address
  * @reg_val: register value
  **/
-void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
+void iavf_write_rx_ctl(struct iavf_hw *hw, u32 reg_addr, u32 reg_val)
 {
-	enum avf_status_code status = AVF_SUCCESS;
+	enum iavf_status_code status = IAVF_SUCCESS;
 	bool use_register;
 	int retry = 5;
 
 	use_register = (((hw->aq.api_maj_ver == 1) &&
			 (hw->aq.api_min_ver < 5)) ||
-			(hw->mac.type == AVF_MAC_X722));
+			(hw->mac.type == IAVF_MAC_X722));
 	if (!use_register) {
 do_retry:
-		status = avf_aq_rx_ctl_write_register(hw, reg_addr,
+		status = iavf_aq_rx_ctl_write_register(hw, reg_addr,
						      reg_val, NULL);
-		if (hw->aq.asq_last_status == AVF_AQ_RC_EAGAIN && retry) {
-			avf_msec_delay(1);
+		if (hw->aq.asq_last_status == IAVF_AQ_RC_EAGAIN && retry) {
+			iavf_msec_delay(1);
			retry--;
			goto do_retry;
		}
@@ -1070,7 +1070,7 @@ void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
 }
 
 /**
- * avf_aq_set_phy_register
+ * iavf_aq_set_phy_register
  * @hw: pointer to the hw struct
  * @phy_select: select which phy should be accessed
  * @dev_addr: PHY device address
@@ -1080,31 +1080,31 @@ void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val)
  *
  * Write the external PHY register.
  **/
-enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
+enum iavf_status_code iavf_aq_set_phy_register(struct iavf_hw *hw,
				u8 phy_select, u8 dev_addr,
				u32 reg_addr, u32 reg_val,
-				struct avf_asq_cmd_details *cmd_details)
+				struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_phy_register_access *cmd =
-		(struct avf_aqc_phy_register_access *)&desc.params.raw;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_phy_register_access *cmd =
+		(struct iavf_aqc_phy_register_access *)&desc.params.raw;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc,
-					 avf_aqc_opc_set_phy_register);
+	iavf_fill_default_direct_cmd_desc(&desc,
					  iavf_aqc_opc_set_phy_register);
 
 	cmd->phy_interface = phy_select;
 	cmd->dev_addres = dev_addr;
 	cmd->reg_address = CPU_TO_LE32(reg_addr);
 	cmd->reg_value = CPU_TO_LE32(reg_val);
 
-	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
 	return status;
 }
 
 /**
- * avf_aq_get_phy_register
+ * iavf_aq_get_phy_register
  * @hw: pointer to the hw struct
  * @phy_select: select which phy should be accessed
  * @dev_addr: PHY device address
@@ -1114,24 +1114,24 @@ enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw,
  *
  * Read the external PHY register.
  **/
-enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
+enum iavf_status_code iavf_aq_get_phy_register(struct iavf_hw *hw,
				u8 phy_select, u8 dev_addr,
				u32 reg_addr, u32 *reg_val,
-				struct avf_asq_cmd_details *cmd_details)
+				struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_phy_register_access *cmd =
-		(struct avf_aqc_phy_register_access *)&desc.params.raw;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_phy_register_access *cmd =
+		(struct iavf_aqc_phy_register_access *)&desc.params.raw;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc,
-					 avf_aqc_opc_get_phy_register);
+	iavf_fill_default_direct_cmd_desc(&desc,
					  iavf_aqc_opc_get_phy_register);
 
 	cmd->phy_interface = phy_select;
 	cmd->dev_addres = dev_addr;
 	cmd->reg_address = CPU_TO_LE32(reg_addr);
 
-	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 	if (!status)
		*reg_val = LE32_TO_CPU(cmd->reg_value);
 
@@ -1140,7 +1140,7 @@ enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
 
 /**
- * avf_aq_send_msg_to_pf
+ * iavf_aq_send_msg_to_pf
  * @hw: pointer to the hardware structure
  * @v_opcode: opcodes for VF-PF communication
 * @v_retval: return error code
@@ -1149,49 +1149,49 @@ enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw,
  * @cmd_details: pointer to command details
  *
  * Send message to PF driver using admin queue. By default, this message
- * is sent asynchronously, i.e. avf_asq_send_command() does not wait for
+ * is sent asynchronously, i.e. iavf_asq_send_command() does not wait for
  * completion before returning.
  **/
-enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw,
+enum iavf_status_code iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
				enum virtchnl_ops v_opcode,
-				enum avf_status_code v_retval,
+				enum iavf_status_code v_retval,
				u8 *msg, u16 msglen,
-				struct avf_asq_cmd_details *cmd_details)
+				struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_asq_cmd_details details;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_asq_cmd_details details;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_send_msg_to_pf);
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_SI);
+	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_send_msg_to_pf);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_SI);
 	desc.cookie_high = CPU_TO_LE32(v_opcode);
 	desc.cookie_low = CPU_TO_LE32(v_retval);
 	if (msglen) {
-		desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF
-						| AVF_AQ_FLAG_RD));
-		if (msglen > AVF_AQ_LARGE_BUF)
-			desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+		desc.flags |= CPU_TO_LE16((u16)(IAVF_AQ_FLAG_BUF
						| IAVF_AQ_FLAG_RD));
+		if (msglen > IAVF_AQ_LARGE_BUF)
+			desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_LB);
		desc.datalen = CPU_TO_LE16(msglen);
	}
	if (!cmd_details) {
-		avf_memset(&details, 0, sizeof(details), AVF_NONDMA_MEM);
+		iavf_memset(&details, 0, sizeof(details), IAVF_NONDMA_MEM);
		details.async = true;
		cmd_details = &details;
	}
-	status = avf_asq_send_command(hw, (struct avf_aq_desc *)&desc, msg,
+	status = iavf_asq_send_command(hw, (struct iavf_aq_desc *)&desc, msg,
				      msglen, cmd_details);
	return status;
 }
 
 /**
- * avf_parse_hw_config
+ * iavf_parse_hw_config
  * @hw: pointer to the hardware structure
  * @msg: pointer to the virtual channel VF resource structure
  *
  * Given a VF resource message from the PF, populate the hw struct
  * with appropriate information.
  **/
-void avf_parse_hw_config(struct avf_hw *hw,
+void iavf_parse_hw_config(struct iavf_hw *hw,
			 struct virtchnl_vf_resource *msg)
 {
 	struct virtchnl_vsi_resource *vsi_res;
@@ -1209,107 +1209,107 @@ void avf_parse_hw_config(struct avf_hw *hw,
			     VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0;
 	for (i = 0; i < msg->num_vsis; i++) {
		if (vsi_res->vsi_type == VIRTCHNL_VSI_SRIOV) {
-			avf_memcpy(hw->mac.perm_addr,
+			iavf_memcpy(hw->mac.perm_addr,
				   vsi_res->default_mac_addr,
				   ETH_ALEN,
-				   AVF_NONDMA_TO_NONDMA);
-			avf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
+				   IAVF_NONDMA_TO_NONDMA);
+			iavf_memcpy(hw->mac.addr, vsi_res->default_mac_addr,
				   ETH_ALEN,
-				   AVF_NONDMA_TO_NONDMA);
+				   IAVF_NONDMA_TO_NONDMA);
		}
		vsi_res++;
	}
 }
 
 /**
- * avf_reset
+ * iavf_reset
  * @hw: pointer to the hardware structure
  *
  * Send a VF_RESET message to the PF. Does not wait for response from PF
  * as none will be forthcoming. Immediately after calling this function,
  * the admin queue should be shut down and (optionally) reinitialized.
  **/
-enum avf_status_code avf_reset(struct avf_hw *hw)
+enum iavf_status_code iavf_reset(struct iavf_hw *hw)
 {
-	return avf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
-				     AVF_SUCCESS, NULL, 0, NULL);
+	return iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
				      IAVF_SUCCESS, NULL, 0, NULL);
 }
 
 /**
- * avf_aq_set_arp_proxy_config
+ * iavf_aq_set_arp_proxy_config
  * @hw: pointer to the HW structure
  * @proxy_config: pointer to proxy config command table struct
  * @cmd_details: pointer to command details
  *
  * Set ARP offload parameters from pre-populated
- * avf_aqc_arp_proxy_data struct
+ * iavf_aqc_arp_proxy_data struct
  **/
-enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw,
-			struct avf_aqc_arp_proxy_data *proxy_config,
-			struct avf_asq_cmd_details *cmd_details)
+enum iavf_status_code iavf_aq_set_arp_proxy_config(struct iavf_hw *hw,
			struct iavf_aqc_arp_proxy_data *proxy_config,
			struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	enum iavf_status_code status;
 
 	if (!proxy_config)
-		return AVF_ERR_PARAM;
+		return IAVF_ERR_PARAM;
 
-	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_proxy_config);
+	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_set_proxy_config);
 
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);
 	desc.params.external.addr_high =
-		CPU_TO_LE32(AVF_HI_DWORD((u64)proxy_config));
+		CPU_TO_LE32(IAVF_HI_DWORD((u64)proxy_config));
 	desc.params.external.addr_low =
-		CPU_TO_LE32(AVF_LO_DWORD((u64)proxy_config));
-	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_arp_proxy_data));
+		CPU_TO_LE32(IAVF_LO_DWORD((u64)proxy_config));
+	desc.datalen = CPU_TO_LE16(sizeof(struct iavf_aqc_arp_proxy_data));
 
-	status = avf_asq_send_command(hw, &desc, proxy_config,
-				      sizeof(struct avf_aqc_arp_proxy_data),
+	status = iavf_asq_send_command(hw, &desc, proxy_config,
				       sizeof(struct iavf_aqc_arp_proxy_data),
				      cmd_details);
 
 	return status;
 }
 
 /**
- * avf_aq_opc_set_ns_proxy_table_entry
+ * iavf_aq_opc_set_ns_proxy_table_entry
  * @hw: pointer to the HW structure
  * @ns_proxy_table_entry: pointer to NS table entry command struct
  * @cmd_details: pointer to command details
  *
  * Set IPv6 Neighbor Solicitation (NS) protocol offload parameters
- * from pre-populated avf_aqc_ns_proxy_data struct
+ * from pre-populated iavf_aqc_ns_proxy_data struct
  **/
-enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
-			struct avf_aqc_ns_proxy_data *ns_proxy_table_entry,
-			struct avf_asq_cmd_details *cmd_details)
+enum iavf_status_code iavf_aq_set_ns_proxy_table_entry(struct iavf_hw *hw,
			struct iavf_aqc_ns_proxy_data *ns_proxy_table_entry,
			struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	enum iavf_status_code status;
 
 	if (!ns_proxy_table_entry)
-		return AVF_ERR_PARAM;
+		return IAVF_ERR_PARAM;
 
-	avf_fill_default_direct_cmd_desc(&desc,
-					 avf_aqc_opc_set_ns_proxy_table_entry);
+	iavf_fill_default_direct_cmd_desc(&desc,
					  iavf_aqc_opc_set_ns_proxy_table_entry);
 
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);
 	desc.params.external.addr_high =
-		CPU_TO_LE32(AVF_HI_DWORD((u64)ns_proxy_table_entry));
+		CPU_TO_LE32(IAVF_HI_DWORD((u64)ns_proxy_table_entry));
 	desc.params.external.addr_low =
-		CPU_TO_LE32(AVF_LO_DWORD((u64)ns_proxy_table_entry));
-	desc.datalen = CPU_TO_LE16(sizeof(struct avf_aqc_ns_proxy_data));
+		CPU_TO_LE32(IAVF_LO_DWORD((u64)ns_proxy_table_entry));
+	desc.datalen = CPU_TO_LE16(sizeof(struct iavf_aqc_ns_proxy_data));
 
-	status = avf_asq_send_command(hw, &desc, ns_proxy_table_entry,
-				      sizeof(struct avf_aqc_ns_proxy_data),
+	status = iavf_asq_send_command(hw, &desc, ns_proxy_table_entry,
				       sizeof(struct iavf_aqc_ns_proxy_data),
				      cmd_details);
 
 	return status;
 }
 
 /**
- * avf_aq_set_clear_wol_filter
+ * iavf_aq_set_clear_wol_filter
  * @hw: pointer to the hw struct
  * @filter_index: index of filter to modify (0-7)
  * @filter: buffer containing filter to be set
@@ -1322,110 +1322,110 @@ enum avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw,
  *
  * Set or clear WoL filter for port attached to the PF
  **/
-enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw,
+enum iavf_status_code iavf_aq_set_clear_wol_filter(struct iavf_hw *hw,
				u8 filter_index,
-				struct avf_aqc_set_wol_filter_data *filter,
+				struct iavf_aqc_set_wol_filter_data *filter,
				bool set_filter, bool no_wol_tco,
				bool filter_valid, bool no_wol_tco_valid,
-				struct avf_asq_cmd_details *cmd_details)
+				struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_set_wol_filter *cmd =
-		(struct avf_aqc_set_wol_filter *)&desc.params.raw;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_set_wol_filter *cmd =
+		(struct iavf_aqc_set_wol_filter *)&desc.params.raw;
+	enum iavf_status_code status;
 	u16 cmd_flags = 0;
 	u16 valid_flags = 0;
 	u16 buff_len = 0;
 
-	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_set_wol_filter);
+	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_set_wol_filter);
 
-	if (filter_index >= AVF_AQC_MAX_NUM_WOL_FILTERS)
-		return AVF_ERR_PARAM;
+	if (filter_index >= IAVF_AQC_MAX_NUM_WOL_FILTERS)
+		return IAVF_ERR_PARAM;
 	cmd->filter_index = CPU_TO_LE16(filter_index);
 
 	if (set_filter) {
		if (!filter)
-			return AVF_ERR_PARAM;
+			return IAVF_ERR_PARAM;
 
-		cmd_flags |= AVF_AQC_SET_WOL_FILTER;
-		cmd_flags |= AVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
+		cmd_flags |= IAVF_AQC_SET_WOL_FILTER;
+		cmd_flags |= IAVF_AQC_SET_WOL_FILTER_WOL_PRESERVE_ON_PFR;
	}
 
	if (no_wol_tco)
-		cmd_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
+		cmd_flags |= IAVF_AQC_SET_WOL_FILTER_NO_TCO_WOL;
	cmd->cmd_flags = CPU_TO_LE16(cmd_flags);
 
	if (filter_valid)
-		valid_flags |= AVF_AQC_SET_WOL_FILTER_ACTION_VALID;
+		valid_flags |= IAVF_AQC_SET_WOL_FILTER_ACTION_VALID;
	if (no_wol_tco_valid)
-		valid_flags |= AVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
+		valid_flags |= IAVF_AQC_SET_WOL_FILTER_NO_TCO_ACTION_VALID;
	cmd->valid_flags = CPU_TO_LE16(valid_flags);
 
	buff_len = sizeof(*filter);
	desc.datalen = CPU_TO_LE16(buff_len);
 
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_RD);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_RD);
 
-	cmd->address_high = CPU_TO_LE32(AVF_HI_DWORD((u64)filter));
-	cmd->address_low = CPU_TO_LE32(AVF_LO_DWORD((u64)filter));
+	cmd->address_high = CPU_TO_LE32(IAVF_HI_DWORD((u64)filter));
+	cmd->address_low = CPU_TO_LE32(IAVF_LO_DWORD((u64)filter));
 
-	status = avf_asq_send_command(hw, &desc, filter,
+	status = iavf_asq_send_command(hw, &desc, filter,
				      buff_len, cmd_details);
 
	return status;
 }
 
 /**
- * avf_aq_get_wake_event_reason
+ * iavf_aq_get_wake_event_reason
  * @hw: pointer to the hw struct
  * @wake_reason: return value, index of matching filter
  * @cmd_details: pointer to command details structure or NULL
  *
  * Get information for the reason of a Wake Up event
  **/
-enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw,
+enum iavf_status_code iavf_aq_get_wake_event_reason(struct iavf_hw *hw,
				u16 *wake_reason,
-				struct avf_asq_cmd_details *cmd_details)
+				struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_get_wake_reason_completion *resp =
-		(struct avf_aqc_get_wake_reason_completion *)&desc.params.raw;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_get_wake_reason_completion *resp =
+		(struct iavf_aqc_get_wake_reason_completion *)&desc.params.raw;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc, avf_aqc_opc_get_wake_reason);
+	iavf_fill_default_direct_cmd_desc(&desc, iavf_aqc_opc_get_wake_reason);
 
-	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
-	if (status == AVF_SUCCESS)
+	if (status == IAVF_SUCCESS)
		*wake_reason = LE16_TO_CPU(resp->wake_reason);
 
	return status;
 }
 
 /**
-* avf_aq_clear_all_wol_filters
+* iavf_aq_clear_all_wol_filters
 * @hw: pointer to the hw struct
 * @cmd_details: pointer to command details structure or NULL
 *
 * Get information for the reason of a Wake Up event
 **/
-enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
-			struct avf_asq_cmd_details *cmd_details)
+enum iavf_status_code iavf_aq_clear_all_wol_filters(struct iavf_hw *hw,
			struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc,
-					 avf_aqc_opc_clear_all_wol_filters);
+	iavf_fill_default_direct_cmd_desc(&desc,
					  iavf_aqc_opc_clear_all_wol_filters);
 
-	status = avf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+	status = iavf_asq_send_command(hw, &desc, NULL, 0, cmd_details);
 
	return status;
 }
 
 /**
- * avf_aq_write_ddp - Write dynamic device personalization (ddp)
+ * iavf_aq_write_ddp - Write dynamic device personalization (ddp)
  * @hw: pointer to the hw struct
  * @buff: command buffer (size in bytes = buff_size)
  * @buff_size: buffer size in bytes
@@ -1435,32 +1435,32 @@ enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw,
  * @cmd_details: pointer to command details structure or NULL
  **/
 enum
-avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
+iavf_status_code iavf_aq_write_ddp(struct iavf_hw *hw, void *buff,
				 u16 buff_size, u32 track_id,
				 u32 *error_offset, u32 *error_info,
-				 struct avf_asq_cmd_details *cmd_details)
+				 struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_write_personalization_profile *cmd =
-		(struct avf_aqc_write_personalization_profile *)
			&desc.params.raw;
-	struct avf_aqc_write_ddp_resp *resp;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_write_personalization_profile *cmd =
+		(struct iavf_aqc_write_personalization_profile *)
			&desc.params.raw;
+	struct iavf_aqc_write_ddp_resp *resp;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc,
-				  avf_aqc_opc_write_personalization_profile);
+	iavf_fill_default_direct_cmd_desc(&desc,
				  iavf_aqc_opc_write_personalization_profile);
 
-	desc.flags |= CPU_TO_LE16(AVF_AQ_FLAG_BUF | AVF_AQ_FLAG_RD);
-	if (buff_size > AVF_AQ_LARGE_BUF)
-		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.flags |= CPU_TO_LE16(IAVF_AQ_FLAG_BUF | IAVF_AQ_FLAG_RD);
+	if (buff_size > IAVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_LB);
 
	desc.datalen = CPU_TO_LE16(buff_size);
 
	cmd->profile_track_id = CPU_TO_LE32(track_id);
 
-	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	status = iavf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
	if (!status) {
-		resp = (struct avf_aqc_write_ddp_resp *)&desc.params.raw;
+		resp = (struct iavf_aqc_write_ddp_resp *)&desc.params.raw;
		if (error_offset)
			*error_offset = LE32_TO_CPU(resp->error_offset);
		if (error_info)
@@ -1471,7 +1471,7 @@ avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
 }
 
 /**
- * avf_aq_get_ddp_list - Read dynamic device personalization (ddp)
+ * iavf_aq_get_ddp_list - Read dynamic device personalization (ddp)
  * @hw: pointer to the hw struct
  * @buff: command buffer (size in bytes = buff_size)
  * @buff_size: buffer size in bytes
@@ -1479,50 +1479,50 @@ avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff,
  * @cmd_details: pointer to command details structure or NULL
  **/
 enum
-avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff,
+iavf_status_code iavf_aq_get_ddp_list(struct iavf_hw *hw, void *buff,
				    u16 buff_size, u8 flags,
-				    struct avf_asq_cmd_details *cmd_details)
+				    struct iavf_asq_cmd_details *cmd_details)
 {
-	struct avf_aq_desc desc;
-	struct avf_aqc_get_applied_profiles *cmd =
-		(struct avf_aqc_get_applied_profiles *)&desc.params.raw;
-	enum avf_status_code status;
+	struct iavf_aq_desc desc;
+	struct iavf_aqc_get_applied_profiles *cmd =
+		(struct iavf_aqc_get_applied_profiles *)&desc.params.raw;
+	enum iavf_status_code status;
 
-	avf_fill_default_direct_cmd_desc(&desc,
-			avf_aqc_opc_get_personalization_profile_list);
+	iavf_fill_default_direct_cmd_desc(&desc,
			iavf_aqc_opc_get_personalization_profile_list);
 
-	desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_BUF);
-	if (buff_size > AVF_AQ_LARGE_BUF)
-		desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB);
+	desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_BUF);
+	if (buff_size > IAVF_AQ_LARGE_BUF)
+		desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_LB);
 	desc.datalen = CPU_TO_LE16(buff_size);
 
 	cmd->flags = flags;
 
-	status = avf_asq_send_command(hw, &desc, buff, buff_size, cmd_details);
+	status =
iavf_asq_send_command(hw, &desc, buff, buff_size, cmd_details); return status; } /** - * avf_find_segment_in_package - * @segment_type: the segment type to search for (i.e., SEGMENT_TYPE_AVF) + * iavf_find_segment_in_package + * @segment_type: the segment type to search for (i.e., SEGMENT_TYPE_IAVF) * @pkg_hdr: pointer to the package header to be searched * * This function searches a package file for a particular segment type. On * success it returns a pointer to the segment header, otherwise it will * return NULL. **/ -struct avf_generic_seg_header * -avf_find_segment_in_package(u32 segment_type, - struct avf_package_header *pkg_hdr) +struct iavf_generic_seg_header * +iavf_find_segment_in_package(u32 segment_type, + struct iavf_package_header *pkg_hdr) { - struct avf_generic_seg_header *segment; + struct iavf_generic_seg_header *segment; u32 i; /* Search all package segments for the requested segment type */ for (i = 0; i < pkg_hdr->segment_count; i++) { segment = - (struct avf_generic_seg_header *)((u8 *)pkg_hdr + + (struct iavf_generic_seg_header *)((u8 *)pkg_hdr + pkg_hdr->segment_offset[i]); if (segment->type == segment_type) @@ -1533,46 +1533,46 @@ avf_find_segment_in_package(u32 segment_type, } /* Get section table in profile */ -#define AVF_SECTION_TABLE(profile, sec_tbl) \ +#define IAVF_SECTION_TABLE(profile, sec_tbl) \ do { \ - struct avf_profile_segment *p = (profile); \ + struct iavf_profile_segment *p = (profile); \ u32 count; \ u32 *nvm; \ count = p->device_table_count; \ nvm = (u32 *)&p->device_table[count]; \ - sec_tbl = (struct avf_section_table *)&nvm[nvm[0] + 1]; \ + sec_tbl = (struct iavf_section_table *)&nvm[nvm[0] + 1]; \ } while (0) /* Get section header in profile */ -#define AVF_SECTION_HEADER(profile, offset) \ - (struct avf_profile_section_header *)((u8 *)(profile) + (offset)) +#define IAVF_SECTION_HEADER(profile, offset) \ + (struct iavf_profile_section_header *)((u8 *)(profile) + (offset)) /** - * avf_find_section_in_profile + * 
iavf_find_section_in_profile * @section_type: the section type to search for (i.e., SECTION_TYPE_NOTE) - * @profile: pointer to the avf segment header to be searched + * @profile: pointer to the iavf segment header to be searched * - * This function searches avf segment for a particular section type. On + * This function searches iavf segment for a particular section type. On * success it returns a pointer to the section header, otherwise it will * return NULL. **/ -struct avf_profile_section_header * -avf_find_section_in_profile(u32 section_type, - struct avf_profile_segment *profile) +struct iavf_profile_section_header * +iavf_find_section_in_profile(u32 section_type, + struct iavf_profile_segment *profile) { - struct avf_profile_section_header *sec; - struct avf_section_table *sec_tbl; + struct iavf_profile_section_header *sec; + struct iavf_section_table *sec_tbl; u32 sec_off; u32 i; - if (profile->header.type != SEGMENT_TYPE_AVF) + if (profile->header.type != SEGMENT_TYPE_IAVF) return NULL; - AVF_SECTION_TABLE(profile, sec_tbl); + IAVF_SECTION_TABLE(profile, sec_tbl); for (i = 0; i < sec_tbl->section_count; i++) { sec_off = sec_tbl->section_offset[i]; - sec = AVF_SECTION_HEADER(profile, sec_off); + sec = IAVF_SECTION_HEADER(profile, sec_off); if (sec->section.type == section_type) return sec; } @@ -1581,52 +1581,52 @@ avf_find_section_in_profile(u32 section_type, } /** - * avf_ddp_exec_aq_section - Execute generic AQ for DDP + * iavf_ddp_exec_aq_section - Execute generic AQ for DDP * @hw: pointer to the hw struct * @aq: command buffer containing all data to execute AQ **/ STATIC enum -avf_status_code avf_ddp_exec_aq_section(struct avf_hw *hw, - struct avf_profile_aq_section *aq) +iavf_status_code iavf_ddp_exec_aq_section(struct iavf_hw *hw, + struct iavf_profile_aq_section *aq) { - enum avf_status_code status; - struct avf_aq_desc desc; + enum iavf_status_code status; + struct iavf_aq_desc desc; u8 *msg = NULL; u16 msglen; - 
avf_fill_default_direct_cmd_desc(&desc, aq->opcode); + iavf_fill_default_direct_cmd_desc(&desc, aq->opcode); desc.flags |= CPU_TO_LE16(aq->flags); - avf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw), - AVF_NONDMA_TO_NONDMA); + iavf_memcpy(desc.params.raw, aq->param, sizeof(desc.params.raw), + IAVF_NONDMA_TO_NONDMA); msglen = aq->datalen; if (msglen) { - desc.flags |= CPU_TO_LE16((u16)(AVF_AQ_FLAG_BUF | - AVF_AQ_FLAG_RD)); - if (msglen > AVF_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)AVF_AQ_FLAG_LB); + desc.flags |= CPU_TO_LE16((u16)(IAVF_AQ_FLAG_BUF | + IAVF_AQ_FLAG_RD)); + if (msglen > IAVF_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)IAVF_AQ_FLAG_LB); desc.datalen = CPU_TO_LE16(msglen); msg = &aq->data[0]; } - status = avf_asq_send_command(hw, &desc, msg, msglen, NULL); + status = iavf_asq_send_command(hw, &desc, msg, msglen, NULL); - if (status != AVF_SUCCESS) { - avf_debug(hw, AVF_DEBUG_PACKAGE, + if (status != IAVF_SUCCESS) { + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "unable to exec DDP AQ opcode %u, error %d\n", aq->opcode, status); return status; } /* copy returned desc to aq_buf */ - avf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw), - AVF_NONDMA_TO_NONDMA); + iavf_memcpy(aq->param, desc.params.raw, sizeof(desc.params.raw), + IAVF_NONDMA_TO_NONDMA); - return AVF_SUCCESS; + return IAVF_SUCCESS; } /** - * avf_validate_profile + * iavf_validate_profile * @hw: pointer to the hardware structure * @profile: pointer to the profile segment of the package to be validated * @track_id: package tracking id @@ -1634,56 +1634,56 @@ avf_status_code avf_ddp_exec_aq_section(struct avf_hw *hw, * * Validates supported devices and profile's sections. 
*/ -STATIC enum avf_status_code -avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile, +STATIC enum iavf_status_code +iavf_validate_profile(struct iavf_hw *hw, struct iavf_profile_segment *profile, u32 track_id, bool rollback) { - struct avf_profile_section_header *sec = NULL; - enum avf_status_code status = AVF_SUCCESS; - struct avf_section_table *sec_tbl; + struct iavf_profile_section_header *sec = NULL; + enum iavf_status_code status = IAVF_SUCCESS; + struct iavf_section_table *sec_tbl; u32 vendor_dev_id; u32 dev_cnt; u32 sec_off; u32 i; - if (track_id == AVF_DDP_TRACKID_INVALID) { - avf_debug(hw, AVF_DEBUG_PACKAGE, "Invalid track_id\n"); - return AVF_NOT_SUPPORTED; + if (track_id == IAVF_DDP_TRACKID_INVALID) { + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "Invalid track_id\n"); + return IAVF_NOT_SUPPORTED; } dev_cnt = profile->device_table_count; for (i = 0; i < dev_cnt; i++) { vendor_dev_id = profile->device_table[i].vendor_dev_id; - if ((vendor_dev_id >> 16) == AVF_INTEL_VENDOR_ID && + if ((vendor_dev_id >> 16) == IAVF_INTEL_VENDOR_ID && hw->device_id == (vendor_dev_id & 0xFFFF)) break; } if (dev_cnt && (i == dev_cnt)) { - avf_debug(hw, AVF_DEBUG_PACKAGE, + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "Device doesn't support DDP\n"); - return AVF_ERR_DEVICE_NOT_SUPPORTED; + return IAVF_ERR_DEVICE_NOT_SUPPORTED; } - AVF_SECTION_TABLE(profile, sec_tbl); + IAVF_SECTION_TABLE(profile, sec_tbl); /* Validate sections types */ for (i = 0; i < sec_tbl->section_count; i++) { sec_off = sec_tbl->section_offset[i]; - sec = AVF_SECTION_HEADER(profile, sec_off); + sec = IAVF_SECTION_HEADER(profile, sec_off); if (rollback) { if (sec->section.type == SECTION_TYPE_MMIO || sec->section.type == SECTION_TYPE_AQ || sec->section.type == SECTION_TYPE_RB_AQ) { - avf_debug(hw, AVF_DEBUG_PACKAGE, + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "Not a roll-back package\n"); - return AVF_NOT_SUPPORTED; + return IAVF_NOT_SUPPORTED; } } else { if (sec->section.type == SECTION_TYPE_RB_AQ || 
sec->section.type == SECTION_TYPE_RB_MMIO) { - avf_debug(hw, AVF_DEBUG_PACKAGE, + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "Not an original package\n"); - return AVF_NOT_SUPPORTED; + return IAVF_NOT_SUPPORTED; } } } @@ -1692,41 +1692,41 @@ avf_validate_profile(struct avf_hw *hw, struct avf_profile_segment *profile, } /** - * avf_write_profile + * iavf_write_profile * @hw: pointer to the hardware structure * @profile: pointer to the profile segment of the package to be downloaded * @track_id: package tracking id * * Handles the download of a complete package. */ -enum avf_status_code -avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile, +enum iavf_status_code +iavf_write_profile(struct iavf_hw *hw, struct iavf_profile_segment *profile, u32 track_id) { - enum avf_status_code status = AVF_SUCCESS; - struct avf_section_table *sec_tbl; - struct avf_profile_section_header *sec = NULL; - struct avf_profile_aq_section *ddp_aq; + enum iavf_status_code status = IAVF_SUCCESS; + struct iavf_section_table *sec_tbl; + struct iavf_profile_section_header *sec = NULL; + struct iavf_profile_aq_section *ddp_aq; u32 section_size = 0; u32 offset = 0, info = 0; u32 sec_off; u32 i; - status = avf_validate_profile(hw, profile, track_id, false); + status = iavf_validate_profile(hw, profile, track_id, false); if (status) return status; - AVF_SECTION_TABLE(profile, sec_tbl); + IAVF_SECTION_TABLE(profile, sec_tbl); for (i = 0; i < sec_tbl->section_count; i++) { sec_off = sec_tbl->section_offset[i]; - sec = AVF_SECTION_HEADER(profile, sec_off); + sec = IAVF_SECTION_HEADER(profile, sec_off); /* Process generic admin command */ if (sec->section.type == SECTION_TYPE_AQ) { - ddp_aq = (struct avf_profile_aq_section *)&sec[1]; - status = avf_ddp_exec_aq_section(hw, ddp_aq); + ddp_aq = (struct iavf_profile_aq_section *)&sec[1]; + status = iavf_ddp_exec_aq_section(hw, ddp_aq); if (status) { - avf_debug(hw, AVF_DEBUG_PACKAGE, + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "Failed to execute aq: 
section %d, opcode %u\n", i, ddp_aq->opcode); break; @@ -1739,13 +1739,13 @@ avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile, continue; section_size = sec->section.size + - sizeof(struct avf_profile_section_header); + sizeof(struct iavf_profile_section_header); /* Write MMIO section */ - status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size, + status = iavf_aq_write_ddp(hw, (void *)sec, (u16)section_size, track_id, &offset, &info, NULL); if (status) { - avf_debug(hw, AVF_DEBUG_PACKAGE, + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "Failed to write profile: section %d, offset %d, info %d\n", i, offset, info); break; @@ -1755,48 +1755,48 @@ avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *profile, } /** - * avf_rollback_profile + * iavf_rollback_profile * @hw: pointer to the hardware structure * @profile: pointer to the profile segment of the package to be removed * @track_id: package tracking id * * Rolls back previously loaded package. */ -enum avf_status_code -avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile, +enum iavf_status_code +iavf_rollback_profile(struct iavf_hw *hw, struct iavf_profile_segment *profile, u32 track_id) { - struct avf_profile_section_header *sec = NULL; - enum avf_status_code status = AVF_SUCCESS; - struct avf_section_table *sec_tbl; + struct iavf_profile_section_header *sec = NULL; + enum iavf_status_code status = IAVF_SUCCESS; + struct iavf_section_table *sec_tbl; u32 offset = 0, info = 0; u32 section_size = 0; u32 sec_off; int i; - status = avf_validate_profile(hw, profile, track_id, true); + status = iavf_validate_profile(hw, profile, track_id, true); if (status) return status; - AVF_SECTION_TABLE(profile, sec_tbl); + IAVF_SECTION_TABLE(profile, sec_tbl); /* For rollback write sections in reverse */ for (i = sec_tbl->section_count - 1; i >= 0; i--) { sec_off = sec_tbl->section_offset[i]; - sec = AVF_SECTION_HEADER(profile, sec_off); + sec = IAVF_SECTION_HEADER(profile, 
sec_off); /* Skip any non-rollback sections */ if (sec->section.type != SECTION_TYPE_RB_MMIO) continue; section_size = sec->section.size + - sizeof(struct avf_profile_section_header); + sizeof(struct iavf_profile_section_header); /* Write roll-back MMIO section */ - status = avf_aq_write_ddp(hw, (void *)sec, (u16)section_size, + status = iavf_aq_write_ddp(hw, (void *)sec, (u16)section_size, track_id, &offset, &info, NULL); if (status) { - avf_debug(hw, AVF_DEBUG_PACKAGE, + iavf_debug(hw, IAVF_DEBUG_PACKAGE, "Failed to write profile: section %d, offset %d, info %d\n", i, offset, info); break; @@ -1806,7 +1806,7 @@ avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile, } /** - * avf_add_pinfo_to_list + * iavf_add_pinfo_to_list * @hw: pointer to the hardware structure * @profile: pointer to the profile segment of the package * @profile_info_sec: buffer for information section @@ -1814,32 +1814,32 @@ avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *profile, * * Register a profile to the list of loaded profiles. 
 */
-enum avf_status_code
-avf_add_pinfo_to_list(struct avf_hw *hw,
-		      struct avf_profile_segment *profile,
+enum iavf_status_code
+iavf_add_pinfo_to_list(struct iavf_hw *hw,
+		       struct iavf_profile_segment *profile,
 		      u8 *profile_info_sec, u32 track_id)
 {
-	enum avf_status_code status = AVF_SUCCESS;
-	struct avf_profile_section_header *sec = NULL;
-	struct avf_profile_info *pinfo;
+	enum iavf_status_code status = IAVF_SUCCESS;
+	struct iavf_profile_section_header *sec = NULL;
+	struct iavf_profile_info *pinfo;
 	u32 offset = 0, info = 0;
 
-	sec = (struct avf_profile_section_header *)profile_info_sec;
+	sec = (struct iavf_profile_section_header *)profile_info_sec;
 	sec->tbl_size = 1;
-	sec->data_end = sizeof(struct avf_profile_section_header) +
-			sizeof(struct avf_profile_info);
+	sec->data_end = sizeof(struct iavf_profile_section_header) +
+			sizeof(struct iavf_profile_info);
 	sec->section.type = SECTION_TYPE_INFO;
-	sec->section.offset = sizeof(struct avf_profile_section_header);
-	sec->section.size = sizeof(struct avf_profile_info);
-	pinfo = (struct avf_profile_info *)(profile_info_sec +
+	sec->section.offset = sizeof(struct iavf_profile_section_header);
+	sec->section.size = sizeof(struct iavf_profile_info);
+	pinfo = (struct iavf_profile_info *)(profile_info_sec +
 					    sec->section.offset);
 	pinfo->track_id = track_id;
 	pinfo->version = profile->version;
-	pinfo->op = AVF_DDP_ADD_TRACKID;
-	avf_memcpy(pinfo->name, profile->name, AVF_DDP_NAME_SIZE,
-		   AVF_NONDMA_TO_NONDMA);
+	pinfo->op = IAVF_DDP_ADD_TRACKID;
+	iavf_memcpy(pinfo->name, profile->name, IAVF_DDP_NAME_SIZE,
+		    IAVF_NONDMA_TO_NONDMA);
 
-	status = avf_aq_write_ddp(hw, (void *)sec, sec->data_end,
+	status = iavf_aq_write_ddp(hw, (void *)sec, sec->data_end,
 				  track_id, &offset, &info, NULL);
 	return status;
 }
diff --git a/drivers/net/iavf/base/iavf_devids.h b/drivers/net/iavf/base/iavf_devids.h
index 7d9fed25b..efb39cc34 100644
--- a/drivers/net/iavf/base/iavf_devids.h
+++ b/drivers/net/iavf/base/iavf_devids.h
@@ -31,13 +31,13 @@ POSSIBILITY OF SUCH DAMAGE.
 ***************************************************************************/
 
-#ifndef _AVF_DEVIDS_H_
-#define _AVF_DEVIDS_H_
+#ifndef _IAVF_DEVIDS_H_
+#define _IAVF_DEVIDS_H_
 
 /* Vendor ID */
-#define AVF_INTEL_VENDOR_ID		0x8086
+#define IAVF_INTEL_VENDOR_ID		0x8086
 
 /* Device IDs */
-#define AVF_DEV_ID_ADAPTIVE_VF		0x1889
+#define IAVF_DEV_ID_ADAPTIVE_VF		0x1889
 
-#endif /* _AVF_DEVIDS_H_ */
+#endif /* _IAVF_DEVIDS_H_ */
diff --git a/drivers/net/iavf/base/iavf_hmc.h b/drivers/net/iavf/base/iavf_hmc.h
index b9b7b5bed..348b71a2f 100644
--- a/drivers/net/iavf/base/iavf_hmc.h
+++ b/drivers/net/iavf/base/iavf_hmc.h
@@ -31,148 +31,148 @@ POSSIBILITY OF SUCH DAMAGE.
 ***************************************************************************/
 
-#ifndef _AVF_HMC_H_
-#define _AVF_HMC_H_
+#ifndef _IAVF_HMC_H_
+#define _IAVF_HMC_H_
 
-#define AVF_HMC_MAX_BP_COUNT		512
+#define IAVF_HMC_MAX_BP_COUNT		512
 
 /* forward-declare the HW struct for the compiler */
-struct avf_hw;
+struct iavf_hw;
 
-#define AVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
-#define AVF_HMC_PD_CNT_IN_SD		512
-#define AVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
-#define AVF_HMC_PAGED_BP_SIZE		4096
-#define AVF_HMC_PD_BP_BUF_ALIGNMENT	4096
-#define AVF_FIRST_VF_FPM_ID		16
+#define IAVF_HMC_INFO_SIGNATURE		0x484D5347 /* HMSG */
+#define IAVF_HMC_PD_CNT_IN_SD		512
+#define IAVF_HMC_DIRECT_BP_SIZE		0x200000 /* 2M */
+#define IAVF_HMC_PAGED_BP_SIZE		4096
+#define IAVF_HMC_PD_BP_BUF_ALIGNMENT	4096
+#define IAVF_FIRST_VF_FPM_ID		16
 
-struct avf_hmc_obj_info {
+struct iavf_hmc_obj_info {
 	u64 base;	/* base addr in FPM */
 	u32 max_cnt;	/* max count available for this hmc func */
 	u32 cnt;	/* count of objects driver actually wants to create */
 	u64 size;	/* size in bytes of one object */
 };
 
-enum avf_sd_entry_type {
-	AVF_SD_TYPE_INVALID = 0,
-	AVF_SD_TYPE_PAGED = 1,
-	AVF_SD_TYPE_DIRECT = 2
+enum iavf_sd_entry_type {
+	IAVF_SD_TYPE_INVALID = 0,
+	IAVF_SD_TYPE_PAGED = 1,
+	IAVF_SD_TYPE_DIRECT = 2
 };
 
-struct avf_hmc_bp {
-	enum avf_sd_entry_type entry_type;
-	struct avf_dma_mem addr; /* populate to be used by hw */
+struct iavf_hmc_bp {
+	enum iavf_sd_entry_type entry_type;
+	struct iavf_dma_mem addr; /* populate to be used by hw */
 	u32 sd_pd_index;
 	u32 ref_cnt;
 };
 
-struct avf_hmc_pd_entry {
-	struct avf_hmc_bp bp;
+struct iavf_hmc_pd_entry {
+	struct iavf_hmc_bp bp;
 	u32 sd_index;
 	bool rsrc_pg;
 	bool valid;
 };
 
-struct avf_hmc_pd_table {
-	struct avf_dma_mem pd_page_addr; /* populate to be used by hw */
-	struct avf_hmc_pd_entry *pd_entry; /* [512] for sw book keeping */
-	struct avf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
+struct iavf_hmc_pd_table {
+	struct iavf_dma_mem pd_page_addr; /* populate to be used by hw */
+	struct iavf_hmc_pd_entry *pd_entry; /* [512] for sw book keeping */
+	struct iavf_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */
 	u32 ref_cnt;
 	u32 sd_index;
 };
 
-struct avf_hmc_sd_entry {
-	enum avf_sd_entry_type entry_type;
+struct iavf_hmc_sd_entry {
+	enum iavf_sd_entry_type entry_type;
 	bool valid;
 	union {
-		struct avf_hmc_pd_table pd_table;
-		struct avf_hmc_bp bp;
+		struct iavf_hmc_pd_table pd_table;
+		struct iavf_hmc_bp bp;
 	} u;
 };
 
-struct avf_hmc_sd_table {
-	struct avf_virt_mem addr; /* used to track sd_entry allocations */
+struct iavf_hmc_sd_table {
+	struct iavf_virt_mem addr; /* used to track sd_entry allocations */
 	u32 sd_cnt;
 	u32 ref_cnt;
-	struct avf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
+	struct iavf_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */
 };
 
-struct avf_hmc_info {
+struct iavf_hmc_info {
 	u32 signature;
 	/* equals to pci func num for PF and dynamically allocated for VFs */
 	u8 hmc_fn_id;
 	u16 first_sd_index; /* index of the first available SD */
 
	/* hmc objects */
-	struct avf_hmc_obj_info *hmc_obj;
-	struct avf_virt_mem hmc_obj_virt_mem;
-	struct avf_hmc_sd_table sd_table;
+	struct iavf_hmc_obj_info *hmc_obj;
+	struct iavf_virt_mem hmc_obj_virt_mem;
+	struct iavf_hmc_sd_table sd_table;
 };
-#define AVF_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++) -#define AVF_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++) -#define AVF_INC_BP_REFCNT(bp) ((bp)->ref_cnt++) +#define IAVF_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++) +#define IAVF_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++) +#define IAVF_INC_BP_REFCNT(bp) ((bp)->ref_cnt++) -#define AVF_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--) -#define AVF_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--) -#define AVF_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--) +#define IAVF_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--) +#define IAVF_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--) +#define IAVF_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--) /** - * AVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware + * IAVF_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware * @hw: pointer to our hw struct * @pa: pointer to physical address * @sd_index: segment descriptor index * @type: if sd entry is direct or paged **/ -#define AVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type) \ +#define IAVF_SET_PF_SD_ENTRY(hw, pa, sd_index, type) \ { \ u32 val1, val2, val3; \ - val1 = (u32)(AVF_HI_DWORD(pa)); \ - val2 = (u32)(pa) | (AVF_HMC_MAX_BP_COUNT << \ - AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ - ((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) << \ - AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) | \ - BIT(AVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT); \ - val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT); \ - wr32((hw), AVF_PFHMC_SDDATAHIGH, val1); \ - wr32((hw), AVF_PFHMC_SDDATALOW, val2); \ - wr32((hw), AVF_PFHMC_SDCMD, val3); \ + val1 = (u32)(IAVF_HI_DWORD(pa)); \ + val2 = (u32)(pa) | (IAVF_HMC_MAX_BP_COUNT << \ + IAVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ + ((((type) == IAVF_SD_TYPE_PAGED) ? 
0 : 1) << \ + IAVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) | \ + BIT(IAVF_PFHMC_SDDATALOW_PMSDVALID_SHIFT); \ + val3 = (sd_index) | BIT_ULL(IAVF_PFHMC_SDCMD_PMSDWR_SHIFT); \ + wr32((hw), IAVF_PFHMC_SDDATAHIGH, val1); \ + wr32((hw), IAVF_PFHMC_SDDATALOW, val2); \ + wr32((hw), IAVF_PFHMC_SDCMD, val3); \ } /** - * AVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware + * IAVF_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware * @hw: pointer to our hw struct * @sd_index: segment descriptor index * @type: if sd entry is direct or paged **/ -#define AVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type) \ +#define IAVF_CLEAR_PF_SD_ENTRY(hw, sd_index, type) \ { \ u32 val2, val3; \ - val2 = (AVF_HMC_MAX_BP_COUNT << \ - AVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ - ((((type) == AVF_SD_TYPE_PAGED) ? 0 : 1) << \ - AVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT); \ - val3 = (sd_index) | BIT_ULL(AVF_PFHMC_SDCMD_PMSDWR_SHIFT); \ - wr32((hw), AVF_PFHMC_SDDATAHIGH, 0); \ - wr32((hw), AVF_PFHMC_SDDATALOW, val2); \ - wr32((hw), AVF_PFHMC_SDCMD, val3); \ + val2 = (IAVF_HMC_MAX_BP_COUNT << \ + IAVF_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ + ((((type) == IAVF_SD_TYPE_PAGED) ? 
0 : 1) << \ + IAVF_PFHMC_SDDATALOW_PMSDTYPE_SHIFT); \ + val3 = (sd_index) | BIT_ULL(IAVF_PFHMC_SDCMD_PMSDWR_SHIFT); \ + wr32((hw), IAVF_PFHMC_SDDATAHIGH, 0); \ + wr32((hw), IAVF_PFHMC_SDDATALOW, val2); \ + wr32((hw), IAVF_PFHMC_SDCMD, val3); \ } /** - * AVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware + * IAVF_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware * @hw: pointer to our hw struct * @sd_idx: segment descriptor index * @pd_idx: page descriptor index **/ -#define AVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \ - wr32((hw), AVF_PFHMC_PDINV, \ - (((sd_idx) << AVF_PFHMC_PDINV_PMSDIDX_SHIFT) | \ - ((pd_idx) << AVF_PFHMC_PDINV_PMPDIDX_SHIFT))) +#define IAVF_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \ + wr32((hw), IAVF_PFHMC_PDINV, \ + (((sd_idx) << IAVF_PFHMC_PDINV_PMSDIDX_SHIFT) | \ + ((pd_idx) << IAVF_PFHMC_PDINV_PMPDIDX_SHIFT))) /** - * AVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit + * IAVF_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit * @hmc_info: pointer to the HMC configuration information structure * @type: type of HMC resources we're searching * @index: starting index for the object @@ -181,22 +181,22 @@ struct avf_hmc_info { * @sd_limit: pointer to return the maximum number of segment descriptors * * This function calculates the segment descriptor index and index limit - * for the resource defined by avf_hmc_rsrc_type. + * for the resource defined by iavf_hmc_rsrc_type. 
**/ -#define AVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\ +#define IAVF_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\ { \ u64 fpm_addr, fpm_limit; \ fpm_addr = (hmc_info)->hmc_obj[(type)].base + \ (hmc_info)->hmc_obj[(type)].size * (index); \ fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\ - *(sd_idx) = (u32)(fpm_addr / AVF_HMC_DIRECT_BP_SIZE); \ - *(sd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_DIRECT_BP_SIZE); \ + *(sd_idx) = (u32)(fpm_addr / IAVF_HMC_DIRECT_BP_SIZE); \ + *(sd_limit) = (u32)((fpm_limit - 1) / IAVF_HMC_DIRECT_BP_SIZE); \ /* add one more to the limit to correct our range */ \ *(sd_limit) += 1; \ } /** - * AVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit + * IAVF_FIND_PD_INDEX_LIMIT - finds page descriptor index limit * @hmc_info: pointer to the HMC configuration information struct * @type: HMC resource type we're examining * @idx: starting index for the object @@ -205,41 +205,41 @@ struct avf_hmc_info { * @pd_limit: pointer to return page descriptor index limit * * Calculates the page descriptor index and index limit for the resource - * defined by avf_hmc_rsrc_type. + * defined by iavf_hmc_rsrc_type. 
**/ -#define AVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\ +#define IAVF_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\ { \ u64 fpm_adr, fpm_limit; \ fpm_adr = (hmc_info)->hmc_obj[(type)].base + \ (hmc_info)->hmc_obj[(type)].size * (idx); \ fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt); \ - *(pd_index) = (u32)(fpm_adr / AVF_HMC_PAGED_BP_SIZE); \ - *(pd_limit) = (u32)((fpm_limit - 1) / AVF_HMC_PAGED_BP_SIZE); \ + *(pd_index) = (u32)(fpm_adr / IAVF_HMC_PAGED_BP_SIZE); \ + *(pd_limit) = (u32)((fpm_limit - 1) / IAVF_HMC_PAGED_BP_SIZE); \ /* add one more to the limit to correct our range */ \ *(pd_limit) += 1; \ } -enum avf_status_code avf_add_sd_table_entry(struct avf_hw *hw, - struct avf_hmc_info *hmc_info, +enum iavf_status_code iavf_add_sd_table_entry(struct iavf_hw *hw, + struct iavf_hmc_info *hmc_info, u32 sd_index, - enum avf_sd_entry_type type, + enum iavf_sd_entry_type type, u64 direct_mode_sz); -enum avf_status_code avf_add_pd_table_entry(struct avf_hw *hw, - struct avf_hmc_info *hmc_info, +enum iavf_status_code iavf_add_pd_table_entry(struct iavf_hw *hw, + struct iavf_hmc_info *hmc_info, u32 pd_index, - struct avf_dma_mem *rsrc_pg); -enum avf_status_code avf_remove_pd_bp(struct avf_hw *hw, - struct avf_hmc_info *hmc_info, + struct iavf_dma_mem *rsrc_pg); +enum iavf_status_code iavf_remove_pd_bp(struct iavf_hw *hw, + struct iavf_hmc_info *hmc_info, u32 idx); -enum avf_status_code avf_prep_remove_sd_bp(struct avf_hmc_info *hmc_info, +enum iavf_status_code iavf_prep_remove_sd_bp(struct iavf_hmc_info *hmc_info, u32 idx); -enum avf_status_code avf_remove_sd_bp_new(struct avf_hw *hw, - struct avf_hmc_info *hmc_info, +enum iavf_status_code iavf_remove_sd_bp_new(struct iavf_hw *hw, + struct iavf_hmc_info *hmc_info, u32 idx, bool is_pf); -enum avf_status_code avf_prep_remove_pd_page(struct avf_hmc_info *hmc_info, +enum iavf_status_code iavf_prep_remove_pd_page(struct iavf_hmc_info *hmc_info, u32 idx); -enum 
avf_status_code avf_remove_pd_page_new(struct avf_hw *hw, - struct avf_hmc_info *hmc_info, +enum iavf_status_code iavf_remove_pd_page_new(struct iavf_hw *hw, + struct iavf_hmc_info *hmc_info, u32 idx, bool is_pf); -#endif /* _AVF_HMC_H_ */ +#endif /* _IAVF_HMC_H_ */ diff --git a/drivers/net/iavf/base/iavf_lan_hmc.h b/drivers/net/iavf/base/iavf_lan_hmc.h index 48805d89a..a9389f52b 100644 --- a/drivers/net/iavf/base/iavf_lan_hmc.h +++ b/drivers/net/iavf/base/iavf_lan_hmc.h @@ -31,11 +31,11 @@ POSSIBILITY OF SUCH DAMAGE. ***************************************************************************/ -#ifndef _AVF_LAN_HMC_H_ -#define _AVF_LAN_HMC_H_ +#ifndef _IAVF_LAN_HMC_H_ +#define _IAVF_LAN_HMC_H_ /* forward-declare the HW struct for the compiler */ -struct avf_hw; +struct iavf_hw; /* HMC element context information */ @@ -46,14 +46,14 @@ struct avf_hw; * size then we could end up shifting bits off the top of the variable when the * variable is at the top of a byte and crosses over into the next byte. */ -struct avf_hmc_obj_rxq { +struct iavf_hmc_obj_rxq { u16 head; u16 cpuid; /* bigger than needed, see above for reason */ u64 base; u16 qlen; -#define AVF_RXQ_CTX_DBUFF_SHIFT 7 +#define IAVF_RXQ_CTX_DBUFF_SHIFT 7 u16 dbuff; /* bigger than needed, see above for reason */ -#define AVF_RXQ_CTX_HBUFF_SHIFT 6 +#define IAVF_RXQ_CTX_HBUFF_SHIFT 6 u16 hbuff; /* bigger than needed, see above for reason */ u8 dtype; u8 dsize; @@ -79,7 +79,7 @@ struct avf_hmc_obj_rxq { * size then we could end up shifting bits off the top of the variable when the * variable is at the top of a byte and crosses over into the next byte. 
*/ -struct avf_hmc_obj_txq { +struct iavf_hmc_obj_txq { u16 head; u8 new_context; u64 base; @@ -101,100 +101,100 @@ struct avf_hmc_obj_txq { }; /* for hsplit_0 field of Rx HMC context */ -enum avf_hmc_obj_rx_hsplit_0 { - AVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT = 0, - AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2 = 1, - AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP = 2, - AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4, - AVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP = 8, +enum iavf_hmc_obj_rx_hsplit_0 { + IAVF_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT = 0, + IAVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2 = 1, + IAVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP = 2, + IAVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4, + IAVF_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP = 8, }; /* fcoe_cntx and fcoe_filt are for debugging purpose only */ -struct avf_hmc_obj_fcoe_cntx { +struct iavf_hmc_obj_fcoe_cntx { u32 rsv[32]; }; -struct avf_hmc_obj_fcoe_filt { +struct iavf_hmc_obj_fcoe_filt { u32 rsv[8]; }; /* Context sizes for LAN objects */ -enum avf_hmc_lan_object_size { - AVF_HMC_LAN_OBJ_SZ_8 = 0x3, - AVF_HMC_LAN_OBJ_SZ_16 = 0x4, - AVF_HMC_LAN_OBJ_SZ_32 = 0x5, - AVF_HMC_LAN_OBJ_SZ_64 = 0x6, - AVF_HMC_LAN_OBJ_SZ_128 = 0x7, - AVF_HMC_LAN_OBJ_SZ_256 = 0x8, - AVF_HMC_LAN_OBJ_SZ_512 = 0x9, +enum iavf_hmc_lan_object_size { + IAVF_HMC_LAN_OBJ_SZ_8 = 0x3, + IAVF_HMC_LAN_OBJ_SZ_16 = 0x4, + IAVF_HMC_LAN_OBJ_SZ_32 = 0x5, + IAVF_HMC_LAN_OBJ_SZ_64 = 0x6, + IAVF_HMC_LAN_OBJ_SZ_128 = 0x7, + IAVF_HMC_LAN_OBJ_SZ_256 = 0x8, + IAVF_HMC_LAN_OBJ_SZ_512 = 0x9, }; -#define AVF_HMC_L2OBJ_BASE_ALIGNMENT 512 -#define AVF_HMC_OBJ_SIZE_TXQ 128 -#define AVF_HMC_OBJ_SIZE_RXQ 32 -#define AVF_HMC_OBJ_SIZE_FCOE_CNTX 64 -#define AVF_HMC_OBJ_SIZE_FCOE_FILT 64 - -enum avf_hmc_lan_rsrc_type { - AVF_HMC_LAN_FULL = 0, - AVF_HMC_LAN_TX = 1, - AVF_HMC_LAN_RX = 2, - AVF_HMC_FCOE_CTX = 3, - AVF_HMC_FCOE_FILT = 4, - AVF_HMC_LAN_MAX = 5 +#define IAVF_HMC_L2OBJ_BASE_ALIGNMENT 512 +#define IAVF_HMC_OBJ_SIZE_TXQ 128 +#define IAVF_HMC_OBJ_SIZE_RXQ 32 +#define IAVF_HMC_OBJ_SIZE_FCOE_CNTX 64 +#define IAVF_HMC_OBJ_SIZE_FCOE_FILT 64 + 
+enum iavf_hmc_lan_rsrc_type { + IAVF_HMC_LAN_FULL = 0, + IAVF_HMC_LAN_TX = 1, + IAVF_HMC_LAN_RX = 2, + IAVF_HMC_FCOE_CTX = 3, + IAVF_HMC_FCOE_FILT = 4, + IAVF_HMC_LAN_MAX = 5 }; -enum avf_hmc_model { - AVF_HMC_MODEL_DIRECT_PREFERRED = 0, - AVF_HMC_MODEL_DIRECT_ONLY = 1, - AVF_HMC_MODEL_PAGED_ONLY = 2, - AVF_HMC_MODEL_UNKNOWN, +enum iavf_hmc_model { + IAVF_HMC_MODEL_DIRECT_PREFERRED = 0, + IAVF_HMC_MODEL_DIRECT_ONLY = 1, + IAVF_HMC_MODEL_PAGED_ONLY = 2, + IAVF_HMC_MODEL_UNKNOWN, }; -struct avf_hmc_lan_create_obj_info { - struct avf_hmc_info *hmc_info; +struct iavf_hmc_lan_create_obj_info { + struct iavf_hmc_info *hmc_info; u32 rsrc_type; u32 start_idx; u32 count; - enum avf_sd_entry_type entry_type; + enum iavf_sd_entry_type entry_type; u64 direct_mode_sz; }; -struct avf_hmc_lan_delete_obj_info { - struct avf_hmc_info *hmc_info; +struct iavf_hmc_lan_delete_obj_info { + struct iavf_hmc_info *hmc_info; u32 rsrc_type; u32 start_idx; u32 count; }; -enum avf_status_code avf_init_lan_hmc(struct avf_hw *hw, u32 txq_num, +enum iavf_status_code iavf_init_lan_hmc(struct iavf_hw *hw, u32 txq_num, u32 rxq_num, u32 fcoe_cntx_num, u32 fcoe_filt_num); -enum avf_status_code avf_configure_lan_hmc(struct avf_hw *hw, - enum avf_hmc_model model); -enum avf_status_code avf_shutdown_lan_hmc(struct avf_hw *hw); +enum iavf_status_code iavf_configure_lan_hmc(struct iavf_hw *hw, + enum iavf_hmc_model model); +enum iavf_status_code iavf_shutdown_lan_hmc(struct iavf_hw *hw); -u64 avf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num, +u64 iavf_calculate_l2fpm_size(u32 txq_num, u32 rxq_num, u32 fcoe_cntx_num, u32 fcoe_filt_num); -enum avf_status_code avf_get_lan_tx_queue_context(struct avf_hw *hw, +enum iavf_status_code iavf_get_lan_tx_queue_context(struct iavf_hw *hw, u16 queue, - struct avf_hmc_obj_txq *s); -enum avf_status_code avf_clear_lan_tx_queue_context(struct avf_hw *hw, + struct iavf_hmc_obj_txq *s); +enum iavf_status_code iavf_clear_lan_tx_queue_context(struct iavf_hw *hw, u16 queue); 
-enum avf_status_code avf_set_lan_tx_queue_context(struct avf_hw *hw, +enum iavf_status_code iavf_set_lan_tx_queue_context(struct iavf_hw *hw, u16 queue, - struct avf_hmc_obj_txq *s); -enum avf_status_code avf_get_lan_rx_queue_context(struct avf_hw *hw, + struct iavf_hmc_obj_txq *s); +enum iavf_status_code iavf_get_lan_rx_queue_context(struct iavf_hw *hw, u16 queue, - struct avf_hmc_obj_rxq *s); -enum avf_status_code avf_clear_lan_rx_queue_context(struct avf_hw *hw, + struct iavf_hmc_obj_rxq *s); +enum iavf_status_code iavf_clear_lan_rx_queue_context(struct iavf_hw *hw, u16 queue); -enum avf_status_code avf_set_lan_rx_queue_context(struct avf_hw *hw, +enum iavf_status_code iavf_set_lan_rx_queue_context(struct iavf_hw *hw, u16 queue, - struct avf_hmc_obj_rxq *s); -enum avf_status_code avf_create_lan_hmc_object(struct avf_hw *hw, - struct avf_hmc_lan_create_obj_info *info); -enum avf_status_code avf_delete_lan_hmc_object(struct avf_hw *hw, - struct avf_hmc_lan_delete_obj_info *info); + struct iavf_hmc_obj_rxq *s); +enum iavf_status_code iavf_create_lan_hmc_object(struct iavf_hw *hw, + struct iavf_hmc_lan_create_obj_info *info); +enum iavf_status_code iavf_delete_lan_hmc_object(struct iavf_hw *hw, + struct iavf_hmc_lan_delete_obj_info *info); -#endif /* _AVF_LAN_HMC_H_ */ +#endif /* _IAVF_LAN_HMC_H_ */ diff --git a/drivers/net/iavf/base/iavf_osdep.h b/drivers/net/iavf/base/iavf_osdep.h index 1f1226674..648026693 100644 --- a/drivers/net/iavf/base/iavf_osdep.h +++ b/drivers/net/iavf/base/iavf_osdep.h @@ -2,8 +2,8 @@ * Copyright(c) 2017 Intel Corporation */ -#ifndef _AVF_OSDEP_H_ -#define _AVF_OSDEP_H_ +#ifndef _IAVF_OSDEP_H_ +#define _IAVF_OSDEP_H_ #include #include @@ -70,7 +70,7 @@ typedef uint64_t u64; #define max(a,b) RTE_MAX(a,b) #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f)) -#define ASSERT(x) if(!(x)) rte_panic("AVF: x") +#define ASSERT(x) if(!(x)) rte_panic("IAVF: x") #define DEBUGOUT(S) PMD_DRV_LOG_RAW(DEBUG, S) #define DEBUGOUT2(S, A...) 
PMD_DRV_LOG_RAW(DEBUG, S, ##A) @@ -90,98 +90,98 @@ typedef uint64_t u64; #define le32_to_cpu(c) rte_le_to_cpu_32(c) #define le64_to_cpu(k) rte_le_to_cpu_64(k) -#define avf_memset(a, b, c, d) memset((a), (b), (c)) -#define avf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c)) +#define iavf_memset(a, b, c, d) memset((a), (b), (c)) +#define iavf_memcpy(a, b, c, d) rte_memcpy((a), (b), (c)) -#define avf_usec_delay(x) rte_delay_us_sleep(x) -#define avf_msec_delay(x) avf_usec_delay(1000 * (x)) +#define iavf_usec_delay(x) rte_delay_us_sleep(x) +#define iavf_msec_delay(x) iavf_usec_delay(1000 * (x)) -#define AVF_PCI_REG(reg) rte_read32(reg) -#define AVF_PCI_REG_ADDR(a, reg) \ +#define IAVF_PCI_REG(reg) rte_read32(reg) +#define IAVF_PCI_REG_ADDR(a, reg) \ ((volatile uint32_t *)((char *)(a)->hw_addr + (reg))) -#define AVF_PCI_REG_WRITE(reg, value) \ +#define IAVF_PCI_REG_WRITE(reg, value) \ rte_write32((rte_cpu_to_le_32(value)), reg) -#define AVF_PCI_REG_WRITE_RELAXED(reg, value) \ +#define IAVF_PCI_REG_WRITE_RELAXED(reg, value) \ rte_write32_relaxed((rte_cpu_to_le_32(value)), reg) static inline -uint32_t avf_read_addr(volatile void *addr) +uint32_t iavf_read_addr(volatile void *addr) { - return rte_le_to_cpu_32(AVF_PCI_REG(addr)); + return rte_le_to_cpu_32(IAVF_PCI_REG(addr)); } -#define AVF_READ_REG(hw, reg) \ - avf_read_addr(AVF_PCI_REG_ADDR((hw), (reg))) -#define AVF_WRITE_REG(hw, reg, value) \ - AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((hw), (reg)), (value)) -#define AVF_WRITE_FLUSH(a) \ - AVF_READ_REG(a, AVFGEN_RSTAT) +#define IAVF_READ_REG(hw, reg) \ + iavf_read_addr(IAVF_PCI_REG_ADDR((hw), (reg))) +#define IAVF_WRITE_REG(hw, reg, value) \ + IAVF_PCI_REG_WRITE(IAVF_PCI_REG_ADDR((hw), (reg)), (value)) +#define IAVF_WRITE_FLUSH(a) \ + IAVF_READ_REG(a, IAVFGEN_RSTAT) -#define rd32(a, reg) avf_read_addr(AVF_PCI_REG_ADDR((a), (reg))) +#define rd32(a, reg) iavf_read_addr(IAVF_PCI_REG_ADDR((a), (reg))) #define wr32(a, reg, value) \ - AVF_PCI_REG_WRITE(AVF_PCI_REG_ADDR((a), (reg)), 
(value)) + IAVF_PCI_REG_WRITE(IAVF_PCI_REG_ADDR((a), (reg)), (value)) #define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0])) -#define avf_debug(h, m, s, ...) \ +#define iavf_debug(h, m, s, ...) \ do { \ if (((m) & (h)->debug_mask)) \ - PMD_DRV_LOG_RAW(DEBUG, "avf %02x.%x " s, \ + PMD_DRV_LOG_RAW(DEBUG, "iavf %02x.%x " s, \ (h)->bus.device, (h)->bus.func, \ ##__VA_ARGS__); \ } while (0) /* memory allocation tracking */ -struct avf_dma_mem { +struct iavf_dma_mem { void *va; u64 pa; u32 size; const void *zone; } __attribute__((packed)); -struct avf_virt_mem { +struct iavf_virt_mem { void *va; u32 size; } __attribute__((packed)); /* SW spinlock */ -struct avf_spinlock { +struct iavf_spinlock { rte_spinlock_t spinlock; }; -#define avf_allocate_dma_mem(h, m, unused, s, a) \ - avf_allocate_dma_mem_d(h, m, s, a) -#define avf_free_dma_mem(h, m) avf_free_dma_mem_d(h, m) +#define iavf_allocate_dma_mem(h, m, unused, s, a) \ + iavf_allocate_dma_mem_d(h, m, s, a) +#define iavf_free_dma_mem(h, m) iavf_free_dma_mem_d(h, m) -#define avf_allocate_virt_mem(h, m, s) avf_allocate_virt_mem_d(h, m, s) -#define avf_free_virt_mem(h, m) avf_free_virt_mem_d(h, m) +#define iavf_allocate_virt_mem(h, m, s) iavf_allocate_virt_mem_d(h, m, s) +#define iavf_free_virt_mem(h, m) iavf_free_virt_mem_d(h, m) static inline void -avf_init_spinlock_d(struct avf_spinlock *sp) +iavf_init_spinlock_d(struct iavf_spinlock *sp) { rte_spinlock_init(&sp->spinlock); } static inline void -avf_acquire_spinlock_d(struct avf_spinlock *sp) +iavf_acquire_spinlock_d(struct iavf_spinlock *sp) { rte_spinlock_lock(&sp->spinlock); } static inline void -avf_release_spinlock_d(struct avf_spinlock *sp) +iavf_release_spinlock_d(struct iavf_spinlock *sp) { rte_spinlock_unlock(&sp->spinlock); } static inline void -avf_destroy_spinlock_d(__rte_unused struct avf_spinlock *sp) +iavf_destroy_spinlock_d(__rte_unused struct iavf_spinlock *sp) { } -#define avf_init_spinlock(_sp) avf_init_spinlock_d(_sp) -#define avf_acquire_spinlock(_sp) 
avf_acquire_spinlock_d(_sp) -#define avf_release_spinlock(_sp) avf_release_spinlock_d(_sp) -#define avf_destroy_spinlock(_sp) avf_destroy_spinlock_d(_sp) +#define iavf_init_spinlock(_sp) iavf_init_spinlock_d(_sp) +#define iavf_acquire_spinlock(_sp) iavf_acquire_spinlock_d(_sp) +#define iavf_release_spinlock(_sp) iavf_release_spinlock_d(_sp) +#define iavf_destroy_spinlock(_sp) iavf_destroy_spinlock_d(_sp) -#endif /* _AVF_OSDEP_H_ */ +#endif /* _IAVF_OSDEP_H_ */ diff --git a/drivers/net/iavf/base/iavf_prototype.h b/drivers/net/iavf/base/iavf_prototype.h index 8230d5cac..35122aedb 100644 --- a/drivers/net/iavf/base/iavf_prototype.h +++ b/drivers/net/iavf/base/iavf_prototype.h @@ -31,8 +31,8 @@ POSSIBILITY OF SUCH DAMAGE. ***************************************************************************/ -#ifndef _AVF_PROTOTYPE_H_ -#define _AVF_PROTOTYPE_H_ +#ifndef _IAVF_PROTOTYPE_H_ +#define _IAVF_PROTOTYPE_H_ #include "iavf_type.h" #include "iavf_alloc.h" @@ -46,161 +46,161 @@ POSSIBILITY OF SUCH DAMAGE. 
*/ /* adminq functions */ -enum avf_status_code avf_init_adminq(struct avf_hw *hw); -enum avf_status_code avf_shutdown_adminq(struct avf_hw *hw); -enum avf_status_code avf_init_asq(struct avf_hw *hw); -enum avf_status_code avf_init_arq(struct avf_hw *hw); -enum avf_status_code avf_alloc_adminq_asq_ring(struct avf_hw *hw); -enum avf_status_code avf_alloc_adminq_arq_ring(struct avf_hw *hw); -enum avf_status_code avf_shutdown_asq(struct avf_hw *hw); -enum avf_status_code avf_shutdown_arq(struct avf_hw *hw); -u16 avf_clean_asq(struct avf_hw *hw); -void avf_free_adminq_asq(struct avf_hw *hw); -void avf_free_adminq_arq(struct avf_hw *hw); -enum avf_status_code avf_validate_mac_addr(u8 *mac_addr); -void avf_adminq_init_ring_data(struct avf_hw *hw); -enum avf_status_code avf_clean_arq_element(struct avf_hw *hw, - struct avf_arq_event_info *e, +enum iavf_status_code iavf_init_adminq(struct iavf_hw *hw); +enum iavf_status_code iavf_shutdown_adminq(struct iavf_hw *hw); +enum iavf_status_code iavf_init_asq(struct iavf_hw *hw); +enum iavf_status_code iavf_init_arq(struct iavf_hw *hw); +enum iavf_status_code iavf_alloc_adminq_asq_ring(struct iavf_hw *hw); +enum iavf_status_code iavf_alloc_adminq_arq_ring(struct iavf_hw *hw); +enum iavf_status_code iavf_shutdown_asq(struct iavf_hw *hw); +enum iavf_status_code iavf_shutdown_arq(struct iavf_hw *hw); +u16 iavf_clean_asq(struct iavf_hw *hw); +void iavf_free_adminq_asq(struct iavf_hw *hw); +void iavf_free_adminq_arq(struct iavf_hw *hw); +enum iavf_status_code iavf_validate_mac_addr(u8 *mac_addr); +void iavf_adminq_init_ring_data(struct iavf_hw *hw); +enum iavf_status_code iavf_clean_arq_element(struct iavf_hw *hw, + struct iavf_arq_event_info *e, u16 *events_pending); -enum avf_status_code avf_asq_send_command(struct avf_hw *hw, - struct avf_aq_desc *desc, +enum iavf_status_code iavf_asq_send_command(struct iavf_hw *hw, + struct iavf_aq_desc *desc, void *buff, /* can be NULL */ u16 buff_size, - struct avf_asq_cmd_details 
*cmd_details); -bool avf_asq_done(struct avf_hw *hw); + struct iavf_asq_cmd_details *cmd_details); +bool iavf_asq_done(struct iavf_hw *hw); /* debug function for adminq */ -void avf_debug_aq(struct avf_hw *hw, enum avf_debug_mask mask, +void iavf_debug_aq(struct iavf_hw *hw, enum iavf_debug_mask mask, void *desc, void *buffer, u16 buf_len); -void avf_idle_aq(struct avf_hw *hw); -bool avf_check_asq_alive(struct avf_hw *hw); -enum avf_status_code avf_aq_queue_shutdown(struct avf_hw *hw, bool unloading); +void iavf_idle_aq(struct iavf_hw *hw); +bool iavf_check_asq_alive(struct iavf_hw *hw); +enum iavf_status_code iavf_aq_queue_shutdown(struct iavf_hw *hw, bool unloading); -enum avf_status_code avf_aq_get_rss_lut(struct avf_hw *hw, u16 seid, +enum iavf_status_code iavf_aq_get_rss_lut(struct iavf_hw *hw, u16 seid, bool pf_lut, u8 *lut, u16 lut_size); -enum avf_status_code avf_aq_set_rss_lut(struct avf_hw *hw, u16 seid, +enum iavf_status_code iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid, bool pf_lut, u8 *lut, u16 lut_size); -enum avf_status_code avf_aq_get_rss_key(struct avf_hw *hw, +enum iavf_status_code iavf_aq_get_rss_key(struct iavf_hw *hw, u16 seid, - struct avf_aqc_get_set_rss_key_data *key); -enum avf_status_code avf_aq_set_rss_key(struct avf_hw *hw, + struct iavf_aqc_get_set_rss_key_data *key); +enum iavf_status_code iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid, - struct avf_aqc_get_set_rss_key_data *key); -const char *avf_aq_str(struct avf_hw *hw, enum avf_admin_queue_err aq_err); -const char *avf_stat_str(struct avf_hw *hw, enum avf_status_code stat_err); + struct iavf_aqc_get_set_rss_key_data *key); +const char *iavf_aq_str(struct iavf_hw *hw, enum iavf_admin_queue_err aq_err); +const char *iavf_stat_str(struct iavf_hw *hw, enum iavf_status_code stat_err); -enum avf_status_code avf_set_mac_type(struct avf_hw *hw); +enum iavf_status_code iavf_set_mac_type(struct iavf_hw *hw); -extern struct avf_rx_ptype_decoded avf_ptype_lookup[]; +extern struct 
iavf_rx_ptype_decoded iavf_ptype_lookup[]; -STATIC INLINE struct avf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype) +STATIC INLINE struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype) { - return avf_ptype_lookup[ptype]; + return iavf_ptype_lookup[ptype]; } /* prototype for functions used for SW spinlocks */ -void avf_init_spinlock(struct avf_spinlock *sp); -void avf_acquire_spinlock(struct avf_spinlock *sp); -void avf_release_spinlock(struct avf_spinlock *sp); -void avf_destroy_spinlock(struct avf_spinlock *sp); +void iavf_init_spinlock(struct iavf_spinlock *sp); +void iavf_acquire_spinlock(struct iavf_spinlock *sp); +void iavf_release_spinlock(struct iavf_spinlock *sp); +void iavf_destroy_spinlock(struct iavf_spinlock *sp); -/* avf_common for VF drivers*/ -void avf_parse_hw_config(struct avf_hw *hw, +/* iavf_common for VF drivers*/ +void iavf_parse_hw_config(struct iavf_hw *hw, struct virtchnl_vf_resource *msg); -enum avf_status_code avf_reset(struct avf_hw *hw); -enum avf_status_code avf_aq_send_msg_to_pf(struct avf_hw *hw, +enum iavf_status_code iavf_reset(struct iavf_hw *hw); +enum iavf_status_code iavf_aq_send_msg_to_pf(struct iavf_hw *hw, enum virtchnl_ops v_opcode, - enum avf_status_code v_retval, + enum iavf_status_code v_retval, u8 *msg, u16 msglen, - struct avf_asq_cmd_details *cmd_details); -enum avf_status_code avf_set_filter_control(struct avf_hw *hw, - struct avf_filter_control_settings *settings); -enum avf_status_code avf_aq_add_rem_control_packet_filter(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_set_filter_control(struct iavf_hw *hw, + struct iavf_filter_control_settings *settings); +enum iavf_status_code iavf_aq_add_rem_control_packet_filter(struct iavf_hw *hw, u8 *mac_addr, u16 ethtype, u16 flags, u16 vsi_seid, u16 queue, bool is_add, - struct avf_control_filter_stats *stats, - struct avf_asq_cmd_details *cmd_details); -enum avf_status_code avf_aq_debug_dump(struct avf_hw *hw, u8 
cluster_id, + struct iavf_control_filter_stats *stats, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_aq_debug_dump(struct iavf_hw *hw, u8 cluster_id, u8 table_id, u32 start_index, u16 buff_size, void *buff, u16 *ret_buff_size, u8 *ret_next_table, u32 *ret_next_index, - struct avf_asq_cmd_details *cmd_details); -void avf_add_filter_to_drop_tx_flow_control_frames(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +void iavf_add_filter_to_drop_tx_flow_control_frames(struct iavf_hw *hw, u16 vsi_seid); -enum avf_status_code avf_aq_rx_ctl_read_register(struct avf_hw *hw, +enum iavf_status_code iavf_aq_rx_ctl_read_register(struct iavf_hw *hw, u32 reg_addr, u32 *reg_val, - struct avf_asq_cmd_details *cmd_details); -u32 avf_read_rx_ctl(struct avf_hw *hw, u32 reg_addr); -enum avf_status_code avf_aq_rx_ctl_write_register(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +u32 iavf_read_rx_ctl(struct iavf_hw *hw, u32 reg_addr); +enum iavf_status_code iavf_aq_rx_ctl_write_register(struct iavf_hw *hw, u32 reg_addr, u32 reg_val, - struct avf_asq_cmd_details *cmd_details); -void avf_write_rx_ctl(struct avf_hw *hw, u32 reg_addr, u32 reg_val); -enum avf_status_code avf_aq_set_phy_register(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +void iavf_write_rx_ctl(struct iavf_hw *hw, u32 reg_addr, u32 reg_val); +enum iavf_status_code iavf_aq_set_phy_register(struct iavf_hw *hw, u8 phy_select, u8 dev_addr, u32 reg_addr, u32 reg_val, - struct avf_asq_cmd_details *cmd_details); -enum avf_status_code avf_aq_get_phy_register(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_aq_get_phy_register(struct iavf_hw *hw, u8 phy_select, u8 dev_addr, u32 reg_addr, u32 *reg_val, - struct avf_asq_cmd_details *cmd_details); - -enum avf_status_code avf_aq_set_arp_proxy_config(struct avf_hw *hw, - struct avf_aqc_arp_proxy_data *proxy_config, - struct avf_asq_cmd_details *cmd_details); -enum 
avf_status_code avf_aq_set_ns_proxy_table_entry(struct avf_hw *hw, - struct avf_aqc_ns_proxy_data *ns_proxy_table_entry, - struct avf_asq_cmd_details *cmd_details); -enum avf_status_code avf_aq_set_clear_wol_filter(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); + +enum iavf_status_code iavf_aq_set_arp_proxy_config(struct iavf_hw *hw, + struct iavf_aqc_arp_proxy_data *proxy_config, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_aq_set_ns_proxy_table_entry(struct iavf_hw *hw, + struct iavf_aqc_ns_proxy_data *ns_proxy_table_entry, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_aq_set_clear_wol_filter(struct iavf_hw *hw, u8 filter_index, - struct avf_aqc_set_wol_filter_data *filter, + struct iavf_aqc_set_wol_filter_data *filter, bool set_filter, bool no_wol_tco, bool filter_valid, bool no_wol_tco_valid, - struct avf_asq_cmd_details *cmd_details); -enum avf_status_code avf_aq_get_wake_event_reason(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_aq_get_wake_event_reason(struct iavf_hw *hw, u16 *wake_reason, - struct avf_asq_cmd_details *cmd_details); -enum avf_status_code avf_aq_clear_all_wol_filters(struct avf_hw *hw, - struct avf_asq_cmd_details *cmd_details); -enum avf_status_code avf_read_phy_register_clause22(struct avf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_aq_clear_all_wol_filters(struct iavf_hw *hw, + struct iavf_asq_cmd_details *cmd_details); +enum iavf_status_code iavf_read_phy_register_clause22(struct iavf_hw *hw, u16 reg, u8 phy_addr, u16 *value); -enum avf_status_code avf_write_phy_register_clause22(struct avf_hw *hw, +enum iavf_status_code iavf_write_phy_register_clause22(struct iavf_hw *hw, u16 reg, u8 phy_addr, u16 value); -enum avf_status_code avf_read_phy_register_clause45(struct avf_hw *hw, +enum iavf_status_code iavf_read_phy_register_clause45(struct iavf_hw *hw, u8 page, u16 reg, u8 phy_addr, u16 
*value); -enum avf_status_code avf_write_phy_register_clause45(struct avf_hw *hw, +enum iavf_status_code iavf_write_phy_register_clause45(struct iavf_hw *hw, u8 page, u16 reg, u8 phy_addr, u16 value); -enum avf_status_code avf_read_phy_register(struct avf_hw *hw, +enum iavf_status_code iavf_read_phy_register(struct iavf_hw *hw, u8 page, u16 reg, u8 phy_addr, u16 *value); -enum avf_status_code avf_write_phy_register(struct avf_hw *hw, +enum iavf_status_code iavf_write_phy_register(struct iavf_hw *hw, u8 page, u16 reg, u8 phy_addr, u16 value); -u8 avf_get_phy_address(struct avf_hw *hw, u8 dev_num); -enum avf_status_code avf_blink_phy_link_led(struct avf_hw *hw, +u8 iavf_get_phy_address(struct iavf_hw *hw, u8 dev_num); +enum iavf_status_code iavf_blink_phy_link_led(struct iavf_hw *hw, u32 time, u32 interval); -enum avf_status_code avf_aq_write_ddp(struct avf_hw *hw, void *buff, +enum iavf_status_code iavf_aq_write_ddp(struct iavf_hw *hw, void *buff, u16 buff_size, u32 track_id, u32 *error_offset, u32 *error_info, - struct avf_asq_cmd_details * + struct iavf_asq_cmd_details * cmd_details); -enum avf_status_code avf_aq_get_ddp_list(struct avf_hw *hw, void *buff, +enum iavf_status_code iavf_aq_get_ddp_list(struct iavf_hw *hw, void *buff, u16 buff_size, u8 flags, - struct avf_asq_cmd_details * + struct iavf_asq_cmd_details * cmd_details); -struct avf_generic_seg_header * -avf_find_segment_in_package(u32 segment_type, - struct avf_package_header *pkg_header); -struct avf_profile_section_header * -avf_find_section_in_profile(u32 section_type, - struct avf_profile_segment *profile); -enum avf_status_code -avf_write_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg, +struct iavf_generic_seg_header * +iavf_find_segment_in_package(u32 segment_type, + struct iavf_package_header *pkg_header); +struct iavf_profile_section_header * +iavf_find_section_in_profile(u32 section_type, + struct iavf_profile_segment *profile); +enum iavf_status_code +iavf_write_profile(struct 
iavf_hw *hw, struct iavf_profile_segment *iavf_seg, u32 track_id); -enum avf_status_code -avf_rollback_profile(struct avf_hw *hw, struct avf_profile_segment *avf_seg, +enum iavf_status_code +iavf_rollback_profile(struct iavf_hw *hw, struct iavf_profile_segment *iavf_seg, u32 track_id); -enum avf_status_code -avf_add_pinfo_to_list(struct avf_hw *hw, - struct avf_profile_segment *profile, +enum iavf_status_code +iavf_add_pinfo_to_list(struct iavf_hw *hw, + struct iavf_profile_segment *profile, u8 *profile_info_sec, u32 track_id); -#endif /* _AVF_PROTOTYPE_H_ */ +#endif /* _IAVF_PROTOTYPE_H_ */ diff --git a/drivers/net/iavf/base/iavf_register.h b/drivers/net/iavf/base/iavf_register.h index adb989583..de0629c5e 100644 --- a/drivers/net/iavf/base/iavf_register.h +++ b/drivers/net/iavf/base/iavf_register.h @@ -31,316 +31,316 @@ POSSIBILITY OF SUCH DAMAGE. ***************************************************************************/ -#ifndef _AVF_REGISTER_H_ -#define _AVF_REGISTER_H_ +#ifndef _IAVF_REGISTER_H_ +#define _IAVF_REGISTER_H_ -#define AVFMSIX_PBA1(_i) (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */ -#define AVFMSIX_PBA1_MAX_INDEX 19 -#define AVFMSIX_PBA1_PENBIT_SHIFT 0 -#define AVFMSIX_PBA1_PENBIT_MASK AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA1_PENBIT_SHIFT) -#define AVFMSIX_TADD1(_i) (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ -#define AVFMSIX_TADD1_MAX_INDEX 639 -#define AVFMSIX_TADD1_MSIXTADD10_SHIFT 0 -#define AVFMSIX_TADD1_MSIXTADD10_MASK AVF_MASK(0x3, AVFMSIX_TADD1_MSIXTADD10_SHIFT) -#define AVFMSIX_TADD1_MSIXTADD_SHIFT 2 -#define AVFMSIX_TADD1_MSIXTADD_MASK AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD1_MSIXTADD_SHIFT) -#define AVFMSIX_TMSG1(_i) (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ -#define AVFMSIX_TMSG1_MAX_INDEX 639 -#define AVFMSIX_TMSG1_MSIXTMSG_SHIFT 0 -#define AVFMSIX_TMSG1_MSIXTMSG_MASK AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG1_MSIXTMSG_SHIFT) -#define AVFMSIX_TUADD1(_i) (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* 
Reset: VFLR */ -#define AVFMSIX_TUADD1_MAX_INDEX 639 -#define AVFMSIX_TUADD1_MSIXTUADD_SHIFT 0 -#define AVFMSIX_TUADD1_MSIXTUADD_MASK AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD1_MSIXTUADD_SHIFT) -#define AVFMSIX_TVCTRL1(_i) (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ -#define AVFMSIX_TVCTRL1_MAX_INDEX 639 -#define AVFMSIX_TVCTRL1_MASK_SHIFT 0 -#define AVFMSIX_TVCTRL1_MASK_MASK AVF_MASK(0x1, AVFMSIX_TVCTRL1_MASK_SHIFT) -#define AVF_ARQBAH1 0x00006000 /* Reset: EMPR */ -#define AVF_ARQBAH1_ARQBAH_SHIFT 0 -#define AVF_ARQBAH1_ARQBAH_MASK AVF_MASK(0xFFFFFFFF, AVF_ARQBAH1_ARQBAH_SHIFT) -#define AVF_ARQBAL1 0x00006C00 /* Reset: EMPR */ -#define AVF_ARQBAL1_ARQBAL_SHIFT 0 -#define AVF_ARQBAL1_ARQBAL_MASK AVF_MASK(0xFFFFFFFF, AVF_ARQBAL1_ARQBAL_SHIFT) -#define AVF_ARQH1 0x00007400 /* Reset: EMPR */ -#define AVF_ARQH1_ARQH_SHIFT 0 -#define AVF_ARQH1_ARQH_MASK AVF_MASK(0x3FF, AVF_ARQH1_ARQH_SHIFT) -#define AVF_ARQLEN1 0x00008000 /* Reset: EMPR */ -#define AVF_ARQLEN1_ARQLEN_SHIFT 0 -#define AVF_ARQLEN1_ARQLEN_MASK AVF_MASK(0x3FF, AVF_ARQLEN1_ARQLEN_SHIFT) -#define AVF_ARQLEN1_ARQVFE_SHIFT 28 -#define AVF_ARQLEN1_ARQVFE_MASK AVF_MASK(0x1, AVF_ARQLEN1_ARQVFE_SHIFT) -#define AVF_ARQLEN1_ARQOVFL_SHIFT 29 -#define AVF_ARQLEN1_ARQOVFL_MASK AVF_MASK(0x1, AVF_ARQLEN1_ARQOVFL_SHIFT) -#define AVF_ARQLEN1_ARQCRIT_SHIFT 30 -#define AVF_ARQLEN1_ARQCRIT_MASK AVF_MASK(0x1, AVF_ARQLEN1_ARQCRIT_SHIFT) -#define AVF_ARQLEN1_ARQENABLE_SHIFT 31 -#define AVF_ARQLEN1_ARQENABLE_MASK AVF_MASK(0x1U, AVF_ARQLEN1_ARQENABLE_SHIFT) -#define AVF_ARQT1 0x00007000 /* Reset: EMPR */ -#define AVF_ARQT1_ARQT_SHIFT 0 -#define AVF_ARQT1_ARQT_MASK AVF_MASK(0x3FF, AVF_ARQT1_ARQT_SHIFT) -#define AVF_ATQBAH1 0x00007800 /* Reset: EMPR */ -#define AVF_ATQBAH1_ATQBAH_SHIFT 0 -#define AVF_ATQBAH1_ATQBAH_MASK AVF_MASK(0xFFFFFFFF, AVF_ATQBAH1_ATQBAH_SHIFT) -#define AVF_ATQBAL1 0x00007C00 /* Reset: EMPR */ -#define AVF_ATQBAL1_ATQBAL_SHIFT 0 -#define AVF_ATQBAL1_ATQBAL_MASK AVF_MASK(0xFFFFFFFF, 
AVF_ATQBAL1_ATQBAL_SHIFT) -#define AVF_ATQH1 0x00006400 /* Reset: EMPR */ -#define AVF_ATQH1_ATQH_SHIFT 0 -#define AVF_ATQH1_ATQH_MASK AVF_MASK(0x3FF, AVF_ATQH1_ATQH_SHIFT) -#define AVF_ATQLEN1 0x00006800 /* Reset: EMPR */ -#define AVF_ATQLEN1_ATQLEN_SHIFT 0 -#define AVF_ATQLEN1_ATQLEN_MASK AVF_MASK(0x3FF, AVF_ATQLEN1_ATQLEN_SHIFT) -#define AVF_ATQLEN1_ATQVFE_SHIFT 28 -#define AVF_ATQLEN1_ATQVFE_MASK AVF_MASK(0x1, AVF_ATQLEN1_ATQVFE_SHIFT) -#define AVF_ATQLEN1_ATQOVFL_SHIFT 29 -#define AVF_ATQLEN1_ATQOVFL_MASK AVF_MASK(0x1, AVF_ATQLEN1_ATQOVFL_SHIFT) -#define AVF_ATQLEN1_ATQCRIT_SHIFT 30 -#define AVF_ATQLEN1_ATQCRIT_MASK AVF_MASK(0x1, AVF_ATQLEN1_ATQCRIT_SHIFT) -#define AVF_ATQLEN1_ATQENABLE_SHIFT 31 -#define AVF_ATQLEN1_ATQENABLE_MASK AVF_MASK(0x1U, AVF_ATQLEN1_ATQENABLE_SHIFT) -#define AVF_ATQT1 0x00008400 /* Reset: EMPR */ -#define AVF_ATQT1_ATQT_SHIFT 0 -#define AVF_ATQT1_ATQT_MASK AVF_MASK(0x3FF, AVF_ATQT1_ATQT_SHIFT) -#define AVFGEN_RSTAT 0x00008800 /* Reset: VFR */ -#define AVFGEN_RSTAT_VFR_STATE_SHIFT 0 -#define AVFGEN_RSTAT_VFR_STATE_MASK AVF_MASK(0x3, AVFGEN_RSTAT_VFR_STATE_SHIFT) -#define AVFINT_DYN_CTL01 0x00005C00 /* Reset: VFR */ -#define AVFINT_DYN_CTL01_INTENA_SHIFT 0 -#define AVFINT_DYN_CTL01_INTENA_MASK AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_SHIFT) -#define AVFINT_DYN_CTL01_CLEARPBA_SHIFT 1 -#define AVFINT_DYN_CTL01_CLEARPBA_MASK AVF_MASK(0x1, AVFINT_DYN_CTL01_CLEARPBA_SHIFT) -#define AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT 2 -#define AVFINT_DYN_CTL01_SWINT_TRIG_MASK AVF_MASK(0x1, AVFINT_DYN_CTL01_SWINT_TRIG_SHIFT) -#define AVFINT_DYN_CTL01_ITR_INDX_SHIFT 3 -#define AVFINT_DYN_CTL01_ITR_INDX_MASK AVF_MASK(0x3, AVFINT_DYN_CTL01_ITR_INDX_SHIFT) -#define AVFINT_DYN_CTL01_INTERVAL_SHIFT 5 -#define AVFINT_DYN_CTL01_INTERVAL_MASK AVF_MASK(0xFFF, AVFINT_DYN_CTL01_INTERVAL_SHIFT) -#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24 -#define AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK AVF_MASK(0x1, AVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT) -#define 
AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT 25 -#define AVFINT_DYN_CTL01_SW_ITR_INDX_MASK AVF_MASK(0x3, AVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT) -#define AVFINT_DYN_CTL01_INTENA_MSK_SHIFT 31 -#define AVFINT_DYN_CTL01_INTENA_MSK_MASK AVF_MASK(0x1, AVFINT_DYN_CTL01_INTENA_MSK_SHIFT) -#define AVFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */ -#define AVFINT_DYN_CTLN1_MAX_INDEX 15 -#define AVFINT_DYN_CTLN1_INTENA_SHIFT 0 -#define AVFINT_DYN_CTLN1_INTENA_MASK AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_SHIFT) -#define AVFINT_DYN_CTLN1_CLEARPBA_SHIFT 1 -#define AVFINT_DYN_CTLN1_CLEARPBA_MASK AVF_MASK(0x1, AVFINT_DYN_CTLN1_CLEARPBA_SHIFT) -#define AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2 -#define AVFINT_DYN_CTLN1_SWINT_TRIG_MASK AVF_MASK(0x1, AVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT) -#define AVFINT_DYN_CTLN1_ITR_INDX_SHIFT 3 -#define AVFINT_DYN_CTLN1_ITR_INDX_MASK AVF_MASK(0x3, AVFINT_DYN_CTLN1_ITR_INDX_SHIFT) -#define AVFINT_DYN_CTLN1_INTERVAL_SHIFT 5 -#define AVFINT_DYN_CTLN1_INTERVAL_MASK AVF_MASK(0xFFF, AVFINT_DYN_CTLN1_INTERVAL_SHIFT) -#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24 -#define AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK AVF_MASK(0x1, AVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT) -#define AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT 25 -#define AVFINT_DYN_CTLN1_SW_ITR_INDX_MASK AVF_MASK(0x3, AVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT) -#define AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT 31 -#define AVFINT_DYN_CTLN1_INTENA_MSK_MASK AVF_MASK(0x1, AVFINT_DYN_CTLN1_INTENA_MSK_SHIFT) -#define AVFINT_ICR0_ENA1 0x00005000 /* Reset: CORER */ -#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25 -#define AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK AVF_MASK(0x1, AVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT) -#define AVFINT_ICR0_ENA1_ADMINQ_SHIFT 30 -#define AVFINT_ICR0_ENA1_ADMINQ_MASK AVF_MASK(0x1, AVFINT_ICR0_ENA1_ADMINQ_SHIFT) -#define AVFINT_ICR0_ENA1_RSVD_SHIFT 31 -#define AVFINT_ICR0_ENA1_RSVD_MASK AVF_MASK(0x1, AVFINT_ICR0_ENA1_RSVD_SHIFT) -#define AVFINT_ICR01 0x00004800 /* Reset: CORER */ 
-#define AVFINT_ICR01_INTEVENT_SHIFT 0
-#define AVFINT_ICR01_INTEVENT_MASK AVF_MASK(0x1, AVFINT_ICR01_INTEVENT_SHIFT)
-#define AVFINT_ICR01_QUEUE_0_SHIFT 1
-#define AVFINT_ICR01_QUEUE_0_MASK AVF_MASK(0x1, AVFINT_ICR01_QUEUE_0_SHIFT)
-#define AVFINT_ICR01_QUEUE_1_SHIFT 2
-#define AVFINT_ICR01_QUEUE_1_MASK AVF_MASK(0x1, AVFINT_ICR01_QUEUE_1_SHIFT)
-#define AVFINT_ICR01_QUEUE_2_SHIFT 3
-#define AVFINT_ICR01_QUEUE_2_MASK AVF_MASK(0x1, AVFINT_ICR01_QUEUE_2_SHIFT)
-#define AVFINT_ICR01_QUEUE_3_SHIFT 4
-#define AVFINT_ICR01_QUEUE_3_MASK AVF_MASK(0x1, AVFINT_ICR01_QUEUE_3_SHIFT)
-#define AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
-#define AVFINT_ICR01_LINK_STAT_CHANGE_MASK AVF_MASK(0x1, AVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
-#define AVFINT_ICR01_ADMINQ_SHIFT 30
-#define AVFINT_ICR01_ADMINQ_MASK AVF_MASK(0x1, AVFINT_ICR01_ADMINQ_SHIFT)
-#define AVFINT_ICR01_SWINT_SHIFT 31
-#define AVFINT_ICR01_SWINT_MASK AVF_MASK(0x1, AVFINT_ICR01_SWINT_SHIFT)
-#define AVFINT_ITR01(_i) (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
-#define AVFINT_ITR01_MAX_INDEX 2
-#define AVFINT_ITR01_INTERVAL_SHIFT 0
-#define AVFINT_ITR01_INTERVAL_MASK AVF_MASK(0xFFF, AVFINT_ITR01_INTERVAL_SHIFT)
-#define AVFINT_ITRN1(_i, _INTVF) (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
-#define AVFINT_ITRN1_MAX_INDEX 2
-#define AVFINT_ITRN1_INTERVAL_SHIFT 0
-#define AVFINT_ITRN1_INTERVAL_MASK AVF_MASK(0xFFF, AVFINT_ITRN1_INTERVAL_SHIFT)
-#define AVFINT_STAT_CTL01 0x00005400 /* Reset: CORER */
-#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
-#define AVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK AVF_MASK(0x3, AVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
-#define AVF_QRX_TAIL1(_Q) (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
-#define AVF_QRX_TAIL1_MAX_INDEX 15
-#define AVF_QRX_TAIL1_TAIL_SHIFT 0
-#define AVF_QRX_TAIL1_TAIL_MASK AVF_MASK(0x1FFF, AVF_QRX_TAIL1_TAIL_SHIFT)
-#define AVF_QTX_TAIL1(_Q) (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
-#define AVF_QTX_TAIL1_MAX_INDEX 15
-#define AVF_QTX_TAIL1_TAIL_SHIFT 0
-#define AVF_QTX_TAIL1_TAIL_MASK AVF_MASK(0x1FFF, AVF_QTX_TAIL1_TAIL_SHIFT)
-#define AVFMSIX_PBA 0x00002000 /* Reset: VFLR */
-#define AVFMSIX_PBA_PENBIT_SHIFT 0
-#define AVFMSIX_PBA_PENBIT_MASK AVF_MASK(0xFFFFFFFF, AVFMSIX_PBA_PENBIT_SHIFT)
-#define AVFMSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
-#define AVFMSIX_TADD_MAX_INDEX 16
-#define AVFMSIX_TADD_MSIXTADD10_SHIFT 0
-#define AVFMSIX_TADD_MSIXTADD10_MASK AVF_MASK(0x3, AVFMSIX_TADD_MSIXTADD10_SHIFT)
-#define AVFMSIX_TADD_MSIXTADD_SHIFT 2
-#define AVFMSIX_TADD_MSIXTADD_MASK AVF_MASK(0x3FFFFFFF, AVFMSIX_TADD_MSIXTADD_SHIFT)
-#define AVFMSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
-#define AVFMSIX_TMSG_MAX_INDEX 16
-#define AVFMSIX_TMSG_MSIXTMSG_SHIFT 0
-#define AVFMSIX_TMSG_MSIXTMSG_MASK AVF_MASK(0xFFFFFFFF, AVFMSIX_TMSG_MSIXTMSG_SHIFT)
-#define AVFMSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
-#define AVFMSIX_TUADD_MAX_INDEX 16
-#define AVFMSIX_TUADD_MSIXTUADD_SHIFT 0
-#define AVFMSIX_TUADD_MSIXTUADD_MASK AVF_MASK(0xFFFFFFFF, AVFMSIX_TUADD_MSIXTUADD_SHIFT)
-#define AVFMSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
-#define AVFMSIX_TVCTRL_MAX_INDEX 16
-#define AVFMSIX_TVCTRL_MASK_SHIFT 0
-#define AVFMSIX_TVCTRL_MASK_MASK AVF_MASK(0x1, AVFMSIX_TVCTRL_MASK_SHIFT)
-#define AVFCM_PE_ERRDATA 0x0000DC00 /* Reset: VFR */
-#define AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
-#define AVFCM_PE_ERRDATA_ERROR_CODE_MASK AVF_MASK(0xF, AVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
-#define AVFCM_PE_ERRDATA_Q_TYPE_SHIFT 4
-#define AVFCM_PE_ERRDATA_Q_TYPE_MASK AVF_MASK(0x7, AVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
-#define AVFCM_PE_ERRDATA_Q_NUM_SHIFT 8
-#define AVFCM_PE_ERRDATA_Q_NUM_MASK AVF_MASK(0x3FFFF, AVFCM_PE_ERRDATA_Q_NUM_SHIFT)
-#define AVFCM_PE_ERRINFO 0x0000D800 /* Reset: VFR */
-#define AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT 0
-#define AVFCM_PE_ERRINFO_ERROR_VALID_MASK AVF_MASK(0x1, AVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
-#define AVFCM_PE_ERRINFO_ERROR_INST_SHIFT 4
-#define AVFCM_PE_ERRINFO_ERROR_INST_MASK AVF_MASK(0x7, AVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
-#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
-#define AVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK AVF_MASK(0xFF, AVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
-#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
-#define AVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
-#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
-#define AVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK AVF_MASK(0xFF, AVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
-#define AVFQF_HENA(_i) (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
-#define AVFQF_HENA_MAX_INDEX 1
-#define AVFQF_HENA_PTYPE_ENA_SHIFT 0
-#define AVFQF_HENA_PTYPE_ENA_MASK AVF_MASK(0xFFFFFFFF, AVFQF_HENA_PTYPE_ENA_SHIFT)
-#define AVFQF_HKEY(_i) (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
-#define AVFQF_HKEY_MAX_INDEX 12
-#define AVFQF_HKEY_KEY_0_SHIFT 0
-#define AVFQF_HKEY_KEY_0_MASK AVF_MASK(0xFF, AVFQF_HKEY_KEY_0_SHIFT)
-#define AVFQF_HKEY_KEY_1_SHIFT 8
-#define AVFQF_HKEY_KEY_1_MASK AVF_MASK(0xFF, AVFQF_HKEY_KEY_1_SHIFT)
-#define AVFQF_HKEY_KEY_2_SHIFT 16
-#define AVFQF_HKEY_KEY_2_MASK AVF_MASK(0xFF, AVFQF_HKEY_KEY_2_SHIFT)
-#define AVFQF_HKEY_KEY_3_SHIFT 24
-#define AVFQF_HKEY_KEY_3_MASK AVF_MASK(0xFF, AVFQF_HKEY_KEY_3_SHIFT)
-#define AVFQF_HLUT(_i) (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
-#define AVFQF_HLUT_MAX_INDEX 15
-#define AVFQF_HLUT_LUT0_SHIFT 0
-#define AVFQF_HLUT_LUT0_MASK AVF_MASK(0xF, AVFQF_HLUT_LUT0_SHIFT)
-#define AVFQF_HLUT_LUT1_SHIFT 8
-#define AVFQF_HLUT_LUT1_MASK AVF_MASK(0xF, AVFQF_HLUT_LUT1_SHIFT)
-#define AVFQF_HLUT_LUT2_SHIFT 16
-#define AVFQF_HLUT_LUT2_MASK AVF_MASK(0xF, AVFQF_HLUT_LUT2_SHIFT)
-#define AVFQF_HLUT_LUT3_SHIFT 24
-#define AVFQF_HLUT_LUT3_MASK AVF_MASK(0xF, AVFQF_HLUT_LUT3_SHIFT)
-#define AVFQF_HREGION(_i) (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
-#define AVFQF_HREGION_MAX_INDEX 7
-#define AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
-#define AVFQF_HREGION_OVERRIDE_ENA_0_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
-#define AVFQF_HREGION_REGION_0_SHIFT 1
-#define AVFQF_HREGION_REGION_0_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_0_SHIFT)
-#define AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
-#define AVFQF_HREGION_OVERRIDE_ENA_1_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
-#define AVFQF_HREGION_REGION_1_SHIFT 5
-#define AVFQF_HREGION_REGION_1_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_1_SHIFT)
-#define AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
-#define AVFQF_HREGION_OVERRIDE_ENA_2_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
-#define AVFQF_HREGION_REGION_2_SHIFT 9
-#define AVFQF_HREGION_REGION_2_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_2_SHIFT)
-#define AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
-#define AVFQF_HREGION_OVERRIDE_ENA_3_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
-#define AVFQF_HREGION_REGION_3_SHIFT 13
-#define AVFQF_HREGION_REGION_3_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_3_SHIFT)
-#define AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
-#define AVFQF_HREGION_OVERRIDE_ENA_4_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
-#define AVFQF_HREGION_REGION_4_SHIFT 17
-#define AVFQF_HREGION_REGION_4_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_4_SHIFT)
-#define AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
-#define AVFQF_HREGION_OVERRIDE_ENA_5_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
-#define AVFQF_HREGION_REGION_5_SHIFT 21
-#define AVFQF_HREGION_REGION_5_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_5_SHIFT)
-#define AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
-#define AVFQF_HREGION_OVERRIDE_ENA_6_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
-#define AVFQF_HREGION_REGION_6_SHIFT 25
-#define AVFQF_HREGION_REGION_6_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_6_SHIFT)
-#define AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
-#define AVFQF_HREGION_OVERRIDE_ENA_7_MASK AVF_MASK(0x1, AVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
-#define AVFQF_HREGION_REGION_7_SHIFT 29
-#define AVFQF_HREGION_REGION_7_MASK AVF_MASK(0x7, AVFQF_HREGION_REGION_7_SHIFT)
+#define IAVFMSIX_PBA1(_i) (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */
+#define IAVFMSIX_PBA1_MAX_INDEX 19
+#define IAVFMSIX_PBA1_PENBIT_SHIFT 0
+#define IAVFMSIX_PBA1_PENBIT_MASK IAVF_MASK(0xFFFFFFFF, IAVFMSIX_PBA1_PENBIT_SHIFT)
+#define IAVFMSIX_TADD1(_i) (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define IAVFMSIX_TADD1_MAX_INDEX 639
+#define IAVFMSIX_TADD1_MSIXTADD10_SHIFT 0
+#define IAVFMSIX_TADD1_MSIXTADD10_MASK IAVF_MASK(0x3, IAVFMSIX_TADD1_MSIXTADD10_SHIFT)
+#define IAVFMSIX_TADD1_MSIXTADD_SHIFT 2
+#define IAVFMSIX_TADD1_MSIXTADD_MASK IAVF_MASK(0x3FFFFFFF, IAVFMSIX_TADD1_MSIXTADD_SHIFT)
+#define IAVFMSIX_TMSG1(_i) (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define IAVFMSIX_TMSG1_MAX_INDEX 639
+#define IAVFMSIX_TMSG1_MSIXTMSG_SHIFT 0
+#define IAVFMSIX_TMSG1_MSIXTMSG_MASK IAVF_MASK(0xFFFFFFFF, IAVFMSIX_TMSG1_MSIXTMSG_SHIFT)
+#define IAVFMSIX_TUADD1(_i) (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define IAVFMSIX_TUADD1_MAX_INDEX 639
+#define IAVFMSIX_TUADD1_MSIXTUADD_SHIFT 0
+#define IAVFMSIX_TUADD1_MSIXTUADD_MASK IAVF_MASK(0xFFFFFFFF, IAVFMSIX_TUADD1_MSIXTUADD_SHIFT)
+#define IAVFMSIX_TVCTRL1(_i) (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */
+#define IAVFMSIX_TVCTRL1_MAX_INDEX 639
+#define IAVFMSIX_TVCTRL1_MASK_SHIFT 0
+#define IAVFMSIX_TVCTRL1_MASK_MASK IAVF_MASK(0x1, IAVFMSIX_TVCTRL1_MASK_SHIFT)
+#define IAVF_ARQBAH1 0x00006000 /* Reset: EMPR */
+#define IAVF_ARQBAH1_ARQBAH_SHIFT 0
+#define IAVF_ARQBAH1_ARQBAH_MASK IAVF_MASK(0xFFFFFFFF, IAVF_ARQBAH1_ARQBAH_SHIFT)
+#define IAVF_ARQBAL1 0x00006C00 /* Reset: EMPR */
+#define IAVF_ARQBAL1_ARQBAL_SHIFT 0
+#define IAVF_ARQBAL1_ARQBAL_MASK IAVF_MASK(0xFFFFFFFF, IAVF_ARQBAL1_ARQBAL_SHIFT)
+#define IAVF_ARQH1 0x00007400 /* Reset: EMPR */
+#define IAVF_ARQH1_ARQH_SHIFT 0
+#define IAVF_ARQH1_ARQH_MASK IAVF_MASK(0x3FF, IAVF_ARQH1_ARQH_SHIFT)
+#define IAVF_ARQLEN1 0x00008000 /* Reset: EMPR */
+#define IAVF_ARQLEN1_ARQLEN_SHIFT 0
+#define IAVF_ARQLEN1_ARQLEN_MASK IAVF_MASK(0x3FF, IAVF_ARQLEN1_ARQLEN_SHIFT)
+#define IAVF_ARQLEN1_ARQVFE_SHIFT 28
+#define IAVF_ARQLEN1_ARQVFE_MASK IAVF_MASK(0x1, IAVF_ARQLEN1_ARQVFE_SHIFT)
+#define IAVF_ARQLEN1_ARQOVFL_SHIFT 29
+#define IAVF_ARQLEN1_ARQOVFL_MASK IAVF_MASK(0x1, IAVF_ARQLEN1_ARQOVFL_SHIFT)
+#define IAVF_ARQLEN1_ARQCRIT_SHIFT 30
+#define IAVF_ARQLEN1_ARQCRIT_MASK IAVF_MASK(0x1, IAVF_ARQLEN1_ARQCRIT_SHIFT)
+#define IAVF_ARQLEN1_ARQENABLE_SHIFT 31
+#define IAVF_ARQLEN1_ARQENABLE_MASK IAVF_MASK(0x1U, IAVF_ARQLEN1_ARQENABLE_SHIFT)
+#define IAVF_ARQT1 0x00007000 /* Reset: EMPR */
+#define IAVF_ARQT1_ARQT_SHIFT 0
+#define IAVF_ARQT1_ARQT_MASK IAVF_MASK(0x3FF, IAVF_ARQT1_ARQT_SHIFT)
+#define IAVF_ATQBAH1 0x00007800 /* Reset: EMPR */
+#define IAVF_ATQBAH1_ATQBAH_SHIFT 0
+#define IAVF_ATQBAH1_ATQBAH_MASK IAVF_MASK(0xFFFFFFFF, IAVF_ATQBAH1_ATQBAH_SHIFT)
+#define IAVF_ATQBAL1 0x00007C00 /* Reset: EMPR */
+#define IAVF_ATQBAL1_ATQBAL_SHIFT 0
+#define IAVF_ATQBAL1_ATQBAL_MASK IAVF_MASK(0xFFFFFFFF, IAVF_ATQBAL1_ATQBAL_SHIFT)
+#define IAVF_ATQH1 0x00006400 /* Reset: EMPR */
+#define IAVF_ATQH1_ATQH_SHIFT 0
+#define IAVF_ATQH1_ATQH_MASK IAVF_MASK(0x3FF, IAVF_ATQH1_ATQH_SHIFT)
+#define IAVF_ATQLEN1 0x00006800 /* Reset: EMPR */
+#define IAVF_ATQLEN1_ATQLEN_SHIFT 0
+#define IAVF_ATQLEN1_ATQLEN_MASK IAVF_MASK(0x3FF, IAVF_ATQLEN1_ATQLEN_SHIFT)
+#define IAVF_ATQLEN1_ATQVFE_SHIFT 28
+#define IAVF_ATQLEN1_ATQVFE_MASK IAVF_MASK(0x1, IAVF_ATQLEN1_ATQVFE_SHIFT)
+#define IAVF_ATQLEN1_ATQOVFL_SHIFT 29
+#define IAVF_ATQLEN1_ATQOVFL_MASK IAVF_MASK(0x1, IAVF_ATQLEN1_ATQOVFL_SHIFT)
+#define IAVF_ATQLEN1_ATQCRIT_SHIFT 30
+#define IAVF_ATQLEN1_ATQCRIT_MASK IAVF_MASK(0x1, IAVF_ATQLEN1_ATQCRIT_SHIFT)
+#define IAVF_ATQLEN1_ATQENABLE_SHIFT 31
+#define IAVF_ATQLEN1_ATQENABLE_MASK IAVF_MASK(0x1U, IAVF_ATQLEN1_ATQENABLE_SHIFT)
+#define IAVF_ATQT1 0x00008400 /* Reset: EMPR */
+#define IAVF_ATQT1_ATQT_SHIFT 0
+#define IAVF_ATQT1_ATQT_MASK IAVF_MASK(0x3FF, IAVF_ATQT1_ATQT_SHIFT)
+#define IAVFGEN_RSTAT 0x00008800 /* Reset: VFR */
+#define IAVFGEN_RSTAT_VFR_STATE_SHIFT 0
+#define IAVFGEN_RSTAT_VFR_STATE_MASK IAVF_MASK(0x3, IAVFGEN_RSTAT_VFR_STATE_SHIFT)
+#define IAVFINT_DYN_CTL01 0x00005C00 /* Reset: VFR */
+#define IAVFINT_DYN_CTL01_INTENA_SHIFT 0
+#define IAVFINT_DYN_CTL01_INTENA_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTL01_INTENA_SHIFT)
+#define IAVFINT_DYN_CTL01_CLEARPBA_SHIFT 1
+#define IAVFINT_DYN_CTL01_CLEARPBA_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTL01_CLEARPBA_SHIFT)
+#define IAVFINT_DYN_CTL01_SWINT_TRIG_SHIFT 2
+#define IAVFINT_DYN_CTL01_SWINT_TRIG_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTL01_SWINT_TRIG_SHIFT)
+#define IAVFINT_DYN_CTL01_ITR_INDX_SHIFT 3
+#define IAVFINT_DYN_CTL01_ITR_INDX_MASK IAVF_MASK(0x3, IAVFINT_DYN_CTL01_ITR_INDX_SHIFT)
+#define IAVFINT_DYN_CTL01_INTERVAL_SHIFT 5
+#define IAVFINT_DYN_CTL01_INTERVAL_MASK IAVF_MASK(0xFFF, IAVFINT_DYN_CTL01_INTERVAL_SHIFT)
+#define IAVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24
+#define IAVFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT)
+#define IAVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT 25
+#define IAVFINT_DYN_CTL01_SW_ITR_INDX_MASK IAVF_MASK(0x3, IAVFINT_DYN_CTL01_SW_ITR_INDX_SHIFT)
+#define IAVFINT_DYN_CTL01_INTENA_MSK_SHIFT 31
+#define IAVFINT_DYN_CTL01_INTENA_MSK_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTL01_INTENA_MSK_SHIFT)
+#define IAVFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */
+#define IAVFINT_DYN_CTLN1_MAX_INDEX 15
+#define IAVFINT_DYN_CTLN1_INTENA_SHIFT 0
+#define IAVFINT_DYN_CTLN1_INTENA_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTLN1_INTENA_SHIFT)
+#define IAVFINT_DYN_CTLN1_CLEARPBA_SHIFT 1
+#define IAVFINT_DYN_CTLN1_CLEARPBA_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTLN1_CLEARPBA_SHIFT)
+#define IAVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2
+#define IAVFINT_DYN_CTLN1_SWINT_TRIG_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTLN1_SWINT_TRIG_SHIFT)
+#define IAVFINT_DYN_CTLN1_ITR_INDX_SHIFT 3
+#define IAVFINT_DYN_CTLN1_ITR_INDX_MASK IAVF_MASK(0x3, IAVFINT_DYN_CTLN1_ITR_INDX_SHIFT)
+#define IAVFINT_DYN_CTLN1_INTERVAL_SHIFT 5
+#define IAVFINT_DYN_CTLN1_INTERVAL_MASK IAVF_MASK(0xFFF, IAVFINT_DYN_CTLN1_INTERVAL_SHIFT)
+#define IAVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24
+#define IAVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT)
+#define IAVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT 25
+#define IAVFINT_DYN_CTLN1_SW_ITR_INDX_MASK IAVF_MASK(0x3, IAVFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT)
+#define IAVFINT_DYN_CTLN1_INTENA_MSK_SHIFT 31
+#define IAVFINT_DYN_CTLN1_INTENA_MSK_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTLN1_INTENA_MSK_SHIFT)
+#define IAVFINT_ICR0_ENA1 0x00005000 /* Reset: CORER */
+#define IAVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25
+#define IAVFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK IAVF_MASK(0x1, IAVFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT)
+#define IAVFINT_ICR0_ENA1_ADMINQ_SHIFT 30
+#define IAVFINT_ICR0_ENA1_ADMINQ_MASK IAVF_MASK(0x1, IAVFINT_ICR0_ENA1_ADMINQ_SHIFT)
+#define IAVFINT_ICR0_ENA1_RSVD_SHIFT 31
+#define IAVFINT_ICR0_ENA1_RSVD_MASK IAVF_MASK(0x1, IAVFINT_ICR0_ENA1_RSVD_SHIFT)
+#define IAVFINT_ICR01 0x00004800 /* Reset: CORER */
+#define IAVFINT_ICR01_INTEVENT_SHIFT 0
+#define IAVFINT_ICR01_INTEVENT_MASK IAVF_MASK(0x1, IAVFINT_ICR01_INTEVENT_SHIFT)
+#define IAVFINT_ICR01_QUEUE_0_SHIFT 1
+#define IAVFINT_ICR01_QUEUE_0_MASK IAVF_MASK(0x1, IAVFINT_ICR01_QUEUE_0_SHIFT)
+#define IAVFINT_ICR01_QUEUE_1_SHIFT 2
+#define IAVFINT_ICR01_QUEUE_1_MASK IAVF_MASK(0x1, IAVFINT_ICR01_QUEUE_1_SHIFT)
+#define IAVFINT_ICR01_QUEUE_2_SHIFT 3
+#define IAVFINT_ICR01_QUEUE_2_MASK IAVF_MASK(0x1, IAVFINT_ICR01_QUEUE_2_SHIFT)
+#define IAVFINT_ICR01_QUEUE_3_SHIFT 4
+#define IAVFINT_ICR01_QUEUE_3_MASK IAVF_MASK(0x1, IAVFINT_ICR01_QUEUE_3_SHIFT)
+#define IAVFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25
+#define IAVFINT_ICR01_LINK_STAT_CHANGE_MASK IAVF_MASK(0x1, IAVFINT_ICR01_LINK_STAT_CHANGE_SHIFT)
+#define IAVFINT_ICR01_ADMINQ_SHIFT 30
+#define IAVFINT_ICR01_ADMINQ_MASK IAVF_MASK(0x1, IAVFINT_ICR01_ADMINQ_SHIFT)
+#define IAVFINT_ICR01_SWINT_SHIFT 31
+#define IAVFINT_ICR01_SWINT_MASK IAVF_MASK(0x1, IAVFINT_ICR01_SWINT_SHIFT)
+#define IAVFINT_ITR01(_i) (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */
+#define IAVFINT_ITR01_MAX_INDEX 2
+#define IAVFINT_ITR01_INTERVAL_SHIFT 0
+#define IAVFINT_ITR01_INTERVAL_MASK IAVF_MASK(0xFFF, IAVFINT_ITR01_INTERVAL_SHIFT)
+#define IAVFINT_ITRN1(_i, _INTVF) (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */
+#define IAVFINT_ITRN1_MAX_INDEX 2
+#define IAVFINT_ITRN1_INTERVAL_SHIFT 0
+#define IAVFINT_ITRN1_INTERVAL_MASK IAVF_MASK(0xFFF, IAVFINT_ITRN1_INTERVAL_SHIFT)
+#define IAVFINT_STAT_CTL01 0x00005400 /* Reset: CORER */
+#define IAVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2
+#define IAVFINT_STAT_CTL01_OTHER_ITR_INDX_MASK IAVF_MASK(0x3, IAVFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT)
+#define IAVF_QRX_TAIL1(_Q) (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define IAVF_QRX_TAIL1_MAX_INDEX 15
+#define IAVF_QRX_TAIL1_TAIL_SHIFT 0
+#define IAVF_QRX_TAIL1_TAIL_MASK IAVF_MASK(0x1FFF, IAVF_QRX_TAIL1_TAIL_SHIFT)
+#define IAVF_QTX_TAIL1(_Q) (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */
+#define IAVF_QTX_TAIL1_MAX_INDEX 15
+#define IAVF_QTX_TAIL1_TAIL_SHIFT 0
+#define IAVF_QTX_TAIL1_TAIL_MASK IAVF_MASK(0x1FFF, IAVF_QTX_TAIL1_TAIL_SHIFT)
+#define IAVFMSIX_PBA 0x00002000 /* Reset: VFLR */
+#define IAVFMSIX_PBA_PENBIT_SHIFT 0
+#define IAVFMSIX_PBA_PENBIT_MASK IAVF_MASK(0xFFFFFFFF, IAVFMSIX_PBA_PENBIT_SHIFT)
+#define IAVFMSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define IAVFMSIX_TADD_MAX_INDEX 16
+#define IAVFMSIX_TADD_MSIXTADD10_SHIFT 0
+#define IAVFMSIX_TADD_MSIXTADD10_MASK IAVF_MASK(0x3, IAVFMSIX_TADD_MSIXTADD10_SHIFT)
+#define IAVFMSIX_TADD_MSIXTADD_SHIFT 2
+#define IAVFMSIX_TADD_MSIXTADD_MASK IAVF_MASK(0x3FFFFFFF, IAVFMSIX_TADD_MSIXTADD_SHIFT)
+#define IAVFMSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define IAVFMSIX_TMSG_MAX_INDEX 16
+#define IAVFMSIX_TMSG_MSIXTMSG_SHIFT 0
+#define IAVFMSIX_TMSG_MSIXTMSG_MASK IAVF_MASK(0xFFFFFFFF, IAVFMSIX_TMSG_MSIXTMSG_SHIFT)
+#define IAVFMSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define IAVFMSIX_TUADD_MAX_INDEX 16
+#define IAVFMSIX_TUADD_MSIXTUADD_SHIFT 0
+#define IAVFMSIX_TUADD_MSIXTUADD_MASK IAVF_MASK(0xFFFFFFFF, IAVFMSIX_TUADD_MSIXTUADD_SHIFT)
+#define IAVFMSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */
+#define IAVFMSIX_TVCTRL_MAX_INDEX 16
+#define IAVFMSIX_TVCTRL_MASK_SHIFT 0
+#define IAVFMSIX_TVCTRL_MASK_MASK IAVF_MASK(0x1, IAVFMSIX_TVCTRL_MASK_SHIFT)
+#define IAVFCM_PE_ERRDATA 0x0000DC00 /* Reset: VFR */
+#define IAVFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0
+#define IAVFCM_PE_ERRDATA_ERROR_CODE_MASK IAVF_MASK(0xF, IAVFCM_PE_ERRDATA_ERROR_CODE_SHIFT)
+#define IAVFCM_PE_ERRDATA_Q_TYPE_SHIFT 4
+#define IAVFCM_PE_ERRDATA_Q_TYPE_MASK IAVF_MASK(0x7, IAVFCM_PE_ERRDATA_Q_TYPE_SHIFT)
+#define IAVFCM_PE_ERRDATA_Q_NUM_SHIFT 8
+#define IAVFCM_PE_ERRDATA_Q_NUM_MASK IAVF_MASK(0x3FFFF, IAVFCM_PE_ERRDATA_Q_NUM_SHIFT)
+#define IAVFCM_PE_ERRINFO 0x0000D800 /* Reset: VFR */
+#define IAVFCM_PE_ERRINFO_ERROR_VALID_SHIFT 0
+#define IAVFCM_PE_ERRINFO_ERROR_VALID_MASK IAVF_MASK(0x1, IAVFCM_PE_ERRINFO_ERROR_VALID_SHIFT)
+#define IAVFCM_PE_ERRINFO_ERROR_INST_SHIFT 4
+#define IAVFCM_PE_ERRINFO_ERROR_INST_MASK IAVF_MASK(0x7, IAVFCM_PE_ERRINFO_ERROR_INST_SHIFT)
+#define IAVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8
+#define IAVFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK IAVF_MASK(0xFF, IAVFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT)
+#define IAVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16
+#define IAVFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK IAVF_MASK(0xFF, IAVFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT)
+#define IAVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24
+#define IAVFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK IAVF_MASK(0xFF, IAVFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT)
+#define IAVFQF_HENA(_i) (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */
+#define IAVFQF_HENA_MAX_INDEX 1
+#define IAVFQF_HENA_PTYPE_ENA_SHIFT 0
+#define IAVFQF_HENA_PTYPE_ENA_MASK IAVF_MASK(0xFFFFFFFF, IAVFQF_HENA_PTYPE_ENA_SHIFT)
+#define IAVFQF_HKEY(_i) (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
+#define IAVFQF_HKEY_MAX_INDEX 12
+#define IAVFQF_HKEY_KEY_0_SHIFT 0
+#define IAVFQF_HKEY_KEY_0_MASK IAVF_MASK(0xFF, IAVFQF_HKEY_KEY_0_SHIFT)
+#define IAVFQF_HKEY_KEY_1_SHIFT 8
+#define IAVFQF_HKEY_KEY_1_MASK IAVF_MASK(0xFF, IAVFQF_HKEY_KEY_1_SHIFT)
+#define IAVFQF_HKEY_KEY_2_SHIFT 16
+#define IAVFQF_HKEY_KEY_2_MASK IAVF_MASK(0xFF, IAVFQF_HKEY_KEY_2_SHIFT)
+#define IAVFQF_HKEY_KEY_3_SHIFT 24
+#define IAVFQF_HKEY_KEY_3_MASK IAVF_MASK(0xFF, IAVFQF_HKEY_KEY_3_SHIFT)
+#define IAVFQF_HLUT(_i) (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define IAVFQF_HLUT_MAX_INDEX 15
+#define IAVFQF_HLUT_LUT0_SHIFT 0
+#define IAVFQF_HLUT_LUT0_MASK IAVF_MASK(0xF, IAVFQF_HLUT_LUT0_SHIFT)
+#define IAVFQF_HLUT_LUT1_SHIFT 8
+#define IAVFQF_HLUT_LUT1_MASK IAVF_MASK(0xF, IAVFQF_HLUT_LUT1_SHIFT)
+#define IAVFQF_HLUT_LUT2_SHIFT 16
+#define IAVFQF_HLUT_LUT2_MASK IAVF_MASK(0xF, IAVFQF_HLUT_LUT2_SHIFT)
+#define IAVFQF_HLUT_LUT3_SHIFT 24
+#define IAVFQF_HLUT_LUT3_MASK IAVF_MASK(0xF, IAVFQF_HLUT_LUT3_SHIFT)
+#define IAVFQF_HREGION(_i) (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */
+#define IAVFQF_HREGION_MAX_INDEX 7
+#define IAVFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0
+#define IAVFQF_HREGION_OVERRIDE_ENA_0_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_0_SHIFT)
+#define IAVFQF_HREGION_REGION_0_SHIFT 1
+#define IAVFQF_HREGION_REGION_0_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_0_SHIFT)
+#define IAVFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4
+#define IAVFQF_HREGION_OVERRIDE_ENA_1_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_1_SHIFT)
+#define IAVFQF_HREGION_REGION_1_SHIFT 5
+#define IAVFQF_HREGION_REGION_1_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_1_SHIFT)
+#define IAVFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8
+#define IAVFQF_HREGION_OVERRIDE_ENA_2_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_2_SHIFT)
+#define IAVFQF_HREGION_REGION_2_SHIFT 9
+#define IAVFQF_HREGION_REGION_2_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_2_SHIFT)
+#define IAVFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12
+#define IAVFQF_HREGION_OVERRIDE_ENA_3_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_3_SHIFT)
+#define IAVFQF_HREGION_REGION_3_SHIFT 13
+#define IAVFQF_HREGION_REGION_3_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_3_SHIFT)
+#define IAVFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16
+#define IAVFQF_HREGION_OVERRIDE_ENA_4_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_4_SHIFT)
+#define IAVFQF_HREGION_REGION_4_SHIFT 17
+#define IAVFQF_HREGION_REGION_4_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_4_SHIFT)
+#define IAVFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20
+#define IAVFQF_HREGION_OVERRIDE_ENA_5_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_5_SHIFT)
+#define IAVFQF_HREGION_REGION_5_SHIFT 21
+#define IAVFQF_HREGION_REGION_5_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_5_SHIFT)
+#define IAVFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24
+#define IAVFQF_HREGION_OVERRIDE_ENA_6_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_6_SHIFT)
+#define IAVFQF_HREGION_REGION_6_SHIFT 25
+#define IAVFQF_HREGION_REGION_6_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_6_SHIFT)
+#define IAVFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28
+#define IAVFQF_HREGION_OVERRIDE_ENA_7_MASK IAVF_MASK(0x1, IAVFQF_HREGION_OVERRIDE_ENA_7_SHIFT)
+#define IAVFQF_HREGION_REGION_7_SHIFT 29
+#define IAVFQF_HREGION_REGION_7_MASK IAVF_MASK(0x7, IAVFQF_HREGION_REGION_7_SHIFT)
-#define AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT 30
-#define AVFINT_DYN_CTL01_WB_ON_ITR_MASK AVF_MASK(0x1, AVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
-#define AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT 30
-#define AVFINT_DYN_CTLN1_WB_ON_ITR_MASK AVF_MASK(0x1, AVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
-#define AVFPE_AEQALLOC1 0x0000A400 /* Reset: VFR */
-#define AVFPE_AEQALLOC1_AECOUNT_SHIFT 0
-#define AVFPE_AEQALLOC1_AECOUNT_MASK AVF_MASK(0xFFFFFFFF, AVFPE_AEQALLOC1_AECOUNT_SHIFT)
-#define AVFPE_CCQPHIGH1 0x00009800 /* Reset: VFR */
-#define AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
-#define AVFPE_CCQPHIGH1_PECCQPHIGH_MASK AVF_MASK(0xFFFFFFFF, AVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
-#define AVFPE_CCQPLOW1 0x0000AC00 /* Reset: VFR */
-#define AVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
-#define AVFPE_CCQPLOW1_PECCQPLOW_MASK AVF_MASK(0xFFFFFFFF, AVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
-#define AVFPE_CCQPSTATUS1 0x0000B800 /* Reset: VFR */
-#define AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT 0
-#define AVFPE_CCQPSTATUS1_CCQP_DONE_MASK AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
-#define AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
-#define AVFPE_CCQPSTATUS1_HMC_PROFILE_MASK AVF_MASK(0x7, AVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
-#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
-#define AVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK AVF_MASK(0x3F, AVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
-#define AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT 31
-#define AVFPE_CCQPSTATUS1_CCQP_ERR_MASK AVF_MASK(0x1, AVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
-#define AVFPE_CQACK1 0x0000B000 /* Reset: VFR */
-#define AVFPE_CQACK1_PECQID_SHIFT 0
-#define AVFPE_CQACK1_PECQID_MASK AVF_MASK(0x1FFFF, AVFPE_CQACK1_PECQID_SHIFT)
-#define AVFPE_CQARM1 0x0000B400 /* Reset: VFR */
-#define AVFPE_CQARM1_PECQID_SHIFT 0
-#define AVFPE_CQARM1_PECQID_MASK AVF_MASK(0x1FFFF, AVFPE_CQARM1_PECQID_SHIFT)
-#define AVFPE_CQPDB1 0x0000BC00 /* Reset: VFR */
-#define AVFPE_CQPDB1_WQHEAD_SHIFT 0
-#define AVFPE_CQPDB1_WQHEAD_MASK AVF_MASK(0x7FF, AVFPE_CQPDB1_WQHEAD_SHIFT)
-#define AVFPE_CQPERRCODES1 0x00009C00 /* Reset: VFR */
-#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
-#define AVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
-#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
-#define AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK AVF_MASK(0xFFFF, AVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
-#define AVFPE_CQPTAIL1 0x0000A000 /* Reset: VFR */
-#define AVFPE_CQPTAIL1_WQTAIL_SHIFT 0
-#define AVFPE_CQPTAIL1_WQTAIL_MASK AVF_MASK(0x7FF, AVFPE_CQPTAIL1_WQTAIL_SHIFT)
-#define AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
-#define AVFPE_CQPTAIL1_CQP_OP_ERR_MASK AVF_MASK(0x1, AVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
-#define AVFPE_IPCONFIG01 0x00008C00 /* Reset: VFR */
-#define AVFPE_IPCONFIG01_PEIPID_SHIFT 0
-#define AVFPE_IPCONFIG01_PEIPID_MASK AVF_MASK(0xFFFF, AVFPE_IPCONFIG01_PEIPID_SHIFT)
-#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
-#define AVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK AVF_MASK(0x1, AVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
-#define AVFPE_MRTEIDXMASK1 0x00009000 /* Reset: VFR */
-#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
-#define AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK AVF_MASK(0x1F, AVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
-#define AVFPE_RCVUNEXPECTEDERROR1 0x00009400 /* Reset: VFR */
-#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
-#define AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK AVF_MASK(0xFFFFFF, AVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
-#define AVFPE_TCPNOWTIMER1 0x0000A800 /* Reset: VFR */
-#define AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
-#define AVFPE_TCPNOWTIMER1_TCP_NOW_MASK AVF_MASK(0xFFFFFFFF, AVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
-#define AVFPE_WQEALLOC1 0x0000C000 /* Reset: VFR */
-#define AVFPE_WQEALLOC1_PEQPID_SHIFT 0
-#define AVFPE_WQEALLOC1_PEQPID_MASK AVF_MASK(0x3FFFF, AVFPE_WQEALLOC1_PEQPID_SHIFT)
-#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
-#define AVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK AVF_MASK(0xFFF, AVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
+#define IAVFINT_DYN_CTL01_WB_ON_ITR_SHIFT 30
+#define IAVFINT_DYN_CTL01_WB_ON_ITR_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTL01_WB_ON_ITR_SHIFT)
+#define IAVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT 30
+#define IAVFINT_DYN_CTLN1_WB_ON_ITR_MASK IAVF_MASK(0x1, IAVFINT_DYN_CTLN1_WB_ON_ITR_SHIFT)
+#define IAVFPE_AEQALLOC1 0x0000A400 /* Reset: VFR */
+#define IAVFPE_AEQALLOC1_AECOUNT_SHIFT 0
+#define IAVFPE_AEQALLOC1_AECOUNT_MASK IAVF_MASK(0xFFFFFFFF, IAVFPE_AEQALLOC1_AECOUNT_SHIFT)
+#define IAVFPE_CCQPHIGH1 0x00009800 /* Reset: VFR */
+#define IAVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT 0
+#define IAVFPE_CCQPHIGH1_PECCQPHIGH_MASK IAVF_MASK(0xFFFFFFFF, IAVFPE_CCQPHIGH1_PECCQPHIGH_SHIFT)
+#define IAVFPE_CCQPLOW1 0x0000AC00 /* Reset: VFR */
+#define IAVFPE_CCQPLOW1_PECCQPLOW_SHIFT 0
+#define IAVFPE_CCQPLOW1_PECCQPLOW_MASK IAVF_MASK(0xFFFFFFFF, IAVFPE_CCQPLOW1_PECCQPLOW_SHIFT)
+#define IAVFPE_CCQPSTATUS1 0x0000B800 /* Reset: VFR */
+#define IAVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT 0
+#define IAVFPE_CCQPSTATUS1_CCQP_DONE_MASK IAVF_MASK(0x1, IAVFPE_CCQPSTATUS1_CCQP_DONE_SHIFT)
+#define IAVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT 4
+#define IAVFPE_CCQPSTATUS1_HMC_PROFILE_MASK IAVF_MASK(0x7, IAVFPE_CCQPSTATUS1_HMC_PROFILE_SHIFT)
+#define IAVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT 16
+#define IAVFPE_CCQPSTATUS1_RDMA_EN_VFS_MASK IAVF_MASK(0x3F, IAVFPE_CCQPSTATUS1_RDMA_EN_VFS_SHIFT)
+#define IAVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT 31
+#define IAVFPE_CCQPSTATUS1_CCQP_ERR_MASK IAVF_MASK(0x1, IAVFPE_CCQPSTATUS1_CCQP_ERR_SHIFT)
+#define IAVFPE_CQACK1 0x0000B000 /* Reset: VFR */
+#define IAVFPE_CQACK1_PECQID_SHIFT 0
+#define IAVFPE_CQACK1_PECQID_MASK IAVF_MASK(0x1FFFF, IAVFPE_CQACK1_PECQID_SHIFT)
+#define IAVFPE_CQARM1 0x0000B400 /* Reset: VFR */
+#define IAVFPE_CQARM1_PECQID_SHIFT 0
+#define IAVFPE_CQARM1_PECQID_MASK IAVF_MASK(0x1FFFF, IAVFPE_CQARM1_PECQID_SHIFT)
+#define IAVFPE_CQPDB1 0x0000BC00 /* Reset: VFR */
+#define IAVFPE_CQPDB1_WQHEAD_SHIFT 0
+#define IAVFPE_CQPDB1_WQHEAD_MASK IAVF_MASK(0x7FF, IAVFPE_CQPDB1_WQHEAD_SHIFT)
+#define IAVFPE_CQPERRCODES1 0x00009C00 /* Reset: VFR */
+#define IAVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT 0
+#define IAVFPE_CQPERRCODES1_CQP_MINOR_CODE_MASK IAVF_MASK(0xFFFF, IAVFPE_CQPERRCODES1_CQP_MINOR_CODE_SHIFT)
+#define IAVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT 16
+#define IAVFPE_CQPERRCODES1_CQP_MAJOR_CODE_MASK IAVF_MASK(0xFFFF, IAVFPE_CQPERRCODES1_CQP_MAJOR_CODE_SHIFT)
+#define IAVFPE_CQPTAIL1 0x0000A000 /* Reset: VFR */
+#define IAVFPE_CQPTAIL1_WQTAIL_SHIFT 0
+#define IAVFPE_CQPTAIL1_WQTAIL_MASK IAVF_MASK(0x7FF, IAVFPE_CQPTAIL1_WQTAIL_SHIFT)
+#define IAVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT 31
+#define IAVFPE_CQPTAIL1_CQP_OP_ERR_MASK IAVF_MASK(0x1, IAVFPE_CQPTAIL1_CQP_OP_ERR_SHIFT)
+#define IAVFPE_IPCONFIG01 0x00008C00 /* Reset: VFR */
+#define IAVFPE_IPCONFIG01_PEIPID_SHIFT 0
+#define IAVFPE_IPCONFIG01_PEIPID_MASK IAVF_MASK(0xFFFF, IAVFPE_IPCONFIG01_PEIPID_SHIFT)
+#define IAVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT 16
+#define IAVFPE_IPCONFIG01_USEENTIREIDRANGE_MASK IAVF_MASK(0x1, IAVFPE_IPCONFIG01_USEENTIREIDRANGE_SHIFT)
+#define IAVFPE_MRTEIDXMASK1 0x00009000 /* Reset: VFR */
+#define IAVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT 0
+#define IAVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_MASK IAVF_MASK(0x1F, IAVFPE_MRTEIDXMASK1_MRTEIDXMASKBITS_SHIFT)
+#define IAVFPE_RCVUNEXPECTEDERROR1 0x00009400 /* Reset: VFR */
+#define IAVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT 0
+#define IAVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_MASK IAVF_MASK(0xFFFFFF, IAVFPE_RCVUNEXPECTEDERROR1_TCP_RX_UNEXP_ERR_SHIFT)
+#define IAVFPE_TCPNOWTIMER1 0x0000A800 /* Reset: VFR */
+#define IAVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT 0
+#define IAVFPE_TCPNOWTIMER1_TCP_NOW_MASK IAVF_MASK(0xFFFFFFFF, IAVFPE_TCPNOWTIMER1_TCP_NOW_SHIFT)
+#define IAVFPE_WQEALLOC1 0x0000C000 /* Reset: VFR */
+#define IAVFPE_WQEALLOC1_PEQPID_SHIFT 0
+#define IAVFPE_WQEALLOC1_PEQPID_MASK IAVF_MASK(0x3FFFF, IAVFPE_WQEALLOC1_PEQPID_SHIFT)
+#define IAVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT 20
+#define IAVFPE_WQEALLOC1_WQE_DESC_INDEX_MASK IAVF_MASK(0xFFF, IAVFPE_WQEALLOC1_WQE_DESC_INDEX_SHIFT)
-#endif /* _AVF_REGISTER_H_ */
+#endif /* _IAVF_REGISTER_H_ */
diff --git a/drivers/net/iavf/base/iavf_status.h b/drivers/net/iavf/base/iavf_status.h
index e8a673bd1..37454c697 100644
--- a/drivers/net/iavf/base/iavf_status.h
+++ b/drivers/net/iavf/base/iavf_status.h
@@ -31,78 +31,78 @@ POSSIBILITY OF SUCH DAMAGE.
 ***************************************************************************/
-#ifndef _AVF_STATUS_H_
-#define _AVF_STATUS_H_
+#ifndef _IAVF_STATUS_H_
+#define _IAVF_STATUS_H_
 /* Error Codes */
-enum avf_status_code {
- AVF_SUCCESS = 0,
- AVF_ERR_NVM = -1,
- AVF_ERR_NVM_CHECKSUM = -2,
- AVF_ERR_PHY = -3,
- AVF_ERR_CONFIG = -4,
- AVF_ERR_PARAM = -5,
- AVF_ERR_MAC_TYPE = -6,
- AVF_ERR_UNKNOWN_PHY = -7,
- AVF_ERR_LINK_SETUP = -8,
- AVF_ERR_ADAPTER_STOPPED = -9,
- AVF_ERR_INVALID_MAC_ADDR = -10,
- AVF_ERR_DEVICE_NOT_SUPPORTED = -11,
- AVF_ERR_MASTER_REQUESTS_PENDING = -12,
- AVF_ERR_INVALID_LINK_SETTINGS = -13,
- AVF_ERR_AUTONEG_NOT_COMPLETE = -14,
- AVF_ERR_RESET_FAILED = -15,
- AVF_ERR_SWFW_SYNC = -16,
- AVF_ERR_NO_AVAILABLE_VSI = -17,
- AVF_ERR_NO_MEMORY = -18,
- AVF_ERR_BAD_PTR = -19,
- AVF_ERR_RING_FULL = -20,
- AVF_ERR_INVALID_PD_ID = -21,
- AVF_ERR_INVALID_QP_ID = -22,
- AVF_ERR_INVALID_CQ_ID = -23,
- AVF_ERR_INVALID_CEQ_ID = -24,
- AVF_ERR_INVALID_AEQ_ID = -25,
- AVF_ERR_INVALID_SIZE = -26,
- AVF_ERR_INVALID_ARP_INDEX = -27,
- AVF_ERR_INVALID_FPM_FUNC_ID = -28,
- AVF_ERR_QP_INVALID_MSG_SIZE = -29,
- AVF_ERR_QP_TOOMANY_WRS_POSTED = -30,
- AVF_ERR_INVALID_FRAG_COUNT = -31,
- AVF_ERR_QUEUE_EMPTY = -32,
- AVF_ERR_INVALID_ALIGNMENT = -33,
- AVF_ERR_FLUSHED_QUEUE = -34,
- AVF_ERR_INVALID_PUSH_PAGE_INDEX = -35,
- AVF_ERR_INVALID_IMM_DATA_SIZE = -36,
- AVF_ERR_TIMEOUT = -37,
- AVF_ERR_OPCODE_MISMATCH = -38,
- AVF_ERR_CQP_COMPL_ERROR = -39,
- AVF_ERR_INVALID_VF_ID = -40,
- AVF_ERR_INVALID_HMCFN_ID = -41,
- AVF_ERR_BACKING_PAGE_ERROR = -42,
- AVF_ERR_NO_PBLCHUNKS_AVAILABLE = -43,
- AVF_ERR_INVALID_PBLE_INDEX = -44,
- AVF_ERR_INVALID_SD_INDEX = -45,
- AVF_ERR_INVALID_PAGE_DESC_INDEX = -46,
- AVF_ERR_INVALID_SD_TYPE = -47,
- AVF_ERR_MEMCPY_FAILED = -48,
- AVF_ERR_INVALID_HMC_OBJ_INDEX = -49,
- AVF_ERR_INVALID_HMC_OBJ_COUNT = -50,
- AVF_ERR_INVALID_SRQ_ARM_LIMIT = -51,
- AVF_ERR_SRQ_ENABLED = -52,
- AVF_ERR_ADMIN_QUEUE_ERROR = -53,
- AVF_ERR_ADMIN_QUEUE_TIMEOUT = -54,
- AVF_ERR_BUF_TOO_SHORT = -55,
- AVF_ERR_ADMIN_QUEUE_FULL = -56,
- AVF_ERR_ADMIN_QUEUE_NO_WORK = -57,
- AVF_ERR_BAD_IWARP_CQE = -58,
- AVF_ERR_NVM_BLANK_MODE = -59,
- AVF_ERR_NOT_IMPLEMENTED = -60,
- AVF_ERR_PE_DOORBELL_NOT_ENABLED = -61,
- AVF_ERR_DIAG_TEST_FAILED = -62,
- AVF_ERR_NOT_READY = -63,
- AVF_NOT_SUPPORTED = -64,
- AVF_ERR_FIRMWARE_API_VERSION = -65,
- AVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR = -66,
+enum iavf_status_code {
+ IAVF_SUCCESS = 0,
+ IAVF_ERR_NVM = -1,
+ IAVF_ERR_NVM_CHECKSUM = -2,
+ IAVF_ERR_PHY = -3,
+ IAVF_ERR_CONFIG = -4,
+ IAVF_ERR_PARAM = -5,
+ IAVF_ERR_MAC_TYPE = -6,
+ IAVF_ERR_UNKNOWN_PHY = -7,
+ IAVF_ERR_LINK_SETUP = -8,
+ IAVF_ERR_ADAPTER_STOPPED = -9,
+ IAVF_ERR_INVALID_MAC_ADDR = -10,
+ IAVF_ERR_DEVICE_NOT_SUPPORTED = -11,
+ IAVF_ERR_MASTER_REQUESTS_PENDING = -12,
+ IAVF_ERR_INVALID_LINK_SETTINGS = -13,
+ IAVF_ERR_AUTONEG_NOT_COMPLETE = -14,
+ IAVF_ERR_RESET_FAILED = -15,
+ IAVF_ERR_SWFW_SYNC = -16,
+ IAVF_ERR_NO_AVAILABLE_VSI = -17,
+ IAVF_ERR_NO_MEMORY = -18,
+ IAVF_ERR_BAD_PTR = -19,
+ IAVF_ERR_RING_FULL = -20,
+ IAVF_ERR_INVALID_PD_ID = -21,
+ IAVF_ERR_INVALID_QP_ID = -22,
+ IAVF_ERR_INVALID_CQ_ID = -23,
+ IAVF_ERR_INVALID_CEQ_ID = -24,
+ IAVF_ERR_INVALID_AEQ_ID = -25,
+ IAVF_ERR_INVALID_SIZE = -26,
+ IAVF_ERR_INVALID_ARP_INDEX = -27,
+ IAVF_ERR_INVALID_FPM_FUNC_ID = -28,
+ IAVF_ERR_QP_INVALID_MSG_SIZE = -29,
+ IAVF_ERR_QP_TOOMANY_WRS_POSTED = -30,
+ IAVF_ERR_INVALID_FRAG_COUNT = -31,
+ IAVF_ERR_QUEUE_EMPTY = -32,
+ IAVF_ERR_INVALID_ALIGNMENT = -33,
+ IAVF_ERR_FLUSHED_QUEUE = -34,
+ IAVF_ERR_INVALID_PUSH_PAGE_INDEX = -35,
+ IAVF_ERR_INVALID_IMM_DATA_SIZE = -36,
+ IAVF_ERR_TIMEOUT = -37,
+ IAVF_ERR_OPCODE_MISMATCH = -38,
+ IAVF_ERR_CQP_COMPL_ERROR = -39,
+ IAVF_ERR_INVALID_VF_ID = -40,
+ IAVF_ERR_INVALID_HMCFN_ID = -41,
+ IAVF_ERR_BACKING_PAGE_ERROR = -42,
+ IAVF_ERR_NO_PBLCHUNKS_AVAILABLE = -43,
+ IAVF_ERR_INVALID_PBLE_INDEX = -44,
+ IAVF_ERR_INVALID_SD_INDEX = -45,
+ IAVF_ERR_INVALID_PAGE_DESC_INDEX = -46,
+ IAVF_ERR_INVALID_SD_TYPE = -47,
+ IAVF_ERR_MEMCPY_FAILED = -48,
+ IAVF_ERR_INVALID_HMC_OBJ_INDEX = -49,
+ IAVF_ERR_INVALID_HMC_OBJ_COUNT = -50,
+ IAVF_ERR_INVALID_SRQ_ARM_LIMIT = -51,
+ IAVF_ERR_SRQ_ENABLED = -52,
+ IAVF_ERR_ADMIN_QUEUE_ERROR = -53,
+ IAVF_ERR_ADMIN_QUEUE_TIMEOUT = -54,
+ IAVF_ERR_BUF_TOO_SHORT = -55,
+ IAVF_ERR_ADMIN_QUEUE_FULL = -56,
+ IAVF_ERR_ADMIN_QUEUE_NO_WORK = -57,
+ IAVF_ERR_BAD_IWARP_CQE = -58,
+ IAVF_ERR_NVM_BLANK_MODE = -59,
+ IAVF_ERR_NOT_IMPLEMENTED = -60,
+ IAVF_ERR_PE_DOORBELL_NOT_ENABLED = -61,
+ IAVF_ERR_DIAG_TEST_FAILED = -62,
+ IAVF_ERR_NOT_READY = -63,
+ IAVF_NOT_SUPPORTED = -64,
+ IAVF_ERR_FIRMWARE_API_VERSION = -65,
+ IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR = -66,
 };
-#endif /* _AVF_STATUS_H_ */
+#endif /* _IAVF_STATUS_H_ */
diff --git a/drivers/net/iavf/base/iavf_type.h b/drivers/net/iavf/base/iavf_type.h
index 7c590737a..0a602874f 100644
--- a/drivers/net/iavf/base/iavf_type.h
+++ b/drivers/net/iavf/base/iavf_type.h
@@ -31,8 +31,8 @@ POSSIBILITY OF SUCH DAMAGE.
 ***************************************************************************/
-#ifndef _AVF_TYPE_H_
-#define _AVF_TYPE_H_
+#ifndef _IAVF_TYPE_H_
+#define _IAVF_TYPE_H_
 #include "iavf_status.h"
 #include "iavf_osdep.h"
@@ -58,148 +58,148 @@ POSSIBILITY OF SUCH DAMAGE.
 #endif /* BIT_ULL */
 #endif /* LINUX_MACROS */
-#ifndef AVF_MASK
-/* AVF_MASK is a macro used on 32 bit registers */
-#define AVF_MASK(mask, shift) (mask << shift)
+#ifndef IAVF_MASK
+/* IAVF_MASK is a macro used on 32 bit registers */
+#define IAVF_MASK(mask, shift) (mask << shift)
 #endif
-#define AVF_MAX_PF 16
-#define AVF_MAX_PF_VSI 64
-#define AVF_MAX_PF_QP 128
-#define AVF_MAX_VSI_QP 16
-#define AVF_MAX_VF_VSI 3
-#define AVF_MAX_CHAINED_RX_BUFFERS 5
-#define AVF_MAX_PF_UDP_OFFLOAD_PORTS 16
+#define IAVF_MAX_PF 16
+#define IAVF_MAX_PF_VSI 64
+#define IAVF_MAX_PF_QP 128
+#define IAVF_MAX_VSI_QP 16
+#define IAVF_MAX_VF_VSI 3
+#define IAVF_MAX_CHAINED_RX_BUFFERS 5
+#define IAVF_MAX_PF_UDP_OFFLOAD_PORTS 16
 /* something less than 1 minute */
-#define AVF_HEARTBEAT_TIMEOUT (HZ * 50)
+#define IAVF_HEARTBEAT_TIMEOUT (HZ * 50)
 /* Max default timeout in ms, */
-#define AVF_MAX_NVM_TIMEOUT 18000
+#define IAVF_MAX_NVM_TIMEOUT 18000
 /* Max timeout in ms for the phy to respond */
-#define AVF_MAX_PHY_TIMEOUT 500
+#define IAVF_MAX_PHY_TIMEOUT 500
 /* Check whether address is multicast. */
-#define AVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
+#define IAVF_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01))
 /* Check whether an address is broadcast. */
-#define AVF_IS_BROADCAST(address) \
+#define IAVF_IS_BROADCAST(address) \
 ((((u8 *)(address))[0] == ((u8)0xff)) && \
 (((u8 *)(address))[1] == ((u8)0xff)))
 /* Switch from ms to the 1usec global time (this is the GTIME resolution) */
-#define AVF_MS_TO_GTIME(time) ((time) * 1000)
+#define IAVF_MS_TO_GTIME(time) ((time) * 1000)
 /* forward declaration */
-struct avf_hw;
-typedef void (*AVF_ADMINQ_CALLBACK)(struct avf_hw *, struct avf_aq_desc *);
+struct iavf_hw;
+typedef void (*IAVF_ADMINQ_CALLBACK)(struct iavf_hw *, struct iavf_aq_desc *);
 #ifndef ETH_ALEN
 #define ETH_ALEN 6
 #endif
 /* Data type manipulation macros. */
-#define AVF_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
-#define AVF_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF))
+#define IAVF_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define IAVF_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF))
-#define AVF_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF))
-#define AVF_LO_WORD(x) ((u16)((x) & 0xFFFF))
+#define IAVF_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF))
+#define IAVF_LO_WORD(x) ((u16)((x) & 0xFFFF))
-#define AVF_HI_BYTE(x) ((u8)(((x) >> 8) & 0xFF))
-#define AVF_LO_BYTE(x) ((u8)((x) & 0xFF))
+#define IAVF_HI_BYTE(x) ((u8)(((x) >> 8) & 0xFF))
+#define IAVF_LO_BYTE(x) ((u8)((x) & 0xFF))
 /* Number of Transmit Descriptors must be a multiple of 8. */
-#define AVF_REQ_TX_DESCRIPTOR_MULTIPLE 8
+#define IAVF_REQ_TX_DESCRIPTOR_MULTIPLE 8
 /* Number of Receive Descriptors must be a multiple of 32 if
  * the number of descriptors is greater than 32.
  */
-#define AVF_REQ_RX_DESCRIPTOR_MULTIPLE 32
+#define IAVF_REQ_RX_DESCRIPTOR_MULTIPLE 32
-#define AVF_DESC_UNUSED(R) \
+#define IAVF_DESC_UNUSED(R) \
 ((((R)->next_to_clean > (R)->next_to_use) ?
0 : (R)->count) + \ (R)->next_to_clean - (R)->next_to_use - 1) /* bitfields for Tx queue mapping in QTX_CTL */ -#define AVF_QTX_CTL_VF_QUEUE 0x0 -#define AVF_QTX_CTL_VM_QUEUE 0x1 -#define AVF_QTX_CTL_PF_QUEUE 0x2 +#define IAVF_QTX_CTL_VF_QUEUE 0x0 +#define IAVF_QTX_CTL_VM_QUEUE 0x1 +#define IAVF_QTX_CTL_PF_QUEUE 0x2 /* debug masks - set these bits in hw->debug_mask to control output */ -enum avf_debug_mask { - AVF_DEBUG_INIT = 0x00000001, - AVF_DEBUG_RELEASE = 0x00000002, +enum iavf_debug_mask { + IAVF_DEBUG_INIT = 0x00000001, + IAVF_DEBUG_RELEASE = 0x00000002, - AVF_DEBUG_LINK = 0x00000010, - AVF_DEBUG_PHY = 0x00000020, - AVF_DEBUG_HMC = 0x00000040, - AVF_DEBUG_NVM = 0x00000080, - AVF_DEBUG_LAN = 0x00000100, - AVF_DEBUG_FLOW = 0x00000200, - AVF_DEBUG_DCB = 0x00000400, - AVF_DEBUG_DIAG = 0x00000800, - AVF_DEBUG_FD = 0x00001000, - AVF_DEBUG_PACKAGE = 0x00002000, + IAVF_DEBUG_LINK = 0x00000010, + IAVF_DEBUG_PHY = 0x00000020, + IAVF_DEBUG_HMC = 0x00000040, + IAVF_DEBUG_NVM = 0x00000080, + IAVF_DEBUG_LAN = 0x00000100, + IAVF_DEBUG_FLOW = 0x00000200, + IAVF_DEBUG_DCB = 0x00000400, + IAVF_DEBUG_DIAG = 0x00000800, + IAVF_DEBUG_FD = 0x00001000, + IAVF_DEBUG_PACKAGE = 0x00002000, - AVF_DEBUG_AQ_MESSAGE = 0x01000000, - AVF_DEBUG_AQ_DESCRIPTOR = 0x02000000, - AVF_DEBUG_AQ_DESC_BUFFER = 0x04000000, - AVF_DEBUG_AQ_COMMAND = 0x06000000, - AVF_DEBUG_AQ = 0x0F000000, + IAVF_DEBUG_AQ_MESSAGE = 0x01000000, + IAVF_DEBUG_AQ_DESCRIPTOR = 0x02000000, + IAVF_DEBUG_AQ_DESC_BUFFER = 0x04000000, + IAVF_DEBUG_AQ_COMMAND = 0x06000000, + IAVF_DEBUG_AQ = 0x0F000000, - AVF_DEBUG_USER = 0xF0000000, + IAVF_DEBUG_USER = 0xF0000000, - AVF_DEBUG_ALL = 0xFFFFFFFF + IAVF_DEBUG_ALL = 0xFFFFFFFF }; /* PCI Bus Info */ -#define AVF_PCI_LINK_STATUS 0xB2 -#define AVF_PCI_LINK_WIDTH 0x3F0 -#define AVF_PCI_LINK_WIDTH_1 0x10 -#define AVF_PCI_LINK_WIDTH_2 0x20 -#define AVF_PCI_LINK_WIDTH_4 0x40 -#define AVF_PCI_LINK_WIDTH_8 0x80 -#define AVF_PCI_LINK_SPEED 0xF -#define AVF_PCI_LINK_SPEED_2500 0x1 -#define 
AVF_PCI_LINK_SPEED_5000 0x2 -#define AVF_PCI_LINK_SPEED_8000 0x3 - -#define AVF_MDIO_CLAUSE22_STCODE_MASK AVF_MASK(1, \ - AVF_GLGEN_MSCA_STCODE_SHIFT) -#define AVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK AVF_MASK(1, \ - AVF_GLGEN_MSCA_OPCODE_SHIFT) -#define AVF_MDIO_CLAUSE22_OPCODE_READ_MASK AVF_MASK(2, \ - AVF_GLGEN_MSCA_OPCODE_SHIFT) - -#define AVF_MDIO_CLAUSE45_STCODE_MASK AVF_MASK(0, \ - AVF_GLGEN_MSCA_STCODE_SHIFT) -#define AVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK AVF_MASK(0, \ - AVF_GLGEN_MSCA_OPCODE_SHIFT) -#define AVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK AVF_MASK(1, \ - AVF_GLGEN_MSCA_OPCODE_SHIFT) -#define AVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK AVF_MASK(2, \ - AVF_GLGEN_MSCA_OPCODE_SHIFT) -#define AVF_MDIO_CLAUSE45_OPCODE_READ_MASK AVF_MASK(3, \ - AVF_GLGEN_MSCA_OPCODE_SHIFT) - -#define AVF_PHY_COM_REG_PAGE 0x1E -#define AVF_PHY_LED_LINK_MODE_MASK 0xF0 -#define AVF_PHY_LED_MANUAL_ON 0x100 -#define AVF_PHY_LED_PROV_REG_1 0xC430 -#define AVF_PHY_LED_MODE_MASK 0xFFFF -#define AVF_PHY_LED_MODE_ORIG 0x80000000 +#define IAVF_PCI_LINK_STATUS 0xB2 +#define IAVF_PCI_LINK_WIDTH 0x3F0 +#define IAVF_PCI_LINK_WIDTH_1 0x10 +#define IAVF_PCI_LINK_WIDTH_2 0x20 +#define IAVF_PCI_LINK_WIDTH_4 0x40 +#define IAVF_PCI_LINK_WIDTH_8 0x80 +#define IAVF_PCI_LINK_SPEED 0xF +#define IAVF_PCI_LINK_SPEED_2500 0x1 +#define IAVF_PCI_LINK_SPEED_5000 0x2 +#define IAVF_PCI_LINK_SPEED_8000 0x3 + +#define IAVF_MDIO_CLAUSE22_STCODE_MASK IAVF_MASK(1, \ + IAVF_GLGEN_MSCA_STCODE_SHIFT) +#define IAVF_MDIO_CLAUSE22_OPCODE_WRITE_MASK IAVF_MASK(1, \ + IAVF_GLGEN_MSCA_OPCODE_SHIFT) +#define IAVF_MDIO_CLAUSE22_OPCODE_READ_MASK IAVF_MASK(2, \ + IAVF_GLGEN_MSCA_OPCODE_SHIFT) + +#define IAVF_MDIO_CLAUSE45_STCODE_MASK IAVF_MASK(0, \ + IAVF_GLGEN_MSCA_STCODE_SHIFT) +#define IAVF_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK IAVF_MASK(0, \ + IAVF_GLGEN_MSCA_OPCODE_SHIFT) +#define IAVF_MDIO_CLAUSE45_OPCODE_WRITE_MASK IAVF_MASK(1, \ + IAVF_GLGEN_MSCA_OPCODE_SHIFT) +#define IAVF_MDIO_CLAUSE45_OPCODE_READ_INC_ADDR_MASK IAVF_MASK(2, \ 
+ IAVF_GLGEN_MSCA_OPCODE_SHIFT) +#define IAVF_MDIO_CLAUSE45_OPCODE_READ_MASK IAVF_MASK(3, \ + IAVF_GLGEN_MSCA_OPCODE_SHIFT) + +#define IAVF_PHY_COM_REG_PAGE 0x1E +#define IAVF_PHY_LED_LINK_MODE_MASK 0xF0 +#define IAVF_PHY_LED_MANUAL_ON 0x100 +#define IAVF_PHY_LED_PROV_REG_1 0xC430 +#define IAVF_PHY_LED_MODE_MASK 0xFFFF +#define IAVF_PHY_LED_MODE_ORIG 0x80000000 /* Memory types */ -enum avf_memset_type { - AVF_NONDMA_MEM = 0, - AVF_DMA_MEM +enum iavf_memset_type { + IAVF_NONDMA_MEM = 0, + IAVF_DMA_MEM }; /* Memcpy types */ -enum avf_memcpy_type { - AVF_NONDMA_TO_NONDMA = 0, - AVF_NONDMA_TO_DMA, - AVF_DMA_TO_DMA, - AVF_DMA_TO_NONDMA +enum iavf_memcpy_type { + IAVF_NONDMA_TO_NONDMA = 0, + IAVF_NONDMA_TO_DMA, + IAVF_DMA_TO_DMA, + IAVF_DMA_TO_NONDMA }; /* These are structs for managing the hardware information and the operations. @@ -210,64 +210,64 @@ enum avf_memcpy_type { * the Firmware and AdminQ are intended to insulate the driver from most of the * future changes, but these structures will also do part of the job. 
*/ -enum avf_mac_type { - AVF_MAC_UNKNOWN = 0, - AVF_MAC_XL710, - AVF_MAC_VF, - AVF_MAC_X722, - AVF_MAC_X722_VF, - AVF_MAC_GENERIC, -}; - -enum avf_media_type { - AVF_MEDIA_TYPE_UNKNOWN = 0, - AVF_MEDIA_TYPE_FIBER, - AVF_MEDIA_TYPE_BASET, - AVF_MEDIA_TYPE_BACKPLANE, - AVF_MEDIA_TYPE_CX4, - AVF_MEDIA_TYPE_DA, - AVF_MEDIA_TYPE_VIRTUAL -}; - -enum avf_fc_mode { - AVF_FC_NONE = 0, - AVF_FC_RX_PAUSE, - AVF_FC_TX_PAUSE, - AVF_FC_FULL, - AVF_FC_PFC, - AVF_FC_DEFAULT -}; - -enum avf_set_fc_aq_failures { - AVF_SET_FC_AQ_FAIL_NONE = 0, - AVF_SET_FC_AQ_FAIL_GET = 1, - AVF_SET_FC_AQ_FAIL_SET = 2, - AVF_SET_FC_AQ_FAIL_UPDATE = 4, - AVF_SET_FC_AQ_FAIL_SET_UPDATE = 6 -}; - -enum avf_vsi_type { - AVF_VSI_MAIN = 0, - AVF_VSI_VMDQ1 = 1, - AVF_VSI_VMDQ2 = 2, - AVF_VSI_CTRL = 3, - AVF_VSI_FCOE = 4, - AVF_VSI_MIRROR = 5, - AVF_VSI_SRIOV = 6, - AVF_VSI_FDIR = 7, - AVF_VSI_TYPE_UNKNOWN -}; - -enum avf_queue_type { - AVF_QUEUE_TYPE_RX = 0, - AVF_QUEUE_TYPE_TX, - AVF_QUEUE_TYPE_PE_CEQ, - AVF_QUEUE_TYPE_UNKNOWN -}; - -struct avf_link_status { - enum avf_aq_phy_type phy_type; - enum avf_aq_link_speed link_speed; +enum iavf_mac_type { + IAVF_MAC_UNKNOWN = 0, + IAVF_MAC_XL710, + IAVF_MAC_VF, + IAVF_MAC_X722, + IAVF_MAC_X722_VF, + IAVF_MAC_GENERIC, +}; + +enum iavf_media_type { + IAVF_MEDIA_TYPE_UNKNOWN = 0, + IAVF_MEDIA_TYPE_FIBER, + IAVF_MEDIA_TYPE_BASET, + IAVF_MEDIA_TYPE_BACKPLANE, + IAVF_MEDIA_TYPE_CX4, + IAVF_MEDIA_TYPE_DA, + IAVF_MEDIA_TYPE_VIRTUAL +}; + +enum iavf_fc_mode { + IAVF_FC_NONE = 0, + IAVF_FC_RX_PAUSE, + IAVF_FC_TX_PAUSE, + IAVF_FC_FULL, + IAVF_FC_PFC, + IAVF_FC_DEFAULT +}; + +enum iavf_set_fc_aq_failures { + IAVF_SET_FC_AQ_FAIL_NONE = 0, + IAVF_SET_FC_AQ_FAIL_GET = 1, + IAVF_SET_FC_AQ_FAIL_SET = 2, + IAVF_SET_FC_AQ_FAIL_UPDATE = 4, + IAVF_SET_FC_AQ_FAIL_SET_UPDATE = 6 +}; + +enum iavf_vsi_type { + IAVF_VSI_MAIN = 0, + IAVF_VSI_VMDQ1 = 1, + IAVF_VSI_VMDQ2 = 2, + IAVF_VSI_CTRL = 3, + IAVF_VSI_FCOE = 4, + IAVF_VSI_MIRROR = 5, + IAVF_VSI_SRIOV = 6, + IAVF_VSI_FDIR = 7, + 
IAVF_VSI_TYPE_UNKNOWN +}; + +enum iavf_queue_type { + IAVF_QUEUE_TYPE_RX = 0, + IAVF_QUEUE_TYPE_TX, + IAVF_QUEUE_TYPE_PE_CEQ, + IAVF_QUEUE_TYPE_UNKNOWN +}; + +struct iavf_link_status { + enum iavf_aq_phy_type phy_type; + enum iavf_aq_link_speed link_speed; u8 link_info; u8 an_info; u8 req_fec_info; @@ -282,107 +282,107 @@ struct avf_link_status { u8 requested_speeds; u8 module_type[3]; /* 1st byte: module identifier */ -#define AVF_MODULE_TYPE_SFP 0x03 -#define AVF_MODULE_TYPE_QSFP 0x0D +#define IAVF_MODULE_TYPE_SFP 0x03 +#define IAVF_MODULE_TYPE_QSFP 0x0D /* 2nd byte: ethernet compliance codes for 10/40G */ -#define AVF_MODULE_TYPE_40G_ACTIVE 0x01 -#define AVF_MODULE_TYPE_40G_LR4 0x02 -#define AVF_MODULE_TYPE_40G_SR4 0x04 -#define AVF_MODULE_TYPE_40G_CR4 0x08 -#define AVF_MODULE_TYPE_10G_BASE_SR 0x10 -#define AVF_MODULE_TYPE_10G_BASE_LR 0x20 -#define AVF_MODULE_TYPE_10G_BASE_LRM 0x40 -#define AVF_MODULE_TYPE_10G_BASE_ER 0x80 +#define IAVF_MODULE_TYPE_40G_ACTIVE 0x01 +#define IAVF_MODULE_TYPE_40G_LR4 0x02 +#define IAVF_MODULE_TYPE_40G_SR4 0x04 +#define IAVF_MODULE_TYPE_40G_CR4 0x08 +#define IAVF_MODULE_TYPE_10G_BASE_SR 0x10 +#define IAVF_MODULE_TYPE_10G_BASE_LR 0x20 +#define IAVF_MODULE_TYPE_10G_BASE_LRM 0x40 +#define IAVF_MODULE_TYPE_10G_BASE_ER 0x80 /* 3rd byte: ethernet compliance codes for 1G */ -#define AVF_MODULE_TYPE_1000BASE_SX 0x01 -#define AVF_MODULE_TYPE_1000BASE_LX 0x02 -#define AVF_MODULE_TYPE_1000BASE_CX 0x04 -#define AVF_MODULE_TYPE_1000BASE_T 0x08 +#define IAVF_MODULE_TYPE_1000BASE_SX 0x01 +#define IAVF_MODULE_TYPE_1000BASE_LX 0x02 +#define IAVF_MODULE_TYPE_1000BASE_CX 0x04 +#define IAVF_MODULE_TYPE_1000BASE_T 0x08 }; -struct avf_phy_info { - struct avf_link_status link_info; - struct avf_link_status link_info_old; +struct iavf_phy_info { + struct iavf_link_status link_info; + struct iavf_link_status link_info_old; bool get_link_info; - enum avf_media_type media_type; + enum iavf_media_type media_type; /* all the phy types the NVM is capable of */ 
u64 phy_types; }; -#define AVF_CAP_PHY_TYPE_SGMII BIT_ULL(AVF_PHY_TYPE_SGMII) -#define AVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(AVF_PHY_TYPE_1000BASE_KX) -#define AVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(AVF_PHY_TYPE_10GBASE_KX4) -#define AVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(AVF_PHY_TYPE_10GBASE_KR) -#define AVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_KR4) -#define AVF_CAP_PHY_TYPE_XAUI BIT_ULL(AVF_PHY_TYPE_XAUI) -#define AVF_CAP_PHY_TYPE_XFI BIT_ULL(AVF_PHY_TYPE_XFI) -#define AVF_CAP_PHY_TYPE_SFI BIT_ULL(AVF_PHY_TYPE_SFI) -#define AVF_CAP_PHY_TYPE_XLAUI BIT_ULL(AVF_PHY_TYPE_XLAUI) -#define AVF_CAP_PHY_TYPE_XLPPI BIT_ULL(AVF_PHY_TYPE_XLPPI) -#define AVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4_CU) -#define AVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1_CU) -#define AVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(AVF_PHY_TYPE_10GBASE_AOC) -#define AVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(AVF_PHY_TYPE_40GBASE_AOC) -#define AVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(AVF_PHY_TYPE_100BASE_TX) -#define AVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(AVF_PHY_TYPE_1000BASE_T) -#define AVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(AVF_PHY_TYPE_10GBASE_T) -#define AVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(AVF_PHY_TYPE_10GBASE_SR) -#define AVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(AVF_PHY_TYPE_10GBASE_LR) -#define AVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(AVF_PHY_TYPE_10GBASE_SFPP_CU) -#define AVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(AVF_PHY_TYPE_10GBASE_CR1) -#define AVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_CR4) -#define AVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_SR4) -#define AVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(AVF_PHY_TYPE_40GBASE_LR4) -#define AVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(AVF_PHY_TYPE_1000BASE_SX) -#define AVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(AVF_PHY_TYPE_1000BASE_LX) -#define AVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \ - BIT_ULL(AVF_PHY_TYPE_1000BASE_T_OPTICAL) -#define AVF_CAP_PHY_TYPE_20GBASE_KR2 BIT_ULL(AVF_PHY_TYPE_20GBASE_KR2) +#define 
IAVF_CAP_PHY_TYPE_SGMII BIT_ULL(IAVF_PHY_TYPE_SGMII) +#define IAVF_CAP_PHY_TYPE_1000BASE_KX BIT_ULL(IAVF_PHY_TYPE_1000BASE_KX) +#define IAVF_CAP_PHY_TYPE_10GBASE_KX4 BIT_ULL(IAVF_PHY_TYPE_10GBASE_KX4) +#define IAVF_CAP_PHY_TYPE_10GBASE_KR BIT_ULL(IAVF_PHY_TYPE_10GBASE_KR) +#define IAVF_CAP_PHY_TYPE_40GBASE_KR4 BIT_ULL(IAVF_PHY_TYPE_40GBASE_KR4) +#define IAVF_CAP_PHY_TYPE_XAUI BIT_ULL(IAVF_PHY_TYPE_XAUI) +#define IAVF_CAP_PHY_TYPE_XFI BIT_ULL(IAVF_PHY_TYPE_XFI) +#define IAVF_CAP_PHY_TYPE_SFI BIT_ULL(IAVF_PHY_TYPE_SFI) +#define IAVF_CAP_PHY_TYPE_XLAUI BIT_ULL(IAVF_PHY_TYPE_XLAUI) +#define IAVF_CAP_PHY_TYPE_XLPPI BIT_ULL(IAVF_PHY_TYPE_XLPPI) +#define IAVF_CAP_PHY_TYPE_40GBASE_CR4_CU BIT_ULL(IAVF_PHY_TYPE_40GBASE_CR4_CU) +#define IAVF_CAP_PHY_TYPE_10GBASE_CR1_CU BIT_ULL(IAVF_PHY_TYPE_10GBASE_CR1_CU) +#define IAVF_CAP_PHY_TYPE_10GBASE_AOC BIT_ULL(IAVF_PHY_TYPE_10GBASE_AOC) +#define IAVF_CAP_PHY_TYPE_40GBASE_AOC BIT_ULL(IAVF_PHY_TYPE_40GBASE_AOC) +#define IAVF_CAP_PHY_TYPE_100BASE_TX BIT_ULL(IAVF_PHY_TYPE_100BASE_TX) +#define IAVF_CAP_PHY_TYPE_1000BASE_T BIT_ULL(IAVF_PHY_TYPE_1000BASE_T) +#define IAVF_CAP_PHY_TYPE_10GBASE_T BIT_ULL(IAVF_PHY_TYPE_10GBASE_T) +#define IAVF_CAP_PHY_TYPE_10GBASE_SR BIT_ULL(IAVF_PHY_TYPE_10GBASE_SR) +#define IAVF_CAP_PHY_TYPE_10GBASE_LR BIT_ULL(IAVF_PHY_TYPE_10GBASE_LR) +#define IAVF_CAP_PHY_TYPE_10GBASE_SFPP_CU BIT_ULL(IAVF_PHY_TYPE_10GBASE_SFPP_CU) +#define IAVF_CAP_PHY_TYPE_10GBASE_CR1 BIT_ULL(IAVF_PHY_TYPE_10GBASE_CR1) +#define IAVF_CAP_PHY_TYPE_40GBASE_CR4 BIT_ULL(IAVF_PHY_TYPE_40GBASE_CR4) +#define IAVF_CAP_PHY_TYPE_40GBASE_SR4 BIT_ULL(IAVF_PHY_TYPE_40GBASE_SR4) +#define IAVF_CAP_PHY_TYPE_40GBASE_LR4 BIT_ULL(IAVF_PHY_TYPE_40GBASE_LR4) +#define IAVF_CAP_PHY_TYPE_1000BASE_SX BIT_ULL(IAVF_PHY_TYPE_1000BASE_SX) +#define IAVF_CAP_PHY_TYPE_1000BASE_LX BIT_ULL(IAVF_PHY_TYPE_1000BASE_LX) +#define IAVF_CAP_PHY_TYPE_1000BASE_T_OPTICAL \ + BIT_ULL(IAVF_PHY_TYPE_1000BASE_T_OPTICAL) +#define IAVF_CAP_PHY_TYPE_20GBASE_KR2 
BIT_ULL(IAVF_PHY_TYPE_20GBASE_KR2) /* - * Defining the macro AVF_TYPE_OFFSET to implement a bit shift for some - * PHY types. There is an unused bit (31) in the AVF_CAP_PHY_TYPE_* bit - * fields but no corresponding gap in the avf_aq_phy_type enumeration. So, + * Defining the macro IAVF_TYPE_OFFSET to implement a bit shift for some + * PHY types. There is an unused bit (31) in the IAVF_CAP_PHY_TYPE_* bit + * fields but no corresponding gap in the iavf_aq_phy_type enumeration. So, * a shift is needed to adjust for this with values larger than 31. The - * only affected values are AVF_PHY_TYPE_25GBASE_*. + * only affected values are IAVF_PHY_TYPE_25GBASE_*. */ -#define AVF_PHY_TYPE_OFFSET 1 -#define AVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(AVF_PHY_TYPE_25GBASE_KR + \ - AVF_PHY_TYPE_OFFSET) -#define AVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(AVF_PHY_TYPE_25GBASE_CR + \ - AVF_PHY_TYPE_OFFSET) -#define AVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(AVF_PHY_TYPE_25GBASE_SR + \ - AVF_PHY_TYPE_OFFSET) -#define AVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(AVF_PHY_TYPE_25GBASE_LR + \ - AVF_PHY_TYPE_OFFSET) -#define AVF_CAP_PHY_TYPE_25GBASE_AOC BIT_ULL(AVF_PHY_TYPE_25GBASE_AOC + \ - AVF_PHY_TYPE_OFFSET) -#define AVF_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(AVF_PHY_TYPE_25GBASE_ACC + \ - AVF_PHY_TYPE_OFFSET) -#define AVF_HW_CAP_MAX_GPIO 30 -#define AVF_HW_CAP_MDIO_PORT_MODE_MDIO 0 -#define AVF_HW_CAP_MDIO_PORT_MODE_I2C 1 - -enum avf_acpi_programming_method { - AVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0, - AVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1 -}; - -#define AVF_WOL_SUPPORT_MASK 0x1 -#define AVF_ACPI_PROGRAMMING_METHOD_MASK 0x2 -#define AVF_PROXY_SUPPORT_MASK 0x4 +#define IAVF_PHY_TYPE_OFFSET 1 +#define IAVF_CAP_PHY_TYPE_25GBASE_KR BIT_ULL(IAVF_PHY_TYPE_25GBASE_KR + \ + IAVF_PHY_TYPE_OFFSET) +#define IAVF_CAP_PHY_TYPE_25GBASE_CR BIT_ULL(IAVF_PHY_TYPE_25GBASE_CR + \ + IAVF_PHY_TYPE_OFFSET) +#define IAVF_CAP_PHY_TYPE_25GBASE_SR BIT_ULL(IAVF_PHY_TYPE_25GBASE_SR + \ + IAVF_PHY_TYPE_OFFSET) +#define 
IAVF_CAP_PHY_TYPE_25GBASE_LR BIT_ULL(IAVF_PHY_TYPE_25GBASE_LR + \ + IAVF_PHY_TYPE_OFFSET) +#define IAVF_CAP_PHY_TYPE_25GBASE_AOC BIT_ULL(IAVF_PHY_TYPE_25GBASE_AOC + \ + IAVF_PHY_TYPE_OFFSET) +#define IAVF_CAP_PHY_TYPE_25GBASE_ACC BIT_ULL(IAVF_PHY_TYPE_25GBASE_ACC + \ + IAVF_PHY_TYPE_OFFSET) +#define IAVF_HW_CAP_MAX_GPIO 30 +#define IAVF_HW_CAP_MDIO_PORT_MODE_MDIO 0 +#define IAVF_HW_CAP_MDIO_PORT_MODE_I2C 1 + +enum iavf_acpi_programming_method { + IAVF_ACPI_PROGRAMMING_METHOD_HW_FVL = 0, + IAVF_ACPI_PROGRAMMING_METHOD_AQC_FPK = 1 +}; + +#define IAVF_WOL_SUPPORT_MASK 0x1 +#define IAVF_ACPI_PROGRAMMING_METHOD_MASK 0x2 +#define IAVF_PROXY_SUPPORT_MASK 0x4 /* Capabilities of a PF or a VF or the whole device */ -struct avf_hw_capabilities { +struct iavf_hw_capabilities { u32 switch_mode; -#define AVF_NVM_IMAGE_TYPE_EVB 0x0 -#define AVF_NVM_IMAGE_TYPE_CLOUD 0x2 -#define AVF_NVM_IMAGE_TYPE_UDP_CLOUD 0x3 +#define IAVF_NVM_IMAGE_TYPE_EVB 0x0 +#define IAVF_NVM_IMAGE_TYPE_CLOUD 0x2 +#define IAVF_NVM_IMAGE_TYPE_UDP_CLOUD 0x3 u32 management_mode; u32 mng_protocols_over_mctp; -#define AVF_MNG_PROTOCOL_PLDM 0x2 -#define AVF_MNG_PROTOCOL_OEM_COMMANDS 0x4 -#define AVF_MNG_PROTOCOL_NCSI 0x8 +#define IAVF_MNG_PROTOCOL_PLDM 0x2 +#define IAVF_MNG_PROTOCOL_OEM_COMMANDS 0x4 +#define IAVF_MNG_PROTOCOL_NCSI 0x8 u32 npar_enable; u32 os2bmc; u32 valid_functions; @@ -396,18 +396,18 @@ struct avf_hw_capabilities { bool flex10_enable; bool flex10_capable; u32 flex10_mode; -#define AVF_FLEX10_MODE_UNKNOWN 0x0 -#define AVF_FLEX10_MODE_DCC 0x1 -#define AVF_FLEX10_MODE_DCI 0x2 +#define IAVF_FLEX10_MODE_UNKNOWN 0x0 +#define IAVF_FLEX10_MODE_DCC 0x1 +#define IAVF_FLEX10_MODE_DCI 0x2 u32 flex10_status; -#define AVF_FLEX10_STATUS_DCC_ERROR 0x1 -#define AVF_FLEX10_STATUS_VC_MODE 0x2 +#define IAVF_FLEX10_STATUS_DCC_ERROR 0x1 +#define IAVF_FLEX10_STATUS_VC_MODE 0x2 bool sec_rev_disabled; bool update_disabled; -#define AVF_NVM_MGMT_SEC_REV_DISABLED 0x1 -#define AVF_NVM_MGMT_UPDATE_DISABLED 0x2 +#define 
IAVF_NVM_MGMT_SEC_REV_DISABLED 0x1 +#define IAVF_NVM_MGMT_UPDATE_DISABLED 0x2 bool mgmt_cem; bool ieee_1588; @@ -418,8 +418,8 @@ struct avf_hw_capabilities { bool rss; u32 rss_table_size; u32 rss_table_entry_width; - bool led[AVF_HW_CAP_MAX_GPIO]; - bool sdp[AVF_HW_CAP_MAX_GPIO]; + bool led[IAVF_HW_CAP_MAX_GPIO]; + bool sdp[IAVF_HW_CAP_MAX_GPIO]; u32 nvm_image_type; u32 num_flow_director_filters; u32 num_vfs; @@ -439,12 +439,12 @@ struct avf_hw_capabilities { u32 maxtc; u64 wr_csr_prot; bool apm_wol_support; - enum avf_acpi_programming_method acpi_prog_method; + enum iavf_acpi_programming_method acpi_prog_method; bool proxy_support; }; -struct avf_mac_info { - enum avf_mac_type type; +struct iavf_mac_info { + enum iavf_mac_type type; u8 addr[ETH_ALEN]; u8 perm_addr[ETH_ALEN]; u8 san_addr[ETH_ALEN]; @@ -452,16 +452,16 @@ struct avf_mac_info { u16 max_fcoeq; }; -enum avf_aq_resources_ids { - AVF_NVM_RESOURCE_ID = 1 +enum iavf_aq_resources_ids { + IAVF_NVM_RESOURCE_ID = 1 }; -enum avf_aq_resource_access_type { - AVF_RESOURCE_READ = 1, - AVF_RESOURCE_WRITE +enum iavf_aq_resource_access_type { + IAVF_RESOURCE_READ = 1, + IAVF_RESOURCE_WRITE }; -struct avf_nvm_info { +struct iavf_nvm_info { u64 hw_semaphore_timeout; /* usec global time (GTIME resolution) */ u32 timeout; /* [ms] */ u16 sr_size; /* Shadow RAM size in words */ @@ -473,66 +473,66 @@ struct avf_nvm_info { /* definitions used in NVM update support */ -enum avf_nvmupd_cmd { - AVF_NVMUPD_INVALID, - AVF_NVMUPD_READ_CON, - AVF_NVMUPD_READ_SNT, - AVF_NVMUPD_READ_LCB, - AVF_NVMUPD_READ_SA, - AVF_NVMUPD_WRITE_ERA, - AVF_NVMUPD_WRITE_CON, - AVF_NVMUPD_WRITE_SNT, - AVF_NVMUPD_WRITE_LCB, - AVF_NVMUPD_WRITE_SA, - AVF_NVMUPD_CSUM_CON, - AVF_NVMUPD_CSUM_SA, - AVF_NVMUPD_CSUM_LCB, - AVF_NVMUPD_STATUS, - AVF_NVMUPD_EXEC_AQ, - AVF_NVMUPD_GET_AQ_RESULT, - AVF_NVMUPD_GET_AQ_EVENT, -}; - -enum avf_nvmupd_state { - AVF_NVMUPD_STATE_INIT, - AVF_NVMUPD_STATE_READING, - AVF_NVMUPD_STATE_WRITING, - AVF_NVMUPD_STATE_INIT_WAIT, - 
AVF_NVMUPD_STATE_WRITE_WAIT, - AVF_NVMUPD_STATE_ERROR +enum iavf_nvmupd_cmd { + IAVF_NVMUPD_INVALID, + IAVF_NVMUPD_READ_CON, + IAVF_NVMUPD_READ_SNT, + IAVF_NVMUPD_READ_LCB, + IAVF_NVMUPD_READ_SA, + IAVF_NVMUPD_WRITE_ERA, + IAVF_NVMUPD_WRITE_CON, + IAVF_NVMUPD_WRITE_SNT, + IAVF_NVMUPD_WRITE_LCB, + IAVF_NVMUPD_WRITE_SA, + IAVF_NVMUPD_CSUM_CON, + IAVF_NVMUPD_CSUM_SA, + IAVF_NVMUPD_CSUM_LCB, + IAVF_NVMUPD_STATUS, + IAVF_NVMUPD_EXEC_AQ, + IAVF_NVMUPD_GET_AQ_RESULT, + IAVF_NVMUPD_GET_AQ_EVENT, +}; + +enum iavf_nvmupd_state { + IAVF_NVMUPD_STATE_INIT, + IAVF_NVMUPD_STATE_READING, + IAVF_NVMUPD_STATE_WRITING, + IAVF_NVMUPD_STATE_INIT_WAIT, + IAVF_NVMUPD_STATE_WRITE_WAIT, + IAVF_NVMUPD_STATE_ERROR }; /* nvm_access definition and its masks/shifts need to be accessible to * application, core driver, and shared code. Where is the right file? */ -#define AVF_NVM_READ 0xB -#define AVF_NVM_WRITE 0xC - -#define AVF_NVM_MOD_PNT_MASK 0xFF - -#define AVF_NVM_TRANS_SHIFT 8 -#define AVF_NVM_TRANS_MASK (0xf << AVF_NVM_TRANS_SHIFT) -#define AVF_NVM_PRESERVATION_FLAGS_SHIFT 12 -#define AVF_NVM_PRESERVATION_FLAGS_MASK \ - (0x3 << AVF_NVM_PRESERVATION_FLAGS_SHIFT) -#define AVF_NVM_PRESERVATION_FLAGS_SELECTED 0x01 -#define AVF_NVM_PRESERVATION_FLAGS_ALL 0x02 -#define AVF_NVM_CON 0x0 -#define AVF_NVM_SNT 0x1 -#define AVF_NVM_LCB 0x2 -#define AVF_NVM_SA (AVF_NVM_SNT | AVF_NVM_LCB) -#define AVF_NVM_ERA 0x4 -#define AVF_NVM_CSUM 0x8 -#define AVF_NVM_AQE 0xe -#define AVF_NVM_EXEC 0xf - -#define AVF_NVM_ADAPT_SHIFT 16 -#define AVF_NVM_ADAPT_MASK (0xffffULL << AVF_NVM_ADAPT_SHIFT) - -#define AVF_NVMUPD_MAX_DATA 4096 -#define AVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */ - -struct avf_nvm_access { +#define IAVF_NVM_READ 0xB +#define IAVF_NVM_WRITE 0xC + +#define IAVF_NVM_MOD_PNT_MASK 0xFF + +#define IAVF_NVM_TRANS_SHIFT 8 +#define IAVF_NVM_TRANS_MASK (0xf << IAVF_NVM_TRANS_SHIFT) +#define IAVF_NVM_PRESERVATION_FLAGS_SHIFT 12 +#define IAVF_NVM_PRESERVATION_FLAGS_MASK \ + (0x3 << 
IAVF_NVM_PRESERVATION_FLAGS_SHIFT) +#define IAVF_NVM_PRESERVATION_FLAGS_SELECTED 0x01 +#define IAVF_NVM_PRESERVATION_FLAGS_ALL 0x02 +#define IAVF_NVM_CON 0x0 +#define IAVF_NVM_SNT 0x1 +#define IAVF_NVM_LCB 0x2 +#define IAVF_NVM_SA (IAVF_NVM_SNT | IAVF_NVM_LCB) +#define IAVF_NVM_ERA 0x4 +#define IAVF_NVM_CSUM 0x8 +#define IAVF_NVM_AQE 0xe +#define IAVF_NVM_EXEC 0xf + +#define IAVF_NVM_ADAPT_SHIFT 16 +#define IAVF_NVM_ADAPT_MASK (0xffffULL << IAVF_NVM_ADAPT_SHIFT) + +#define IAVF_NVMUPD_MAX_DATA 4096 +#define IAVF_NVMUPD_IFACE_TIMEOUT 2 /* seconds */ + +struct iavf_nvm_access { u32 command; u32 config; u32 offset; /* in bytes */ @@ -541,58 +541,58 @@ struct avf_nvm_access { }; /* (Q)SFP module access definitions */ -#define AVF_I2C_EEPROM_DEV_ADDR 0xA0 -#define AVF_I2C_EEPROM_DEV_ADDR2 0xA2 -#define AVF_MODULE_TYPE_ADDR 0x00 -#define AVF_MODULE_REVISION_ADDR 0x01 -#define AVF_MODULE_SFF_8472_COMP 0x5E -#define AVF_MODULE_SFF_8472_SWAP 0x5C -#define AVF_MODULE_SFF_ADDR_MODE 0x04 -#define AVF_MODULE_SFF_DIAG_CAPAB 0x40 -#define AVF_MODULE_TYPE_QSFP_PLUS 0x0D -#define AVF_MODULE_TYPE_QSFP28 0x11 -#define AVF_MODULE_QSFP_MAX_LEN 640 +#define IAVF_I2C_EEPROM_DEV_ADDR 0xA0 +#define IAVF_I2C_EEPROM_DEV_ADDR2 0xA2 +#define IAVF_MODULE_TYPE_ADDR 0x00 +#define IAVF_MODULE_REVISION_ADDR 0x01 +#define IAVF_MODULE_SFF_8472_COMP 0x5E +#define IAVF_MODULE_SFF_8472_SWAP 0x5C +#define IAVF_MODULE_SFF_ADDR_MODE 0x04 +#define IAVF_MODULE_SFF_DIAG_CAPAB 0x40 +#define IAVF_MODULE_TYPE_QSFP_PLUS 0x0D +#define IAVF_MODULE_TYPE_QSFP28 0x11 +#define IAVF_MODULE_QSFP_MAX_LEN 640 /* PCI bus types */ -enum avf_bus_type { - avf_bus_type_unknown = 0, - avf_bus_type_pci, - avf_bus_type_pcix, - avf_bus_type_pci_express, - avf_bus_type_reserved +enum iavf_bus_type { + iavf_bus_type_unknown = 0, + iavf_bus_type_pci, + iavf_bus_type_pcix, + iavf_bus_type_pci_express, + iavf_bus_type_reserved }; /* PCI bus speeds */ -enum avf_bus_speed { - avf_bus_speed_unknown = 0, - avf_bus_speed_33 = 33, - 
avf_bus_speed_66 = 66, - avf_bus_speed_100 = 100, - avf_bus_speed_120 = 120, - avf_bus_speed_133 = 133, - avf_bus_speed_2500 = 2500, - avf_bus_speed_5000 = 5000, - avf_bus_speed_8000 = 8000, - avf_bus_speed_reserved +enum iavf_bus_speed { + iavf_bus_speed_unknown = 0, + iavf_bus_speed_33 = 33, + iavf_bus_speed_66 = 66, + iavf_bus_speed_100 = 100, + iavf_bus_speed_120 = 120, + iavf_bus_speed_133 = 133, + iavf_bus_speed_2500 = 2500, + iavf_bus_speed_5000 = 5000, + iavf_bus_speed_8000 = 8000, + iavf_bus_speed_reserved }; /* PCI bus widths */ -enum avf_bus_width { - avf_bus_width_unknown = 0, - avf_bus_width_pcie_x1 = 1, - avf_bus_width_pcie_x2 = 2, - avf_bus_width_pcie_x4 = 4, - avf_bus_width_pcie_x8 = 8, - avf_bus_width_32 = 32, - avf_bus_width_64 = 64, - avf_bus_width_reserved +enum iavf_bus_width { + iavf_bus_width_unknown = 0, + iavf_bus_width_pcie_x1 = 1, + iavf_bus_width_pcie_x2 = 2, + iavf_bus_width_pcie_x4 = 4, + iavf_bus_width_pcie_x8 = 8, + iavf_bus_width_32 = 32, + iavf_bus_width_64 = 64, + iavf_bus_width_reserved }; /* Bus parameters */ -struct avf_bus_info { - enum avf_bus_speed speed; - enum avf_bus_width width; - enum avf_bus_type type; +struct iavf_bus_info { + enum iavf_bus_speed speed; + enum iavf_bus_width width; + enum iavf_bus_type type; u16 func; u16 device; @@ -601,39 +601,39 @@ struct avf_bus_info { }; /* Flow control (FC) parameters */ -struct avf_fc_info { - enum avf_fc_mode current_mode; /* FC mode in effect */ - enum avf_fc_mode requested_mode; /* FC mode requested by caller */ -}; - -#define AVF_MAX_TRAFFIC_CLASS 8 -#define AVF_MAX_USER_PRIORITY 8 -#define AVF_DCBX_MAX_APPS 32 -#define AVF_LLDPDU_SIZE 1500 -#define AVF_TLV_STATUS_OPER 0x1 -#define AVF_TLV_STATUS_SYNC 0x2 -#define AVF_TLV_STATUS_ERR 0x4 -#define AVF_CEE_OPER_MAX_APPS 3 -#define AVF_APP_PROTOID_FCOE 0x8906 -#define AVF_APP_PROTOID_ISCSI 0x0cbc -#define AVF_APP_PROTOID_FIP 0x8914 -#define AVF_APP_SEL_ETHTYPE 0x1 -#define AVF_APP_SEL_TCPIP 0x2 -#define AVF_CEE_APP_SEL_ETHTYPE 
0x0 -#define AVF_CEE_APP_SEL_TCPIP 0x1 +struct iavf_fc_info { + enum iavf_fc_mode current_mode; /* FC mode in effect */ + enum iavf_fc_mode requested_mode; /* FC mode requested by caller */ +}; + +#define IAVF_MAX_TRAFFIC_CLASS 8 +#define IAVF_MAX_USER_PRIORITY 8 +#define IAVF_DCBX_MAX_APPS 32 +#define IAVF_LLDPDU_SIZE 1500 +#define IAVF_TLV_STATUS_OPER 0x1 +#define IAVF_TLV_STATUS_SYNC 0x2 +#define IAVF_TLV_STATUS_ERR 0x4 +#define IAVF_CEE_OPER_MAX_APPS 3 +#define IAVF_APP_PROTOID_FCOE 0x8906 +#define IAVF_APP_PROTOID_ISCSI 0x0cbc +#define IAVF_APP_PROTOID_FIP 0x8914 +#define IAVF_APP_SEL_ETHTYPE 0x1 +#define IAVF_APP_SEL_TCPIP 0x2 +#define IAVF_CEE_APP_SEL_ETHTYPE 0x0 +#define IAVF_CEE_APP_SEL_TCPIP 0x1 /* CEE or IEEE 802.1Qaz ETS Configuration data */ -struct avf_dcb_ets_config { +struct iavf_dcb_ets_config { u8 willing; u8 cbs; u8 maxtcs; - u8 prioritytable[AVF_MAX_TRAFFIC_CLASS]; - u8 tcbwtable[AVF_MAX_TRAFFIC_CLASS]; - u8 tsatable[AVF_MAX_TRAFFIC_CLASS]; + u8 prioritytable[IAVF_MAX_TRAFFIC_CLASS]; + u8 tcbwtable[IAVF_MAX_TRAFFIC_CLASS]; + u8 tsatable[IAVF_MAX_TRAFFIC_CLASS]; }; /* CEE or IEEE 802.1Qaz PFC Configuration data */ -struct avf_dcb_pfc_config { +struct iavf_dcb_pfc_config { u8 willing; u8 mbc; u8 pfccap; @@ -641,37 +641,37 @@ struct avf_dcb_pfc_config { }; /* CEE or IEEE 802.1Qaz Application Priority data */ -struct avf_dcb_app_priority_table { +struct iavf_dcb_app_priority_table { u8 priority; u8 selector; u16 protocolid; }; -struct avf_dcbx_config { +struct iavf_dcbx_config { u8 dcbx_mode; -#define AVF_DCBX_MODE_CEE 0x1 -#define AVF_DCBX_MODE_IEEE 0x2 +#define IAVF_DCBX_MODE_CEE 0x1 +#define IAVF_DCBX_MODE_IEEE 0x2 u8 app_mode; -#define AVF_DCBX_APPS_NON_WILLING 0x1 +#define IAVF_DCBX_APPS_NON_WILLING 0x1 u32 numapps; u32 tlv_status; /* CEE mode TLV status */ - struct avf_dcb_ets_config etscfg; - struct avf_dcb_ets_config etsrec; - struct avf_dcb_pfc_config pfc; - struct avf_dcb_app_priority_table app[AVF_DCBX_MAX_APPS]; + struct 
iavf_dcb_ets_config etscfg; + struct iavf_dcb_ets_config etsrec; + struct iavf_dcb_pfc_config pfc; + struct iavf_dcb_app_priority_table app[IAVF_DCBX_MAX_APPS]; }; /* Port hardware description */ -struct avf_hw { +struct iavf_hw { u8 *hw_addr; void *back; /* subsystem structs */ - struct avf_phy_info phy; - struct avf_mac_info mac; - struct avf_bus_info bus; - struct avf_nvm_info nvm; - struct avf_fc_info fc; + struct iavf_phy_info phy; + struct iavf_mac_info mac; + struct iavf_bus_info bus; + struct iavf_nvm_info nvm; + struct iavf_fc_info fc; /* pci info */ u16 device_id; @@ -683,8 +683,8 @@ struct avf_hw { bool adapter_stopped; /* capabilities for entire device and PCI func */ - struct avf_hw_capabilities dev_caps; - struct avf_hw_capabilities func_caps; + struct iavf_hw_capabilities dev_caps; + struct iavf_hw_capabilities func_caps; /* Flow Director shared filter space */ u16 fdir_shared_filter_count; @@ -702,35 +702,35 @@ struct avf_hw { u16 numa_node; /* Admin Queue info */ - struct avf_adminq_info aq; + struct iavf_adminq_info aq; /* state of nvm update process */ - enum avf_nvmupd_state nvmupd_state; - struct avf_aq_desc nvm_wb_desc; - struct avf_aq_desc nvm_aq_event_desc; - struct avf_virt_mem nvm_buff; + enum iavf_nvmupd_state nvmupd_state; + struct iavf_aq_desc nvm_wb_desc; + struct iavf_aq_desc nvm_aq_event_desc; + struct iavf_virt_mem nvm_buff; bool nvm_release_on_done; u16 nvm_wait_opcode; /* HMC info */ - struct avf_hmc_info hmc; /* HMC info struct */ + struct iavf_hmc_info hmc; /* HMC info struct */ /* LLDP/DCBX Status */ u16 dcbx_status; /* DCBX info */ - struct avf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */ - struct avf_dcbx_config remote_dcbx_config; /* Peer Cfg */ - struct avf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */ + struct iavf_dcbx_config local_dcbx_config; /* Oper/Local Cfg */ + struct iavf_dcbx_config remote_dcbx_config; /* Peer Cfg */ + struct iavf_dcbx_config desired_dcbx_config; /* CEE Desired Cfg */ /* WoL and 
proxy support */ u16 num_wol_proxy_filters; u16 wol_proxy_vsi_seid; -#define AVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0) -#define AVF_HW_FLAG_802_1AD_CAPABLE BIT_ULL(1) -#define AVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE BIT_ULL(2) -#define AVF_HW_FLAG_NVM_READ_REQUIRES_LOCK BIT_ULL(3) +#define IAVF_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE BIT_ULL(0) +#define IAVF_HW_FLAG_802_1AD_CAPABLE BIT_ULL(1) +#define IAVF_HW_FLAG_AQ_PHY_ACCESS_CAPABLE BIT_ULL(2) +#define IAVF_HW_FLAG_NVM_READ_REQUIRES_LOCK BIT_ULL(3) u64 flags; /* Used in set switch config AQ command */ @@ -743,13 +743,13 @@ struct avf_hw { char err_str[16]; }; -STATIC INLINE bool avf_is_vf(struct avf_hw *hw) +STATIC INLINE bool iavf_is_vf(struct iavf_hw *hw) { - return (hw->mac.type == AVF_MAC_VF || - hw->mac.type == AVF_MAC_X722_VF); + return (hw->mac.type == IAVF_MAC_VF || + hw->mac.type == IAVF_MAC_X722_VF); } -struct avf_driver_version { +struct iavf_driver_version { u8 major_version; u8 minor_version; u8 build_version; @@ -758,7 +758,7 @@ struct avf_driver_version { }; /* RX Descriptors */ -union avf_16byte_rx_desc { +union iavf_16byte_rx_desc { struct { __le64 pkt_addr; /* Packet buffer address */ __le64 hdr_addr; /* Header buffer address */ @@ -785,7 +785,7 @@ union avf_16byte_rx_desc { } wb; /* writeback */ }; -union avf_32byte_rx_desc { +union iavf_32byte_rx_desc { struct { __le64 pkt_addr; /* Packet buffer address */ __le64 hdr_addr; /* Header buffer address */ @@ -834,119 +834,119 @@ union avf_32byte_rx_desc { } wb; /* writeback */ }; -#define AVF_RXD_QW0_MIRROR_STATUS_SHIFT 8 -#define AVF_RXD_QW0_MIRROR_STATUS_MASK (0x3FUL << \ - AVF_RXD_QW0_MIRROR_STATUS_SHIFT) -#define AVF_RXD_QW0_FCOEINDX_SHIFT 0 -#define AVF_RXD_QW0_FCOEINDX_MASK (0xFFFUL << \ - AVF_RXD_QW0_FCOEINDX_SHIFT) +#define IAVF_RXD_QW0_MIRROR_STATUS_SHIFT 8 +#define IAVF_RXD_QW0_MIRROR_STATUS_MASK (0x3FUL << \ + IAVF_RXD_QW0_MIRROR_STATUS_SHIFT) +#define IAVF_RXD_QW0_FCOEINDX_SHIFT 0 +#define IAVF_RXD_QW0_FCOEINDX_MASK (0xFFFUL << \ + 
IAVF_RXD_QW0_FCOEINDX_SHIFT) -enum avf_rx_desc_status_bits { +enum iavf_rx_desc_status_bits { /* Note: These are predefined bit offsets */ - AVF_RX_DESC_STATUS_DD_SHIFT = 0, - AVF_RX_DESC_STATUS_EOF_SHIFT = 1, - AVF_RX_DESC_STATUS_L2TAG1P_SHIFT = 2, - AVF_RX_DESC_STATUS_L3L4P_SHIFT = 3, - AVF_RX_DESC_STATUS_CRCP_SHIFT = 4, - AVF_RX_DESC_STATUS_TSYNINDX_SHIFT = 5, /* 2 BITS */ - AVF_RX_DESC_STATUS_TSYNVALID_SHIFT = 7, - AVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 8, - - AVF_RX_DESC_STATUS_UMBCAST_SHIFT = 9, /* 2 BITS */ - AVF_RX_DESC_STATUS_FLM_SHIFT = 11, - AVF_RX_DESC_STATUS_FLTSTAT_SHIFT = 12, /* 2 BITS */ - AVF_RX_DESC_STATUS_LPBK_SHIFT = 14, - AVF_RX_DESC_STATUS_IPV6EXADD_SHIFT = 15, - AVF_RX_DESC_STATUS_RESERVED2_SHIFT = 16, /* 2 BITS */ - AVF_RX_DESC_STATUS_INT_UDP_0_SHIFT = 18, - AVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */ -}; - -#define AVF_RXD_QW1_STATUS_SHIFT 0 -#define AVF_RXD_QW1_STATUS_MASK ((BIT(AVF_RX_DESC_STATUS_LAST) - 1) << \ - AVF_RXD_QW1_STATUS_SHIFT) - -#define AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT AVF_RX_DESC_STATUS_TSYNINDX_SHIFT -#define AVF_RXD_QW1_STATUS_TSYNINDX_MASK (0x3UL << \ - AVF_RXD_QW1_STATUS_TSYNINDX_SHIFT) - -#define AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT AVF_RX_DESC_STATUS_TSYNVALID_SHIFT -#define AVF_RXD_QW1_STATUS_TSYNVALID_MASK BIT_ULL(AVF_RXD_QW1_STATUS_TSYNVALID_SHIFT) - -#define AVF_RXD_QW1_STATUS_UMBCAST_SHIFT AVF_RX_DESC_STATUS_UMBCAST -#define AVF_RXD_QW1_STATUS_UMBCAST_MASK (0x3UL << \ - AVF_RXD_QW1_STATUS_UMBCAST_SHIFT) - -enum avf_rx_desc_fltstat_values { - AVF_RX_DESC_FLTSTAT_NO_DATA = 0, - AVF_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? 
FD_ID : RSV */ - AVF_RX_DESC_FLTSTAT_RSV = 2, - AVF_RX_DESC_FLTSTAT_RSS_HASH = 3, -}; - -#define AVF_RXD_PACKET_TYPE_UNICAST 0 -#define AVF_RXD_PACKET_TYPE_MULTICAST 1 -#define AVF_RXD_PACKET_TYPE_BROADCAST 2 -#define AVF_RXD_PACKET_TYPE_MIRRORED 3 - -#define AVF_RXD_QW1_ERROR_SHIFT 19 -#define AVF_RXD_QW1_ERROR_MASK (0xFFUL << AVF_RXD_QW1_ERROR_SHIFT) - -enum avf_rx_desc_error_bits { + IAVF_RX_DESC_STATUS_DD_SHIFT = 0, + IAVF_RX_DESC_STATUS_EOF_SHIFT = 1, + IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT = 2, + IAVF_RX_DESC_STATUS_L3L4P_SHIFT = 3, + IAVF_RX_DESC_STATUS_CRCP_SHIFT = 4, + IAVF_RX_DESC_STATUS_TSYNINDX_SHIFT = 5, /* 2 BITS */ + IAVF_RX_DESC_STATUS_TSYNVALID_SHIFT = 7, + IAVF_RX_DESC_STATUS_EXT_UDP_0_SHIFT = 8, + + IAVF_RX_DESC_STATUS_UMBCAST_SHIFT = 9, /* 2 BITS */ + IAVF_RX_DESC_STATUS_FLM_SHIFT = 11, + IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT = 12, /* 2 BITS */ + IAVF_RX_DESC_STATUS_LPBK_SHIFT = 14, + IAVF_RX_DESC_STATUS_IPV6EXADD_SHIFT = 15, + IAVF_RX_DESC_STATUS_RESERVED2_SHIFT = 16, /* 2 BITS */ + IAVF_RX_DESC_STATUS_INT_UDP_0_SHIFT = 18, + IAVF_RX_DESC_STATUS_LAST /* this entry must be last!!! */ +}; + +#define IAVF_RXD_QW1_STATUS_SHIFT 0 +#define IAVF_RXD_QW1_STATUS_MASK ((BIT(IAVF_RX_DESC_STATUS_LAST) - 1) << \ + IAVF_RXD_QW1_STATUS_SHIFT) + +#define IAVF_RXD_QW1_STATUS_TSYNINDX_SHIFT IAVF_RX_DESC_STATUS_TSYNINDX_SHIFT +#define IAVF_RXD_QW1_STATUS_TSYNINDX_MASK (0x3UL << \ + IAVF_RXD_QW1_STATUS_TSYNINDX_SHIFT) + +#define IAVF_RXD_QW1_STATUS_TSYNVALID_SHIFT IAVF_RX_DESC_STATUS_TSYNVALID_SHIFT +#define IAVF_RXD_QW1_STATUS_TSYNVALID_MASK BIT_ULL(IAVF_RXD_QW1_STATUS_TSYNVALID_SHIFT) + +#define IAVF_RXD_QW1_STATUS_UMBCAST_SHIFT IAVF_RX_DESC_STATUS_UMBCAST +#define IAVF_RXD_QW1_STATUS_UMBCAST_MASK (0x3UL << \ + IAVF_RXD_QW1_STATUS_UMBCAST_SHIFT) + +enum iavf_rx_desc_fltstat_values { + IAVF_RX_DESC_FLTSTAT_NO_DATA = 0, + IAVF_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? 
FD_ID : RSV */ + IAVF_RX_DESC_FLTSTAT_RSV = 2, + IAVF_RX_DESC_FLTSTAT_RSS_HASH = 3, +}; + +#define IAVF_RXD_PACKET_TYPE_UNICAST 0 +#define IAVF_RXD_PACKET_TYPE_MULTICAST 1 +#define IAVF_RXD_PACKET_TYPE_BROADCAST 2 +#define IAVF_RXD_PACKET_TYPE_MIRRORED 3 + +#define IAVF_RXD_QW1_ERROR_SHIFT 19 +#define IAVF_RXD_QW1_ERROR_MASK (0xFFUL << IAVF_RXD_QW1_ERROR_SHIFT) + +enum iavf_rx_desc_error_bits { /* Note: These are predefined bit offsets */ - AVF_RX_DESC_ERROR_RXE_SHIFT = 0, - AVF_RX_DESC_ERROR_RECIPE_SHIFT = 1, - AVF_RX_DESC_ERROR_HBO_SHIFT = 2, - AVF_RX_DESC_ERROR_L3L4E_SHIFT = 3, /* 3 BITS */ - AVF_RX_DESC_ERROR_IPE_SHIFT = 3, - AVF_RX_DESC_ERROR_L4E_SHIFT = 4, - AVF_RX_DESC_ERROR_EIPE_SHIFT = 5, - AVF_RX_DESC_ERROR_OVERSIZE_SHIFT = 6, - AVF_RX_DESC_ERROR_PPRS_SHIFT = 7 + IAVF_RX_DESC_ERROR_RXE_SHIFT = 0, + IAVF_RX_DESC_ERROR_RECIPE_SHIFT = 1, + IAVF_RX_DESC_ERROR_HBO_SHIFT = 2, + IAVF_RX_DESC_ERROR_L3L4E_SHIFT = 3, /* 3 BITS */ + IAVF_RX_DESC_ERROR_IPE_SHIFT = 3, + IAVF_RX_DESC_ERROR_L4E_SHIFT = 4, + IAVF_RX_DESC_ERROR_EIPE_SHIFT = 5, + IAVF_RX_DESC_ERROR_OVERSIZE_SHIFT = 6, + IAVF_RX_DESC_ERROR_PPRS_SHIFT = 7 }; -enum avf_rx_desc_error_l3l4e_fcoe_masks { - AVF_RX_DESC_ERROR_L3L4E_NONE = 0, - AVF_RX_DESC_ERROR_L3L4E_PROT = 1, - AVF_RX_DESC_ERROR_L3L4E_FC = 2, - AVF_RX_DESC_ERROR_L3L4E_DMAC_ERR = 3, - AVF_RX_DESC_ERROR_L3L4E_DMAC_WARN = 4 +enum iavf_rx_desc_error_l3l4e_fcoe_masks { + IAVF_RX_DESC_ERROR_L3L4E_NONE = 0, + IAVF_RX_DESC_ERROR_L3L4E_PROT = 1, + IAVF_RX_DESC_ERROR_L3L4E_FC = 2, + IAVF_RX_DESC_ERROR_L3L4E_DMAC_ERR = 3, + IAVF_RX_DESC_ERROR_L3L4E_DMAC_WARN = 4 }; -#define AVF_RXD_QW1_PTYPE_SHIFT 30 -#define AVF_RXD_QW1_PTYPE_MASK (0xFFULL << AVF_RXD_QW1_PTYPE_SHIFT) +#define IAVF_RXD_QW1_PTYPE_SHIFT 30 +#define IAVF_RXD_QW1_PTYPE_MASK (0xFFULL << IAVF_RXD_QW1_PTYPE_SHIFT) /* Packet type non-ip values */ -enum avf_rx_l2_ptype { - AVF_RX_PTYPE_L2_RESERVED = 0, - AVF_RX_PTYPE_L2_MAC_PAY2 = 1, - AVF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, - AVF_RX_PTYPE_L2_FIP_PAY2 = 
3, - AVF_RX_PTYPE_L2_OUI_PAY2 = 4, - AVF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, - AVF_RX_PTYPE_L2_LLDP_PAY2 = 6, - AVF_RX_PTYPE_L2_ECP_PAY2 = 7, - AVF_RX_PTYPE_L2_EVB_PAY2 = 8, - AVF_RX_PTYPE_L2_QCN_PAY2 = 9, - AVF_RX_PTYPE_L2_EAPOL_PAY2 = 10, - AVF_RX_PTYPE_L2_ARP = 11, - AVF_RX_PTYPE_L2_FCOE_PAY3 = 12, - AVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, - AVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, - AVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, - AVF_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, - AVF_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, - AVF_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, - AVF_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, - AVF_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, - AVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, - AVF_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, - AVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, - AVF_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, - AVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 -}; - -struct avf_rx_ptype_decoded { +enum iavf_rx_l2_ptype { + IAVF_RX_PTYPE_L2_RESERVED = 0, + IAVF_RX_PTYPE_L2_MAC_PAY2 = 1, + IAVF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, + IAVF_RX_PTYPE_L2_FIP_PAY2 = 3, + IAVF_RX_PTYPE_L2_OUI_PAY2 = 4, + IAVF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, + IAVF_RX_PTYPE_L2_LLDP_PAY2 = 6, + IAVF_RX_PTYPE_L2_ECP_PAY2 = 7, + IAVF_RX_PTYPE_L2_EVB_PAY2 = 8, + IAVF_RX_PTYPE_L2_QCN_PAY2 = 9, + IAVF_RX_PTYPE_L2_EAPOL_PAY2 = 10, + IAVF_RX_PTYPE_L2_ARP = 11, + IAVF_RX_PTYPE_L2_FCOE_PAY3 = 12, + IAVF_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, + IAVF_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, + IAVF_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, + IAVF_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, + IAVF_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, + IAVF_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, + IAVF_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, + IAVF_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, + IAVF_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, + IAVF_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, + IAVF_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, + IAVF_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, + IAVF_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 +}; + +struct iavf_rx_ptype_decoded { u32 ptype:8; u32 known:1; u32 outer_ip:1; @@ -959,412 +959,412 @@ 
struct avf_rx_ptype_decoded { u32 payload_layer:3; }; -enum avf_rx_ptype_outer_ip { - AVF_RX_PTYPE_OUTER_L2 = 0, - AVF_RX_PTYPE_OUTER_IP = 1 +enum iavf_rx_ptype_outer_ip { + IAVF_RX_PTYPE_OUTER_L2 = 0, + IAVF_RX_PTYPE_OUTER_IP = 1 }; -enum avf_rx_ptype_outer_ip_ver { - AVF_RX_PTYPE_OUTER_NONE = 0, - AVF_RX_PTYPE_OUTER_IPV4 = 0, - AVF_RX_PTYPE_OUTER_IPV6 = 1 +enum iavf_rx_ptype_outer_ip_ver { + IAVF_RX_PTYPE_OUTER_NONE = 0, + IAVF_RX_PTYPE_OUTER_IPV4 = 0, + IAVF_RX_PTYPE_OUTER_IPV6 = 1 }; -enum avf_rx_ptype_outer_fragmented { - AVF_RX_PTYPE_NOT_FRAG = 0, - AVF_RX_PTYPE_FRAG = 1 +enum iavf_rx_ptype_outer_fragmented { + IAVF_RX_PTYPE_NOT_FRAG = 0, + IAVF_RX_PTYPE_FRAG = 1 }; -enum avf_rx_ptype_tunnel_type { - AVF_RX_PTYPE_TUNNEL_NONE = 0, - AVF_RX_PTYPE_TUNNEL_IP_IP = 1, - AVF_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - AVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, +enum iavf_rx_ptype_tunnel_type { + IAVF_RX_PTYPE_TUNNEL_NONE = 0, + IAVF_RX_PTYPE_TUNNEL_IP_IP = 1, + IAVF_RX_PTYPE_TUNNEL_IP_GRENAT = 2, + IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, + IAVF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, }; -enum avf_rx_ptype_tunnel_end_prot { - AVF_RX_PTYPE_TUNNEL_END_NONE = 0, - AVF_RX_PTYPE_TUNNEL_END_IPV4 = 1, - AVF_RX_PTYPE_TUNNEL_END_IPV6 = 2, +enum iavf_rx_ptype_tunnel_end_prot { + IAVF_RX_PTYPE_TUNNEL_END_NONE = 0, + IAVF_RX_PTYPE_TUNNEL_END_IPV4 = 1, + IAVF_RX_PTYPE_TUNNEL_END_IPV6 = 2, }; -enum avf_rx_ptype_inner_prot { - AVF_RX_PTYPE_INNER_PROT_NONE = 0, - AVF_RX_PTYPE_INNER_PROT_UDP = 1, - AVF_RX_PTYPE_INNER_PROT_TCP = 2, - AVF_RX_PTYPE_INNER_PROT_SCTP = 3, - AVF_RX_PTYPE_INNER_PROT_ICMP = 4, - AVF_RX_PTYPE_INNER_PROT_TIMESYNC = 5 +enum iavf_rx_ptype_inner_prot { + IAVF_RX_PTYPE_INNER_PROT_NONE = 0, + IAVF_RX_PTYPE_INNER_PROT_UDP = 1, + IAVF_RX_PTYPE_INNER_PROT_TCP = 2, + IAVF_RX_PTYPE_INNER_PROT_SCTP = 3, + IAVF_RX_PTYPE_INNER_PROT_ICMP = 4, + IAVF_RX_PTYPE_INNER_PROT_TIMESYNC = 5 }; -enum avf_rx_ptype_payload_layer { - 
AVF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - AVF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - AVF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - AVF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, +enum iavf_rx_ptype_payload_layer { + IAVF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, + IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, + IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, + IAVF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, }; -#define AVF_RX_PTYPE_BIT_MASK 0x0FFFFFFF -#define AVF_RX_PTYPE_SHIFT 56 +#define IAVF_RX_PTYPE_BIT_MASK 0x0FFFFFFF +#define IAVF_RX_PTYPE_SHIFT 56 -#define AVF_RXD_QW1_LENGTH_PBUF_SHIFT 38 -#define AVF_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ - AVF_RXD_QW1_LENGTH_PBUF_SHIFT) +#define IAVF_RXD_QW1_LENGTH_PBUF_SHIFT 38 +#define IAVF_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ + IAVF_RXD_QW1_LENGTH_PBUF_SHIFT) -#define AVF_RXD_QW1_LENGTH_HBUF_SHIFT 52 -#define AVF_RXD_QW1_LENGTH_HBUF_MASK (0x7FFULL << \ - AVF_RXD_QW1_LENGTH_HBUF_SHIFT) +#define IAVF_RXD_QW1_LENGTH_HBUF_SHIFT 52 +#define IAVF_RXD_QW1_LENGTH_HBUF_MASK (0x7FFULL << \ + IAVF_RXD_QW1_LENGTH_HBUF_SHIFT) -#define AVF_RXD_QW1_LENGTH_SPH_SHIFT 63 -#define AVF_RXD_QW1_LENGTH_SPH_MASK BIT_ULL(AVF_RXD_QW1_LENGTH_SPH_SHIFT) +#define IAVF_RXD_QW1_LENGTH_SPH_SHIFT 63 +#define IAVF_RXD_QW1_LENGTH_SPH_MASK BIT_ULL(IAVF_RXD_QW1_LENGTH_SPH_SHIFT) -#define AVF_RXD_QW1_NEXTP_SHIFT 38 -#define AVF_RXD_QW1_NEXTP_MASK (0x1FFFULL << AVF_RXD_QW1_NEXTP_SHIFT) +#define IAVF_RXD_QW1_NEXTP_SHIFT 38 +#define IAVF_RXD_QW1_NEXTP_MASK (0x1FFFULL << IAVF_RXD_QW1_NEXTP_SHIFT) -#define AVF_RXD_QW2_EXT_STATUS_SHIFT 0 -#define AVF_RXD_QW2_EXT_STATUS_MASK (0xFFFFFUL << \ - AVF_RXD_QW2_EXT_STATUS_SHIFT) +#define IAVF_RXD_QW2_EXT_STATUS_SHIFT 0 +#define IAVF_RXD_QW2_EXT_STATUS_MASK (0xFFFFFUL << \ + IAVF_RXD_QW2_EXT_STATUS_SHIFT) -enum avf_rx_desc_ext_status_bits { +enum iavf_rx_desc_ext_status_bits { /* Note: These are predefined bit offsets */ - AVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 0, - AVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT = 1, - AVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT = 2, /* 2 BITS */ - 
AVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT = 4, /* 2 BITS */ - AVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT = 9, - AVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT = 10, - AVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT = 11, + IAVF_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 0, + IAVF_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT = 1, + IAVF_RX_DESC_EXT_STATUS_FLEXBL_SHIFT = 2, /* 2 BITS */ + IAVF_RX_DESC_EXT_STATUS_FLEXBH_SHIFT = 4, /* 2 BITS */ + IAVF_RX_DESC_EXT_STATUS_FDLONGB_SHIFT = 9, + IAVF_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT = 10, + IAVF_RX_DESC_EXT_STATUS_PELONGB_SHIFT = 11, }; -#define AVF_RXD_QW2_L2TAG2_SHIFT 0 -#define AVF_RXD_QW2_L2TAG2_MASK (0xFFFFUL << AVF_RXD_QW2_L2TAG2_SHIFT) +#define IAVF_RXD_QW2_L2TAG2_SHIFT 0 +#define IAVF_RXD_QW2_L2TAG2_MASK (0xFFFFUL << IAVF_RXD_QW2_L2TAG2_SHIFT) -#define AVF_RXD_QW2_L2TAG3_SHIFT 16 -#define AVF_RXD_QW2_L2TAG3_MASK (0xFFFFUL << AVF_RXD_QW2_L2TAG3_SHIFT) +#define IAVF_RXD_QW2_L2TAG3_SHIFT 16 +#define IAVF_RXD_QW2_L2TAG3_MASK (0xFFFFUL << IAVF_RXD_QW2_L2TAG3_SHIFT) -enum avf_rx_desc_pe_status_bits { +enum iavf_rx_desc_pe_status_bits { /* Note: These are predefined bit offsets */ - AVF_RX_DESC_PE_STATUS_QPID_SHIFT = 0, /* 18 BITS */ - AVF_RX_DESC_PE_STATUS_L4PORT_SHIFT = 0, /* 16 BITS */ - AVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT = 16, /* 8 BITS */ - AVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT = 24, - AVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT = 25, - AVF_RX_DESC_PE_STATUS_PORTV_SHIFT = 26, - AVF_RX_DESC_PE_STATUS_URG_SHIFT = 27, - AVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT = 28, - AVF_RX_DESC_PE_STATUS_IPOPT_SHIFT = 29 + IAVF_RX_DESC_PE_STATUS_QPID_SHIFT = 0, /* 18 BITS */ + IAVF_RX_DESC_PE_STATUS_L4PORT_SHIFT = 0, /* 16 BITS */ + IAVF_RX_DESC_PE_STATUS_IPINDEX_SHIFT = 16, /* 8 BITS */ + IAVF_RX_DESC_PE_STATUS_QPIDHIT_SHIFT = 24, + IAVF_RX_DESC_PE_STATUS_APBVTHIT_SHIFT = 25, + IAVF_RX_DESC_PE_STATUS_PORTV_SHIFT = 26, + IAVF_RX_DESC_PE_STATUS_URG_SHIFT = 27, + IAVF_RX_DESC_PE_STATUS_IPFRAG_SHIFT = 28, + IAVF_RX_DESC_PE_STATUS_IPOPT_SHIFT = 29 }; -#define AVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT 38 
-#define AVF_RX_PROG_STATUS_DESC_LENGTH 0x2000000 +#define IAVF_RX_PROG_STATUS_DESC_LENGTH_SHIFT 38 +#define IAVF_RX_PROG_STATUS_DESC_LENGTH 0x2000000 -#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT 2 -#define AVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK (0x7UL << \ - AVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT) +#define IAVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT 2 +#define IAVF_RX_PROG_STATUS_DESC_QW1_PROGID_MASK (0x7UL << \ + IAVF_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT) -#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT 0 -#define AVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK (0x7FFFUL << \ - AVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT) +#define IAVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT 0 +#define IAVF_RX_PROG_STATUS_DESC_QW1_STATUS_MASK (0x7FFFUL << \ + IAVF_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT) -#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT 19 -#define AVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK (0x3FUL << \ - AVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT) +#define IAVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT 19 +#define IAVF_RX_PROG_STATUS_DESC_QW1_ERROR_MASK (0x3FUL << \ + IAVF_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT) -enum avf_rx_prog_status_desc_status_bits { +enum iavf_rx_prog_status_desc_status_bits { /* Note: These are predefined bit offsets */ - AVF_RX_PROG_STATUS_DESC_DD_SHIFT = 0, - AVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT = 2 /* 3 BITS */ + IAVF_RX_PROG_STATUS_DESC_DD_SHIFT = 0, + IAVF_RX_PROG_STATUS_DESC_PROG_ID_SHIFT = 2 /* 3 BITS */ }; -enum avf_rx_prog_status_desc_prog_id_masks { - AVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS = 1, - AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS = 2, - AVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS = 4, +enum iavf_rx_prog_status_desc_prog_id_masks { + IAVF_RX_PROG_STATUS_DESC_FD_FILTER_STATUS = 1, + IAVF_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS = 2, + IAVF_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS = 4, }; -enum avf_rx_prog_status_desc_error_bits { +enum iavf_rx_prog_status_desc_error_bits { /* Note: These are predefined bit offsets */ - 
AVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT = 0, - AVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT = 1, - AVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT = 2, - AVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT = 3 + IAVF_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT = 0, + IAVF_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT = 1, + IAVF_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT = 2, + IAVF_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT = 3 }; -#define AVF_TWO_BIT_MASK 0x3 -#define AVF_THREE_BIT_MASK 0x7 -#define AVF_FOUR_BIT_MASK 0xF -#define AVF_EIGHTEEN_BIT_MASK 0x3FFFF +#define IAVF_TWO_BIT_MASK 0x3 +#define IAVF_THREE_BIT_MASK 0x7 +#define IAVF_FOUR_BIT_MASK 0xF +#define IAVF_EIGHTEEN_BIT_MASK 0x3FFFF /* TX Descriptor */ -struct avf_tx_desc { +struct iavf_tx_desc { __le64 buffer_addr; /* Address of descriptor's data buf */ __le64 cmd_type_offset_bsz; }; -#define AVF_TXD_QW1_DTYPE_SHIFT 0 -#define AVF_TXD_QW1_DTYPE_MASK (0xFUL << AVF_TXD_QW1_DTYPE_SHIFT) - -enum avf_tx_desc_dtype_value { - AVF_TX_DESC_DTYPE_DATA = 0x0, - AVF_TX_DESC_DTYPE_NOP = 0x1, /* same as Context desc */ - AVF_TX_DESC_DTYPE_CONTEXT = 0x1, - AVF_TX_DESC_DTYPE_FCOE_CTX = 0x2, - AVF_TX_DESC_DTYPE_FILTER_PROG = 0x8, - AVF_TX_DESC_DTYPE_DDP_CTX = 0x9, - AVF_TX_DESC_DTYPE_FLEX_DATA = 0xB, - AVF_TX_DESC_DTYPE_FLEX_CTX_1 = 0xC, - AVF_TX_DESC_DTYPE_FLEX_CTX_2 = 0xD, - AVF_TX_DESC_DTYPE_DESC_DONE = 0xF -}; - -#define AVF_TXD_QW1_CMD_SHIFT 4 -#define AVF_TXD_QW1_CMD_MASK (0x3FFUL << AVF_TXD_QW1_CMD_SHIFT) - -enum avf_tx_desc_cmd_bits { - AVF_TX_DESC_CMD_EOP = 0x0001, - AVF_TX_DESC_CMD_RS = 0x0002, - AVF_TX_DESC_CMD_ICRC = 0x0004, - AVF_TX_DESC_CMD_IL2TAG1 = 0x0008, - AVF_TX_DESC_CMD_DUMMY = 0x0010, - AVF_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */ - AVF_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */ - AVF_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */ - AVF_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */ - AVF_TX_DESC_CMD_FCOET = 0x0080, - AVF_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */ - AVF_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */ 
- AVF_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */ - AVF_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */ - AVF_TX_DESC_CMD_L4T_EOFT_EOF_N = 0x0000, /* 2 BITS */ - AVF_TX_DESC_CMD_L4T_EOFT_EOF_T = 0x0100, /* 2 BITS */ - AVF_TX_DESC_CMD_L4T_EOFT_EOF_NI = 0x0200, /* 2 BITS */ - AVF_TX_DESC_CMD_L4T_EOFT_EOF_A = 0x0300, /* 2 BITS */ -}; - -#define AVF_TXD_QW1_OFFSET_SHIFT 16 -#define AVF_TXD_QW1_OFFSET_MASK (0x3FFFFULL << \ - AVF_TXD_QW1_OFFSET_SHIFT) - -enum avf_tx_desc_length_fields { +#define IAVF_TXD_QW1_DTYPE_SHIFT 0 +#define IAVF_TXD_QW1_DTYPE_MASK (0xFUL << IAVF_TXD_QW1_DTYPE_SHIFT) + +enum iavf_tx_desc_dtype_value { + IAVF_TX_DESC_DTYPE_DATA = 0x0, + IAVF_TX_DESC_DTYPE_NOP = 0x1, /* same as Context desc */ + IAVF_TX_DESC_DTYPE_CONTEXT = 0x1, + IAVF_TX_DESC_DTYPE_FCOE_CTX = 0x2, + IAVF_TX_DESC_DTYPE_FILTER_PROG = 0x8, + IAVF_TX_DESC_DTYPE_DDP_CTX = 0x9, + IAVF_TX_DESC_DTYPE_FLEX_DATA = 0xB, + IAVF_TX_DESC_DTYPE_FLEX_CTX_1 = 0xC, + IAVF_TX_DESC_DTYPE_FLEX_CTX_2 = 0xD, + IAVF_TX_DESC_DTYPE_DESC_DONE = 0xF +}; + +#define IAVF_TXD_QW1_CMD_SHIFT 4 +#define IAVF_TXD_QW1_CMD_MASK (0x3FFUL << IAVF_TXD_QW1_CMD_SHIFT) + +enum iavf_tx_desc_cmd_bits { + IAVF_TX_DESC_CMD_EOP = 0x0001, + IAVF_TX_DESC_CMD_RS = 0x0002, + IAVF_TX_DESC_CMD_ICRC = 0x0004, + IAVF_TX_DESC_CMD_IL2TAG1 = 0x0008, + IAVF_TX_DESC_CMD_DUMMY = 0x0010, + IAVF_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */ + IAVF_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */ + IAVF_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */ + IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */ + IAVF_TX_DESC_CMD_FCOET = 0x0080, + IAVF_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */ + IAVF_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */ + IAVF_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */ + IAVF_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */ + IAVF_TX_DESC_CMD_L4T_EOFT_EOF_N = 0x0000, /* 2 BITS */ + IAVF_TX_DESC_CMD_L4T_EOFT_EOF_T = 0x0100, /* 2 BITS */ + IAVF_TX_DESC_CMD_L4T_EOFT_EOF_NI = 0x0200, /* 2 BITS */ + 
IAVF_TX_DESC_CMD_L4T_EOFT_EOF_A = 0x0300, /* 2 BITS */ +}; + +#define IAVF_TXD_QW1_OFFSET_SHIFT 16 +#define IAVF_TXD_QW1_OFFSET_MASK (0x3FFFFULL << \ + IAVF_TXD_QW1_OFFSET_SHIFT) + +enum iavf_tx_desc_length_fields { /* Note: These are predefined bit offsets */ - AVF_TX_DESC_LENGTH_MACLEN_SHIFT = 0, /* 7 BITS */ - AVF_TX_DESC_LENGTH_IPLEN_SHIFT = 7, /* 7 BITS */ - AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT = 14 /* 4 BITS */ + IAVF_TX_DESC_LENGTH_MACLEN_SHIFT = 0, /* 7 BITS */ + IAVF_TX_DESC_LENGTH_IPLEN_SHIFT = 7, /* 7 BITS */ + IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT = 14 /* 4 BITS */ }; -#define AVF_TXD_QW1_MACLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_MACLEN_SHIFT) -#define AVF_TXD_QW1_IPLEN_MASK (0x7FUL << AVF_TX_DESC_LENGTH_IPLEN_SHIFT) -#define AVF_TXD_QW1_L4LEN_MASK (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) -#define AVF_TXD_QW1_FCLEN_MASK (0xFUL << AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) +#define IAVF_TXD_QW1_MACLEN_MASK (0x7FUL << IAVF_TX_DESC_LENGTH_MACLEN_SHIFT) +#define IAVF_TXD_QW1_IPLEN_MASK (0x7FUL << IAVF_TX_DESC_LENGTH_IPLEN_SHIFT) +#define IAVF_TXD_QW1_L4LEN_MASK (0xFUL << IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) +#define IAVF_TXD_QW1_FCLEN_MASK (0xFUL << IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) -#define AVF_TXD_QW1_TX_BUF_SZ_SHIFT 34 -#define AVF_TXD_QW1_TX_BUF_SZ_MASK (0x3FFFULL << \ - AVF_TXD_QW1_TX_BUF_SZ_SHIFT) +#define IAVF_TXD_QW1_TX_BUF_SZ_SHIFT 34 +#define IAVF_TXD_QW1_TX_BUF_SZ_MASK (0x3FFFULL << \ + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT) -#define AVF_TXD_QW1_L2TAG1_SHIFT 48 -#define AVF_TXD_QW1_L2TAG1_MASK (0xFFFFULL << AVF_TXD_QW1_L2TAG1_SHIFT) +#define IAVF_TXD_QW1_L2TAG1_SHIFT 48 +#define IAVF_TXD_QW1_L2TAG1_MASK (0xFFFFULL << IAVF_TXD_QW1_L2TAG1_SHIFT) /* Context descriptors */ -struct avf_tx_context_desc { +struct iavf_tx_context_desc { __le32 tunneling_params; __le16 l2tag2; __le16 rsvd; __le64 type_cmd_tso_mss; }; -#define AVF_TXD_CTX_QW1_DTYPE_SHIFT 0 -#define AVF_TXD_CTX_QW1_DTYPE_MASK (0xFUL << AVF_TXD_CTX_QW1_DTYPE_SHIFT) +#define IAVF_TXD_CTX_QW1_DTYPE_SHIFT 
0 +#define IAVF_TXD_CTX_QW1_DTYPE_MASK (0xFUL << IAVF_TXD_CTX_QW1_DTYPE_SHIFT) -#define AVF_TXD_CTX_QW1_CMD_SHIFT 4 -#define AVF_TXD_CTX_QW1_CMD_MASK (0xFFFFUL << AVF_TXD_CTX_QW1_CMD_SHIFT) +#define IAVF_TXD_CTX_QW1_CMD_SHIFT 4 +#define IAVF_TXD_CTX_QW1_CMD_MASK (0xFFFFUL << IAVF_TXD_CTX_QW1_CMD_SHIFT) -enum avf_tx_ctx_desc_cmd_bits { - AVF_TX_CTX_DESC_TSO = 0x01, - AVF_TX_CTX_DESC_TSYN = 0x02, - AVF_TX_CTX_DESC_IL2TAG2 = 0x04, - AVF_TX_CTX_DESC_IL2TAG2_IL2H = 0x08, - AVF_TX_CTX_DESC_SWTCH_NOTAG = 0x00, - AVF_TX_CTX_DESC_SWTCH_UPLINK = 0x10, - AVF_TX_CTX_DESC_SWTCH_LOCAL = 0x20, - AVF_TX_CTX_DESC_SWTCH_VSI = 0x30, - AVF_TX_CTX_DESC_SWPE = 0x40 +enum iavf_tx_ctx_desc_cmd_bits { + IAVF_TX_CTX_DESC_TSO = 0x01, + IAVF_TX_CTX_DESC_TSYN = 0x02, + IAVF_TX_CTX_DESC_IL2TAG2 = 0x04, + IAVF_TX_CTX_DESC_IL2TAG2_IL2H = 0x08, + IAVF_TX_CTX_DESC_SWTCH_NOTAG = 0x00, + IAVF_TX_CTX_DESC_SWTCH_UPLINK = 0x10, + IAVF_TX_CTX_DESC_SWTCH_LOCAL = 0x20, + IAVF_TX_CTX_DESC_SWTCH_VSI = 0x30, + IAVF_TX_CTX_DESC_SWPE = 0x40 }; -#define AVF_TXD_CTX_QW1_TSO_LEN_SHIFT 30 -#define AVF_TXD_CTX_QW1_TSO_LEN_MASK (0x3FFFFULL << \ - AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) +#define IAVF_TXD_CTX_QW1_TSO_LEN_SHIFT 30 +#define IAVF_TXD_CTX_QW1_TSO_LEN_MASK (0x3FFFFULL << \ + IAVF_TXD_CTX_QW1_TSO_LEN_SHIFT) -#define AVF_TXD_CTX_QW1_MSS_SHIFT 50 -#define AVF_TXD_CTX_QW1_MSS_MASK (0x3FFFULL << \ - AVF_TXD_CTX_QW1_MSS_SHIFT) +#define IAVF_TXD_CTX_QW1_MSS_SHIFT 50 +#define IAVF_TXD_CTX_QW1_MSS_MASK (0x3FFFULL << \ + IAVF_TXD_CTX_QW1_MSS_SHIFT) -#define AVF_TXD_CTX_QW1_VSI_SHIFT 50 -#define AVF_TXD_CTX_QW1_VSI_MASK (0x1FFULL << AVF_TXD_CTX_QW1_VSI_SHIFT) +#define IAVF_TXD_CTX_QW1_VSI_SHIFT 50 +#define IAVF_TXD_CTX_QW1_VSI_MASK (0x1FFULL << IAVF_TXD_CTX_QW1_VSI_SHIFT) -#define AVF_TXD_CTX_QW0_EXT_IP_SHIFT 0 -#define AVF_TXD_CTX_QW0_EXT_IP_MASK (0x3ULL << \ - AVF_TXD_CTX_QW0_EXT_IP_SHIFT) +#define IAVF_TXD_CTX_QW0_EXT_IP_SHIFT 0 +#define IAVF_TXD_CTX_QW0_EXT_IP_MASK (0x3ULL << \ + IAVF_TXD_CTX_QW0_EXT_IP_SHIFT) -enum 
avf_tx_ctx_desc_eipt_offload { - AVF_TX_CTX_EXT_IP_NONE = 0x0, - AVF_TX_CTX_EXT_IP_IPV6 = 0x1, - AVF_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2, - AVF_TX_CTX_EXT_IP_IPV4 = 0x3 +enum iavf_tx_ctx_desc_eipt_offload { + IAVF_TX_CTX_EXT_IP_NONE = 0x0, + IAVF_TX_CTX_EXT_IP_IPV6 = 0x1, + IAVF_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2, + IAVF_TX_CTX_EXT_IP_IPV4 = 0x3 }; -#define AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT 2 -#define AVF_TXD_CTX_QW0_EXT_IPLEN_MASK (0x3FULL << \ - AVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT) +#define IAVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT 2 +#define IAVF_TXD_CTX_QW0_EXT_IPLEN_MASK (0x3FULL << \ + IAVF_TXD_CTX_QW0_EXT_IPLEN_SHIFT) -#define AVF_TXD_CTX_QW0_NATT_SHIFT 9 -#define AVF_TXD_CTX_QW0_NATT_MASK (0x3ULL << AVF_TXD_CTX_QW0_NATT_SHIFT) +#define IAVF_TXD_CTX_QW0_NATT_SHIFT 9 +#define IAVF_TXD_CTX_QW0_NATT_MASK (0x3ULL << IAVF_TXD_CTX_QW0_NATT_SHIFT) -#define AVF_TXD_CTX_UDP_TUNNELING BIT_ULL(AVF_TXD_CTX_QW0_NATT_SHIFT) -#define AVF_TXD_CTX_GRE_TUNNELING (0x2ULL << AVF_TXD_CTX_QW0_NATT_SHIFT) +#define IAVF_TXD_CTX_UDP_TUNNELING BIT_ULL(IAVF_TXD_CTX_QW0_NATT_SHIFT) +#define IAVF_TXD_CTX_GRE_TUNNELING (0x2ULL << IAVF_TXD_CTX_QW0_NATT_SHIFT) -#define AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT 11 -#define AVF_TXD_CTX_QW0_EIP_NOINC_MASK BIT_ULL(AVF_TXD_CTX_QW0_EIP_NOINC_SHIFT) +#define IAVF_TXD_CTX_QW0_EIP_NOINC_SHIFT 11 +#define IAVF_TXD_CTX_QW0_EIP_NOINC_MASK BIT_ULL(IAVF_TXD_CTX_QW0_EIP_NOINC_SHIFT) -#define AVF_TXD_CTX_EIP_NOINC_IPID_CONST AVF_TXD_CTX_QW0_EIP_NOINC_MASK +#define IAVF_TXD_CTX_EIP_NOINC_IPID_CONST IAVF_TXD_CTX_QW0_EIP_NOINC_MASK -#define AVF_TXD_CTX_QW0_NATLEN_SHIFT 12 -#define AVF_TXD_CTX_QW0_NATLEN_MASK (0X7FULL << \ - AVF_TXD_CTX_QW0_NATLEN_SHIFT) +#define IAVF_TXD_CTX_QW0_NATLEN_SHIFT 12 +#define IAVF_TXD_CTX_QW0_NATLEN_MASK (0X7FULL << \ + IAVF_TXD_CTX_QW0_NATLEN_SHIFT) -#define AVF_TXD_CTX_QW0_DECTTL_SHIFT 19 -#define AVF_TXD_CTX_QW0_DECTTL_MASK (0xFULL << \ - AVF_TXD_CTX_QW0_DECTTL_SHIFT) +#define IAVF_TXD_CTX_QW0_DECTTL_SHIFT 19 +#define IAVF_TXD_CTX_QW0_DECTTL_MASK (0xFULL << 
\ + IAVF_TXD_CTX_QW0_DECTTL_SHIFT) -#define AVF_TXD_CTX_QW0_L4T_CS_SHIFT 23 -#define AVF_TXD_CTX_QW0_L4T_CS_MASK BIT_ULL(AVF_TXD_CTX_QW0_L4T_CS_SHIFT) -struct avf_nop_desc { +#define IAVF_TXD_CTX_QW0_L4T_CS_SHIFT 23 +#define IAVF_TXD_CTX_QW0_L4T_CS_MASK BIT_ULL(IAVF_TXD_CTX_QW0_L4T_CS_SHIFT) +struct iavf_nop_desc { __le64 rsvd; __le64 dtype_cmd; }; -#define AVF_TXD_NOP_QW1_DTYPE_SHIFT 0 -#define AVF_TXD_NOP_QW1_DTYPE_MASK (0xFUL << AVF_TXD_NOP_QW1_DTYPE_SHIFT) +#define IAVF_TXD_NOP_QW1_DTYPE_SHIFT 0 +#define IAVF_TXD_NOP_QW1_DTYPE_MASK (0xFUL << IAVF_TXD_NOP_QW1_DTYPE_SHIFT) -#define AVF_TXD_NOP_QW1_CMD_SHIFT 4 -#define AVF_TXD_NOP_QW1_CMD_MASK (0x7FUL << AVF_TXD_NOP_QW1_CMD_SHIFT) +#define IAVF_TXD_NOP_QW1_CMD_SHIFT 4 +#define IAVF_TXD_NOP_QW1_CMD_MASK (0x7FUL << IAVF_TXD_NOP_QW1_CMD_SHIFT) -enum avf_tx_nop_desc_cmd_bits { +enum iavf_tx_nop_desc_cmd_bits { /* Note: These are predefined bit offsets */ - AVF_TX_NOP_DESC_EOP_SHIFT = 0, - AVF_TX_NOP_DESC_RS_SHIFT = 1, - AVF_TX_NOP_DESC_RSV_SHIFT = 2 /* 5 bits */ + IAVF_TX_NOP_DESC_EOP_SHIFT = 0, + IAVF_TX_NOP_DESC_RS_SHIFT = 1, + IAVF_TX_NOP_DESC_RSV_SHIFT = 2 /* 5 bits */ }; -struct avf_filter_program_desc { +struct iavf_filter_program_desc { __le32 qindex_flex_ptype_vsi; __le32 rsvd; __le32 dtype_cmd_cntindex; __le32 fd_id; }; -#define AVF_TXD_FLTR_QW0_QINDEX_SHIFT 0 -#define AVF_TXD_FLTR_QW0_QINDEX_MASK (0x7FFUL << \ - AVF_TXD_FLTR_QW0_QINDEX_SHIFT) -#define AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT 11 -#define AVF_TXD_FLTR_QW0_FLEXOFF_MASK (0x7UL << \ - AVF_TXD_FLTR_QW0_FLEXOFF_SHIFT) -#define AVF_TXD_FLTR_QW0_PCTYPE_SHIFT 17 -#define AVF_TXD_FLTR_QW0_PCTYPE_MASK (0x3FUL << \ - AVF_TXD_FLTR_QW0_PCTYPE_SHIFT) +#define IAVF_TXD_FLTR_QW0_QINDEX_SHIFT 0 +#define IAVF_TXD_FLTR_QW0_QINDEX_MASK (0x7FFUL << \ + IAVF_TXD_FLTR_QW0_QINDEX_SHIFT) +#define IAVF_TXD_FLTR_QW0_FLEXOFF_SHIFT 11 +#define IAVF_TXD_FLTR_QW0_FLEXOFF_MASK (0x7UL << \ + IAVF_TXD_FLTR_QW0_FLEXOFF_SHIFT) +#define IAVF_TXD_FLTR_QW0_PCTYPE_SHIFT 17 +#define 
IAVF_TXD_FLTR_QW0_PCTYPE_MASK (0x3FUL << \ + IAVF_TXD_FLTR_QW0_PCTYPE_SHIFT) /* Packet Classifier Types for filters */ -enum avf_filter_pctype { +enum iavf_filter_pctype { /* Note: Values 0-28 are reserved for future use. * Value 29, 30, 32 are not supported on XL710 and X710. */ - AVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP = 29, - AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP = 30, - AVF_FILTER_PCTYPE_NONF_IPV4_UDP = 31, - AVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK = 32, - AVF_FILTER_PCTYPE_NONF_IPV4_TCP = 33, - AVF_FILTER_PCTYPE_NONF_IPV4_SCTP = 34, - AVF_FILTER_PCTYPE_NONF_IPV4_OTHER = 35, - AVF_FILTER_PCTYPE_FRAG_IPV4 = 36, + IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP = 29, + IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP = 30, + IAVF_FILTER_PCTYPE_NONF_IPV4_UDP = 31, + IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK = 32, + IAVF_FILTER_PCTYPE_NONF_IPV4_TCP = 33, + IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP = 34, + IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER = 35, + IAVF_FILTER_PCTYPE_FRAG_IPV4 = 36, /* Note: Values 37-38 are reserved for future use. * Value 39, 40, 42 are not supported on XL710 and X710. 
 */
-	AVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP = 39,
-	AVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP = 40,
-	AVF_FILTER_PCTYPE_NONF_IPV6_UDP = 41,
-	AVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK = 42,
-	AVF_FILTER_PCTYPE_NONF_IPV6_TCP = 43,
-	AVF_FILTER_PCTYPE_NONF_IPV6_SCTP = 44,
-	AVF_FILTER_PCTYPE_NONF_IPV6_OTHER = 45,
-	AVF_FILTER_PCTYPE_FRAG_IPV6 = 46,
+	IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP = 39,
+	IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP = 40,
+	IAVF_FILTER_PCTYPE_NONF_IPV6_UDP = 41,
+	IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK = 42,
+	IAVF_FILTER_PCTYPE_NONF_IPV6_TCP = 43,
+	IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP = 44,
+	IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER = 45,
+	IAVF_FILTER_PCTYPE_FRAG_IPV6 = 46,
 	/* Note: Value 47 is reserved for future use */
-	AVF_FILTER_PCTYPE_FCOE_OX = 48,
-	AVF_FILTER_PCTYPE_FCOE_RX = 49,
-	AVF_FILTER_PCTYPE_FCOE_OTHER = 50,
+	IAVF_FILTER_PCTYPE_FCOE_OX = 48,
+	IAVF_FILTER_PCTYPE_FCOE_RX = 49,
+	IAVF_FILTER_PCTYPE_FCOE_OTHER = 50,
 	/* Note: Values 51-62 are reserved for future use */
-	AVF_FILTER_PCTYPE_L2_PAYLOAD = 63,
+	IAVF_FILTER_PCTYPE_L2_PAYLOAD = 63,
 };

-enum avf_filter_program_desc_dest {
-	AVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET = 0x0,
-	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX = 0x1,
-	AVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER = 0x2,
+enum iavf_filter_program_desc_dest {
+	IAVF_FILTER_PROGRAM_DESC_DEST_DROP_PACKET = 0x0,
+	IAVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX = 0x1,
+	IAVF_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER = 0x2,
 };

-enum avf_filter_program_desc_fd_status {
-	AVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE = 0x0,
-	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID = 0x1,
-	AVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES = 0x2,
-	AVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES = 0x3,
+enum iavf_filter_program_desc_fd_status {
+	IAVF_FILTER_PROGRAM_DESC_FD_STATUS_NONE = 0x0,
+	IAVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID = 0x1,
+	IAVF_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES = 0x2,
+	IAVF_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES = 0x3,
 };

-#define AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
-#define AVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
-					 AVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)
+#define IAVF_TXD_FLTR_QW0_DEST_VSI_SHIFT	23
+#define IAVF_TXD_FLTR_QW0_DEST_VSI_MASK	(0x1FFUL << \
+					 IAVF_TXD_FLTR_QW0_DEST_VSI_SHIFT)

-#define AVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
-#define AVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << AVF_TXD_FLTR_QW1_DTYPE_SHIFT)
+#define IAVF_TXD_FLTR_QW1_DTYPE_SHIFT	0
+#define IAVF_TXD_FLTR_QW1_DTYPE_MASK	(0xFUL << IAVF_TXD_FLTR_QW1_DTYPE_SHIFT)

-#define AVF_TXD_FLTR_QW1_CMD_SHIFT	4
-#define AVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
-					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define IAVF_TXD_FLTR_QW1_CMD_SHIFT	4
+#define IAVF_TXD_FLTR_QW1_CMD_MASK	(0xFFFFULL << \
+					 IAVF_TXD_FLTR_QW1_CMD_SHIFT)

-#define AVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
-#define AVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << AVF_TXD_FLTR_QW1_PCMD_SHIFT)
+#define IAVF_TXD_FLTR_QW1_PCMD_SHIFT	(0x0ULL + IAVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define IAVF_TXD_FLTR_QW1_PCMD_MASK	(0x7ULL << IAVF_TXD_FLTR_QW1_PCMD_SHIFT)

-enum avf_filter_program_desc_pcmd {
-	AVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE = 0x1,
-	AVF_FILTER_PROGRAM_DESC_PCMD_REMOVE = 0x2,
+enum iavf_filter_program_desc_pcmd {
+	IAVF_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE = 0x1,
+	IAVF_FILTER_PROGRAM_DESC_PCMD_REMOVE = 0x2,
 };

-#define AVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
-#define AVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << AVF_TXD_FLTR_QW1_DEST_SHIFT)
+#define IAVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + IAVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define IAVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << IAVF_TXD_FLTR_QW1_DEST_SHIFT)

-#define AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + AVF_TXD_FLTR_QW1_CMD_SHIFT)
-#define AVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)
+#define IAVF_TXD_FLTR_QW1_CNT_ENA_SHIFT	(0x7ULL + IAVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define IAVF_TXD_FLTR_QW1_CNT_ENA_MASK	BIT_ULL(IAVF_TXD_FLTR_QW1_CNT_ENA_SHIFT)

-#define AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
-						 AVF_TXD_FLTR_QW1_CMD_SHIFT)
-#define AVF_TXD_FLTR_QW1_FD_STATUS_MASK	(0x3ULL << \
-					 AVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)
+#define IAVF_TXD_FLTR_QW1_FD_STATUS_SHIFT	(0x9ULL + \
+						 IAVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define IAVF_TXD_FLTR_QW1_FD_STATUS_MASK	(0x3ULL << \
+					 IAVF_TXD_FLTR_QW1_FD_STATUS_SHIFT)

-#define AVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
-					 AVF_TXD_FLTR_QW1_CMD_SHIFT)
-#define AVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(AVF_TXD_FLTR_QW1_ATR_SHIFT)
+#define IAVF_TXD_FLTR_QW1_ATR_SHIFT	(0xEULL + \
+					 IAVF_TXD_FLTR_QW1_CMD_SHIFT)
+#define IAVF_TXD_FLTR_QW1_ATR_MASK	BIT_ULL(IAVF_TXD_FLTR_QW1_ATR_SHIFT)

-#define AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT	20
-#define AVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
-					 AVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)
+#define IAVF_TXD_FLTR_QW1_CNTINDEX_SHIFT	20
+#define IAVF_TXD_FLTR_QW1_CNTINDEX_MASK	(0x1FFUL << \
+					 IAVF_TXD_FLTR_QW1_CNTINDEX_SHIFT)

-enum avf_filter_type {
-	AVF_FLOW_DIRECTOR_FLTR = 0,
-	AVF_PE_QUAD_HASH_FLTR = 1,
-	AVF_ETHERTYPE_FLTR,
-	AVF_FCOE_CTX_FLTR,
-	AVF_MAC_VLAN_FLTR,
-	AVF_HASH_FLTR
+enum iavf_filter_type {
+	IAVF_FLOW_DIRECTOR_FLTR = 0,
+	IAVF_PE_QUAD_HASH_FLTR = 1,
+	IAVF_ETHERTYPE_FLTR,
+	IAVF_FCOE_CTX_FLTR,
+	IAVF_MAC_VLAN_FLTR,
+	IAVF_HASH_FLTR
 };

-struct avf_vsi_context {
+struct iavf_vsi_context {
 	u16 seid;
 	u16 uplink_seid;
 	u16 vsi_number;
@@ -1374,21 +1374,21 @@ struct avf_vsi_context {
 	u8 pf_num;
 	u8 vf_num;
 	u8 connection_type;
-	struct avf_aqc_vsi_properties_data info;
+	struct iavf_aqc_vsi_properties_data info;
 };

-struct avf_veb_context {
+struct iavf_veb_context {
 	u16 seid;
 	u16 uplink_seid;
 	u16 veb_number;
 	u16 vebs_allocated;
 	u16 vebs_unallocated;
 	u16 flags;
-	struct avf_aqc_get_veb_parameters_completion info;
+	struct iavf_aqc_get_veb_parameters_completion info;
 };

 /* Statistics collected by each port, VSI, VEB, and S-channel */
-struct avf_eth_stats {
+struct iavf_eth_stats {
 	u64 rx_bytes;		/* gorc */
 	u64 rx_unicast;		/* uprc
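[Side note, not part of the diff: the QW1 SHIFT/MASK pairs renamed above all describe sub-fields of the 64-bit filter-program Tx descriptor word. A minimal sketch of how such a pair is typically used — `set_dest()` is an illustrative helper, not a function from this patch:]

```c
#include <stdint.h>

/* Shift/mask values copied from the renamed macros above: the 2-bit DEST
 * field lives inside the CMD field, which starts at bit 4 of QW1. */
#define IAVF_TXD_FLTR_QW1_CMD_SHIFT	4
#define IAVF_TXD_FLTR_QW1_DEST_SHIFT	(0x3ULL + IAVF_TXD_FLTR_QW1_CMD_SHIFT)
#define IAVF_TXD_FLTR_QW1_DEST_MASK	(0x3ULL << IAVF_TXD_FLTR_QW1_DEST_SHIFT)

/* Hypothetical helper: replace the DEST sub-field of a QW1 word. */
static uint64_t set_dest(uint64_t qw1, uint64_t dest)
{
	qw1 &= ~IAVF_TXD_FLTR_QW1_DEST_MASK;		/* clear old value */
	qw1 |= (dest << IAVF_TXD_FLTR_QW1_DEST_SHIFT) &
	       IAVF_TXD_FLTR_QW1_DEST_MASK;		/* insert new one  */
	return qw1;
}
```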
*/ u64 rx_multicast; /* mprc */ @@ -1404,15 +1404,15 @@ struct avf_eth_stats { }; /* Statistics collected per VEB per TC */ -struct avf_veb_tc_stats { - u64 tc_rx_packets[AVF_MAX_TRAFFIC_CLASS]; - u64 tc_rx_bytes[AVF_MAX_TRAFFIC_CLASS]; - u64 tc_tx_packets[AVF_MAX_TRAFFIC_CLASS]; - u64 tc_tx_bytes[AVF_MAX_TRAFFIC_CLASS]; +struct iavf_veb_tc_stats { + u64 tc_rx_packets[IAVF_MAX_TRAFFIC_CLASS]; + u64 tc_rx_bytes[IAVF_MAX_TRAFFIC_CLASS]; + u64 tc_tx_packets[IAVF_MAX_TRAFFIC_CLASS]; + u64 tc_tx_bytes[IAVF_MAX_TRAFFIC_CLASS]; }; /* Statistics collected per function for FCoE */ -struct avf_fcoe_stats { +struct iavf_fcoe_stats { u64 rx_fcoe_packets; /* fcoeprc */ u64 rx_fcoe_dwords; /* focedwrc */ u64 rx_fcoe_dropped; /* fcoerpdc */ @@ -1424,14 +1424,14 @@ struct avf_fcoe_stats { }; /* offset to per function FCoE statistics block */ -#define AVF_FCOE_VF_STAT_OFFSET 0 -#define AVF_FCOE_PF_STAT_OFFSET 128 -#define AVF_FCOE_STAT_MAX (AVF_FCOE_PF_STAT_OFFSET + AVF_MAX_PF) +#define IAVF_FCOE_VF_STAT_OFFSET 0 +#define IAVF_FCOE_PF_STAT_OFFSET 128 +#define IAVF_FCOE_STAT_MAX (IAVF_FCOE_PF_STAT_OFFSET + IAVF_MAX_PF) /* Statistics collected by the MAC */ -struct avf_hw_port_stats { +struct iavf_hw_port_stats { /* eth stats collected by the port */ - struct avf_eth_stats eth; + struct iavf_eth_stats eth; /* additional port specific stats */ u64 tx_dropped_link_down; /* tdold */ @@ -1484,243 +1484,243 @@ struct avf_hw_port_stats { }; /* Checksum and Shadow RAM pointers */ -#define AVF_SR_NVM_CONTROL_WORD 0x00 -#define AVF_SR_PCIE_ANALOG_CONFIG_PTR 0x03 -#define AVF_SR_PHY_ANALOG_CONFIG_PTR 0x04 -#define AVF_SR_OPTION_ROM_PTR 0x05 -#define AVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR 0x06 -#define AVF_SR_AUTO_GENERATED_POINTERS_PTR 0x07 -#define AVF_SR_PCIR_REGS_AUTO_LOAD_PTR 0x08 -#define AVF_SR_EMP_GLOBAL_MODULE_PTR 0x09 -#define AVF_SR_RO_PCIE_LCB_PTR 0x0A -#define AVF_SR_EMP_IMAGE_PTR 0x0B -#define AVF_SR_PE_IMAGE_PTR 0x0C -#define AVF_SR_CSR_PROTECTED_LIST_PTR 0x0D -#define 
AVF_SR_MNG_CONFIG_PTR 0x0E -#define AVF_EMP_MODULE_PTR 0x0F -#define AVF_SR_EMP_MODULE_PTR 0x48 -#define AVF_SR_PBA_FLAGS 0x15 -#define AVF_SR_PBA_BLOCK_PTR 0x16 -#define AVF_SR_BOOT_CONFIG_PTR 0x17 -#define AVF_NVM_OEM_VER_OFF 0x83 -#define AVF_SR_NVM_DEV_STARTER_VERSION 0x18 -#define AVF_SR_NVM_WAKE_ON_LAN 0x19 -#define AVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR 0x27 -#define AVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR 0x28 -#define AVF_SR_NVM_MAP_VERSION 0x29 -#define AVF_SR_NVM_IMAGE_VERSION 0x2A -#define AVF_SR_NVM_STRUCTURE_VERSION 0x2B -#define AVF_SR_NVM_EETRACK_LO 0x2D -#define AVF_SR_NVM_EETRACK_HI 0x2E -#define AVF_SR_VPD_PTR 0x2F -#define AVF_SR_PXE_SETUP_PTR 0x30 -#define AVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR 0x31 -#define AVF_SR_NVM_ORIGINAL_EETRACK_LO 0x34 -#define AVF_SR_NVM_ORIGINAL_EETRACK_HI 0x35 -#define AVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR 0x37 -#define AVF_SR_POR_REGS_AUTO_LOAD_PTR 0x38 -#define AVF_SR_EMPR_REGS_AUTO_LOAD_PTR 0x3A -#define AVF_SR_GLOBR_REGS_AUTO_LOAD_PTR 0x3B -#define AVF_SR_CORER_REGS_AUTO_LOAD_PTR 0x3C -#define AVF_SR_PHY_ACTIVITY_LIST_PTR 0x3D -#define AVF_SR_PCIE_ALT_AUTO_LOAD_PTR 0x3E -#define AVF_SR_SW_CHECKSUM_WORD 0x3F -#define AVF_SR_1ST_FREE_PROVISION_AREA_PTR 0x40 -#define AVF_SR_4TH_FREE_PROVISION_AREA_PTR 0x42 -#define AVF_SR_3RD_FREE_PROVISION_AREA_PTR 0x44 -#define AVF_SR_2ND_FREE_PROVISION_AREA_PTR 0x46 -#define AVF_SR_EMP_SR_SETTINGS_PTR 0x48 -#define AVF_SR_FEATURE_CONFIGURATION_PTR 0x49 -#define AVF_SR_CONFIGURATION_METADATA_PTR 0x4D -#define AVF_SR_IMMEDIATE_VALUES_PTR 0x4E +#define IAVF_SR_NVM_CONTROL_WORD 0x00 +#define IAVF_SR_PCIE_ANALOG_CONFIG_PTR 0x03 +#define IAVF_SR_PHY_ANALOG_CONFIG_PTR 0x04 +#define IAVF_SR_OPTION_ROM_PTR 0x05 +#define IAVF_SR_RO_PCIR_REGS_AUTO_LOAD_PTR 0x06 +#define IAVF_SR_AUTO_GENERATED_POINTERS_PTR 0x07 +#define IAVF_SR_PCIR_REGS_AUTO_LOAD_PTR 0x08 +#define IAVF_SR_EMP_GLOBAL_MODULE_PTR 0x09 +#define IAVF_SR_RO_PCIE_LCB_PTR 0x0A +#define IAVF_SR_EMP_IMAGE_PTR 0x0B +#define IAVF_SR_PE_IMAGE_PTR 
0x0C +#define IAVF_SR_CSR_PROTECTED_LIST_PTR 0x0D +#define IAVF_SR_MNG_CONFIG_PTR 0x0E +#define IAVF_EMP_MODULE_PTR 0x0F +#define IAVF_SR_EMP_MODULE_PTR 0x48 +#define IAVF_SR_PBA_FLAGS 0x15 +#define IAVF_SR_PBA_BLOCK_PTR 0x16 +#define IAVF_SR_BOOT_CONFIG_PTR 0x17 +#define IAVF_NVM_OEM_VER_OFF 0x83 +#define IAVF_SR_NVM_DEV_STARTER_VERSION 0x18 +#define IAVF_SR_NVM_WAKE_ON_LAN 0x19 +#define IAVF_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR 0x27 +#define IAVF_SR_PERMANENT_SAN_MAC_ADDRESS_PTR 0x28 +#define IAVF_SR_NVM_MAP_VERSION 0x29 +#define IAVF_SR_NVM_IMAGE_VERSION 0x2A +#define IAVF_SR_NVM_STRUCTURE_VERSION 0x2B +#define IAVF_SR_NVM_EETRACK_LO 0x2D +#define IAVF_SR_NVM_EETRACK_HI 0x2E +#define IAVF_SR_VPD_PTR 0x2F +#define IAVF_SR_PXE_SETUP_PTR 0x30 +#define IAVF_SR_PXE_CONFIG_CUST_OPTIONS_PTR 0x31 +#define IAVF_SR_NVM_ORIGINAL_EETRACK_LO 0x34 +#define IAVF_SR_NVM_ORIGINAL_EETRACK_HI 0x35 +#define IAVF_SR_SW_ETHERNET_MAC_ADDRESS_PTR 0x37 +#define IAVF_SR_POR_REGS_AUTO_LOAD_PTR 0x38 +#define IAVF_SR_EMPR_REGS_AUTO_LOAD_PTR 0x3A +#define IAVF_SR_GLOBR_REGS_AUTO_LOAD_PTR 0x3B +#define IAVF_SR_CORER_REGS_AUTO_LOAD_PTR 0x3C +#define IAVF_SR_PHY_ACTIVITY_LIST_PTR 0x3D +#define IAVF_SR_PCIE_ALT_AUTO_LOAD_PTR 0x3E +#define IAVF_SR_SW_CHECKSUM_WORD 0x3F +#define IAVF_SR_1ST_FREE_PROVISION_AREA_PTR 0x40 +#define IAVF_SR_4TH_FREE_PROVISION_AREA_PTR 0x42 +#define IAVF_SR_3RD_FREE_PROVISION_AREA_PTR 0x44 +#define IAVF_SR_2ND_FREE_PROVISION_AREA_PTR 0x46 +#define IAVF_SR_EMP_SR_SETTINGS_PTR 0x48 +#define IAVF_SR_FEATURE_CONFIGURATION_PTR 0x49 +#define IAVF_SR_CONFIGURATION_METADATA_PTR 0x4D +#define IAVF_SR_IMMEDIATE_VALUES_PTR 0x4E /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */ -#define AVF_SR_VPD_MODULE_MAX_SIZE 1024 -#define AVF_SR_PCIE_ALT_MODULE_MAX_SIZE 1024 -#define AVF_SR_CONTROL_WORD_1_SHIFT 0x06 -#define AVF_SR_CONTROL_WORD_1_MASK (0x03 << AVF_SR_CONTROL_WORD_1_SHIFT) -#define AVF_SR_CONTROL_WORD_1_NVM_BANK_VALID BIT(5) -#define 
AVF_SR_NVM_MAP_STRUCTURE_TYPE BIT(12) -#define AVF_PTR_TYPE BIT(15) +#define IAVF_SR_VPD_MODULE_MAX_SIZE 1024 +#define IAVF_SR_PCIE_ALT_MODULE_MAX_SIZE 1024 +#define IAVF_SR_CONTROL_WORD_1_SHIFT 0x06 +#define IAVF_SR_CONTROL_WORD_1_MASK (0x03 << IAVF_SR_CONTROL_WORD_1_SHIFT) +#define IAVF_SR_CONTROL_WORD_1_NVM_BANK_VALID BIT(5) +#define IAVF_SR_NVM_MAP_STRUCTURE_TYPE BIT(12) +#define IAVF_PTR_TYPE BIT(15) /* Shadow RAM related */ -#define AVF_SR_SECTOR_SIZE_IN_WORDS 0x800 -#define AVF_SR_BUF_ALIGNMENT 4096 -#define AVF_SR_WORDS_IN_1KB 512 +#define IAVF_SR_SECTOR_SIZE_IN_WORDS 0x800 +#define IAVF_SR_BUF_ALIGNMENT 4096 +#define IAVF_SR_WORDS_IN_1KB 512 /* Checksum should be calculated such that after adding all the words, * including the checksum word itself, the sum should be 0xBABA. */ -#define AVF_SR_SW_CHECKSUM_BASE 0xBABA +#define IAVF_SR_SW_CHECKSUM_BASE 0xBABA -#define AVF_SRRD_SRCTL_ATTEMPTS 100000 +#define IAVF_SRRD_SRCTL_ATTEMPTS 100000 -/* FCoE Tx context descriptor - Use the avf_tx_context_desc struct */ +/* FCoE Tx context descriptor - Use the iavf_tx_context_desc struct */ enum i40E_fcoe_tx_ctx_desc_cmd_bits { - AVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND = 0x00, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2 = 0x01, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3 = 0x05, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2 = 0x02, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3 = 0x06, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2 = 0x03, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3 = 0x07, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL = 0x08, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL = 0x09, /* 4 BITS */ - AVF_FCOE_TX_CTX_DESC_RELOFF = 0x10, - AVF_FCOE_TX_CTX_DESC_CLRSEQ = 0x20, - AVF_FCOE_TX_CTX_DESC_DIFENA = 0x40, - AVF_FCOE_TX_CTX_DESC_IL2TAG2 = 0x80 + IAVF_FCOE_TX_CTX_DESC_OPCODE_SINGLE_SEND = 0x00, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS2 = 0x01, /* 4 
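[Side note, not part of the diff: the comment above IAVF_SR_SW_CHECKSUM_BASE states the Shadow RAM rule — summing all words, including the checksum word itself, must yield 0xBABA. A small sketch of that rule follows; the function name is illustrative and this is not the driver's actual NVM checksum routine:]

```c
#include <stddef.h>
#include <stdint.h>

#define IAVF_SR_SW_CHECKSUM_BASE	0xBABA

/* Hypothetical sketch: pick the checksum word so that the 16-bit
 * wraparound sum of all data words plus the checksum word itself
 * equals IAVF_SR_SW_CHECKSUM_BASE. */
static uint16_t sr_checksum_word(const uint16_t *words, size_t nwords)
{
	uint16_t sum = 0;
	size_t i;

	for (i = 0; i < nwords; i++)
		sum = (uint16_t)(sum + words[i]);
	return (uint16_t)(IAVF_SR_SW_CHECKSUM_BASE - sum);
}
```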
BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_TSO_FC_CLASS3 = 0x05, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS2 = 0x02, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_ETSO_FC_CLASS3 = 0x06, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS2 = 0x03, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_DWO_FC_CLASS3 = 0x07, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_DDP_CTX_INVL = 0x08, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_OPCODE_DWO_CTX_INVL = 0x09, /* 4 BITS */ + IAVF_FCOE_TX_CTX_DESC_RELOFF = 0x10, + IAVF_FCOE_TX_CTX_DESC_CLRSEQ = 0x20, + IAVF_FCOE_TX_CTX_DESC_DIFENA = 0x40, + IAVF_FCOE_TX_CTX_DESC_IL2TAG2 = 0x80 }; /* FCoE DIF/DIX Context descriptor */ -struct avf_fcoe_difdix_context_desc { +struct iavf_fcoe_difdix_context_desc { __le64 flags_buff0_buff1_ref; __le64 difapp_msk_bias; }; -#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT 0 -#define AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK (0xFFFULL << \ - AVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT) +#define IAVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT 0 +#define IAVF_FCOE_DIFDIX_CTX_QW0_FLAGS_MASK (0xFFFULL << \ + IAVF_FCOE_DIFDIX_CTX_QW0_FLAGS_SHIFT) -enum avf_fcoe_difdix_ctx_desc_flags_bits { +enum iavf_fcoe_difdix_ctx_desc_flags_bits { /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_RSVD = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_RSVD = 0x0000, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGCHK = 0x0000, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK = 0x0004, + IAVF_FCOE_DIFDIX_CTX_DESC_APPTYPE_TAGNOTCHK = 0x0004, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_GTYPE_OPAQUE = 0x0000, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY = 0x0008, + IAVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY = 0x0008, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG = 0x0010, + IAVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPTAG = 0x0010, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG = 0x0018, + 
IAVF_FCOE_DIFDIX_CTX_DESC_GTYPE_CHKINTEGRITY_APPREFTAG = 0x0018, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_CNST = 0x0000, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK = 0x0020, + IAVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_INC1BLK = 0x0020, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG = 0x0040, + IAVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_APPTAG = 0x0040, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD = 0x0060, + IAVF_FCOE_DIFDIX_CTX_DESC_REFTYPE_RSVD = 0x0060, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_XSUM = 0x0000, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC = 0x0080, + IAVF_FCOE_DIFDIX_CTX_DESC_DIXMODE_CRC = 0x0080, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_UNTAG = 0x0000, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF = 0x0100, + IAVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_BUF = 0x0100, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD = 0x0200, + IAVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_RSVD = 0x0200, /* 2 BITS */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS = 0x0300, + IAVF_FCOE_DIFDIX_CTX_DESC_DIFHOST_EMBDTAGS = 0x0300, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_UNTAG = 0x0000, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG = 0x0400, + IAVF_FCOE_DIFDIX_CTX_DESC_DIFLAN_TAG = 0x0400, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B = 0x0000, + IAVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_512B = 0x0000, /* 1 BIT */ - AVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K = 0x0800 + IAVF_FCOE_DIFDIX_CTX_DESC_DIFBLK_4K = 0x0800 }; -#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT 12 -#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK (0x3FFULL << \ - AVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT) +#define IAVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT 12 +#define IAVF_FCOE_DIFDIX_CTX_QW0_BUFF0_MASK (0x3FFULL << \ + IAVF_FCOE_DIFDIX_CTX_QW0_BUFF0_SHIFT) -#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT 
22 -#define AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK (0x3FFULL << \ - AVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT) +#define IAVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT 22 +#define IAVF_FCOE_DIFDIX_CTX_QW0_BUFF1_MASK (0x3FFULL << \ + IAVF_FCOE_DIFDIX_CTX_QW0_BUFF1_SHIFT) -#define AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT 32 -#define AVF_FCOE_DIFDIX_CTX_QW0_REF_MASK (0xFFFFFFFFULL << \ - AVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT) +#define IAVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT 32 +#define IAVF_FCOE_DIFDIX_CTX_QW0_REF_MASK (0xFFFFFFFFULL << \ + IAVF_FCOE_DIFDIX_CTX_QW0_REF_SHIFT) -#define AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT 0 -#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MASK (0xFFFFULL << \ - AVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT) +#define IAVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT 0 +#define IAVF_FCOE_DIFDIX_CTX_QW1_APP_MASK (0xFFFFULL << \ + IAVF_FCOE_DIFDIX_CTX_QW1_APP_SHIFT) -#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT 16 -#define AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK (0xFFFFULL << \ - AVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT) +#define IAVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT 16 +#define IAVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_MASK (0xFFFFULL << \ + IAVF_FCOE_DIFDIX_CTX_QW1_APP_MSK_SHIFT) -#define AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT 32 -#define AVF_FCOE_DIFDIX_CTX_QW0_REF_BIAS_MASK (0xFFFFFFFFULL << \ - AVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT) +#define IAVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT 32 +#define IAVF_FCOE_DIFDIX_CTX_QW0_REF_BIAS_MASK (0xFFFFFFFFULL << \ + IAVF_FCOE_DIFDIX_CTX_QW1_REF_BIAS_SHIFT) /* FCoE DIF/DIX Buffers descriptor */ -struct avf_fcoe_difdix_buffers_desc { +struct iavf_fcoe_difdix_buffers_desc { __le64 buff_addr0; __le64 buff_addr1; }; /* FCoE DDP Context descriptor */ -struct avf_fcoe_ddp_context_desc { +struct iavf_fcoe_ddp_context_desc { __le64 rsvd; __le64 type_cmd_foff_lsize; }; -#define AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT 0 -#define AVF_FCOE_DDP_CTX_QW1_DTYPE_MASK (0xFULL << \ - AVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT) +#define IAVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT 0 +#define IAVF_FCOE_DDP_CTX_QW1_DTYPE_MASK (0xFULL << 
\ + IAVF_FCOE_DDP_CTX_QW1_DTYPE_SHIFT) -#define AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT 4 -#define AVF_FCOE_DDP_CTX_QW1_CMD_MASK (0xFULL << \ - AVF_FCOE_DDP_CTX_QW1_CMD_SHIFT) +#define IAVF_FCOE_DDP_CTX_QW1_CMD_SHIFT 4 +#define IAVF_FCOE_DDP_CTX_QW1_CMD_MASK (0xFULL << \ + IAVF_FCOE_DDP_CTX_QW1_CMD_SHIFT) -enum avf_fcoe_ddp_ctx_desc_cmd_bits { - AVF_FCOE_DDP_CTX_DESC_BSIZE_512B = 0x00, /* 2 BITS */ - AVF_FCOE_DDP_CTX_DESC_BSIZE_4K = 0x01, /* 2 BITS */ - AVF_FCOE_DDP_CTX_DESC_BSIZE_8K = 0x02, /* 2 BITS */ - AVF_FCOE_DDP_CTX_DESC_BSIZE_16K = 0x03, /* 2 BITS */ - AVF_FCOE_DDP_CTX_DESC_DIFENA = 0x04, /* 1 BIT */ - AVF_FCOE_DDP_CTX_DESC_LASTSEQH = 0x08, /* 1 BIT */ +enum iavf_fcoe_ddp_ctx_desc_cmd_bits { + IAVF_FCOE_DDP_CTX_DESC_BSIZE_512B = 0x00, /* 2 BITS */ + IAVF_FCOE_DDP_CTX_DESC_BSIZE_4K = 0x01, /* 2 BITS */ + IAVF_FCOE_DDP_CTX_DESC_BSIZE_8K = 0x02, /* 2 BITS */ + IAVF_FCOE_DDP_CTX_DESC_BSIZE_16K = 0x03, /* 2 BITS */ + IAVF_FCOE_DDP_CTX_DESC_DIFENA = 0x04, /* 1 BIT */ + IAVF_FCOE_DDP_CTX_DESC_LASTSEQH = 0x08, /* 1 BIT */ }; -#define AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT 16 -#define AVF_FCOE_DDP_CTX_QW1_FOFF_MASK (0x3FFFULL << \ - AVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT) +#define IAVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT 16 +#define IAVF_FCOE_DDP_CTX_QW1_FOFF_MASK (0x3FFFULL << \ + IAVF_FCOE_DDP_CTX_QW1_FOFF_SHIFT) -#define AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT 32 -#define AVF_FCOE_DDP_CTX_QW1_LSIZE_MASK (0x3FFFULL << \ - AVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT) +#define IAVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT 32 +#define IAVF_FCOE_DDP_CTX_QW1_LSIZE_MASK (0x3FFFULL << \ + IAVF_FCOE_DDP_CTX_QW1_LSIZE_SHIFT) /* FCoE DDP/DWO Queue Context descriptor */ -struct avf_fcoe_queue_context_desc { +struct iavf_fcoe_queue_context_desc { __le64 dmaindx_fbase; /* 0:11 DMAINDX, 12:63 FBASE */ __le64 flen_tph; /* 0:12 FLEN, 13:15 TPH */ }; -#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT 0 -#define AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK (0xFFFULL << \ - AVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT) +#define 
IAVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT 0 +#define IAVF_FCOE_QUEUE_CTX_QW0_DMAINDX_MASK (0xFFFULL << \ + IAVF_FCOE_QUEUE_CTX_QW0_DMAINDX_SHIFT) -#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT 12 -#define AVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK (0xFFFFFFFFFFFFFULL << \ - AVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT) +#define IAVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT 12 +#define IAVF_FCOE_QUEUE_CTX_QW0_FBASE_MASK (0xFFFFFFFFFFFFFULL << \ + IAVF_FCOE_QUEUE_CTX_QW0_FBASE_SHIFT) -#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT 0 -#define AVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK (0x1FFFULL << \ - AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT) +#define IAVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT 0 +#define IAVF_FCOE_QUEUE_CTX_QW1_FLEN_MASK (0x1FFFULL << \ + IAVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT) -#define AVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT 13 -#define AVF_FCOE_QUEUE_CTX_QW1_TPH_MASK (0x7ULL << \ - AVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT) +#define IAVF_FCOE_QUEUE_CTX_QW1_TPH_SHIFT 13 +#define IAVF_FCOE_QUEUE_CTX_QW1_TPH_MASK (0x7ULL << \ + IAVF_FCOE_QUEUE_CTX_QW1_FLEN_SHIFT) -enum avf_fcoe_queue_ctx_desc_tph_bits { - AVF_FCOE_QUEUE_CTX_DESC_TPHRDESC = 0x1, - AVF_FCOE_QUEUE_CTX_DESC_TPHDATA = 0x2 +enum iavf_fcoe_queue_ctx_desc_tph_bits { + IAVF_FCOE_QUEUE_CTX_DESC_TPHRDESC = 0x1, + IAVF_FCOE_QUEUE_CTX_DESC_TPHDATA = 0x2 }; -#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT 30 -#define AVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK (0x3ULL << \ - AVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT) +#define IAVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT 30 +#define IAVF_FCOE_QUEUE_CTX_QW1_RECIPE_MASK (0x3ULL << \ + IAVF_FCOE_QUEUE_CTX_QW1_RECIPE_SHIFT) /* FCoE DDP/DWO Filter Context descriptor */ -struct avf_fcoe_filter_context_desc { +struct iavf_fcoe_filter_context_desc { __le32 param; __le16 seqn; @@ -1731,110 +1731,110 @@ struct avf_fcoe_filter_context_desc { __le64 flags_rsvd_lanq; }; -#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT 4 -#define AVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK (0xFFF << \ - AVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT) +#define IAVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT 4 
+#define IAVF_FCOE_FILTER_CTX_QW0_DMAINDX_MASK (0xFFF << \ + IAVF_FCOE_FILTER_CTX_QW0_DMAINDX_SHIFT) -enum avf_fcoe_filter_ctx_desc_flags_bits { - AVF_FCOE_FILTER_CTX_DESC_CTYP_DDP = 0x00, - AVF_FCOE_FILTER_CTX_DESC_CTYP_DWO = 0x01, - AVF_FCOE_FILTER_CTX_DESC_ENODE_INIT = 0x00, - AVF_FCOE_FILTER_CTX_DESC_ENODE_RSP = 0x02, - AVF_FCOE_FILTER_CTX_DESC_FC_CLASS2 = 0x00, - AVF_FCOE_FILTER_CTX_DESC_FC_CLASS3 = 0x04 +enum iavf_fcoe_filter_ctx_desc_flags_bits { + IAVF_FCOE_FILTER_CTX_DESC_CTYP_DDP = 0x00, + IAVF_FCOE_FILTER_CTX_DESC_CTYP_DWO = 0x01, + IAVF_FCOE_FILTER_CTX_DESC_ENODE_INIT = 0x00, + IAVF_FCOE_FILTER_CTX_DESC_ENODE_RSP = 0x02, + IAVF_FCOE_FILTER_CTX_DESC_FC_CLASS2 = 0x00, + IAVF_FCOE_FILTER_CTX_DESC_FC_CLASS3 = 0x04 }; -#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT 0 -#define AVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK (0xFFULL << \ - AVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT) +#define IAVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT 0 +#define IAVF_FCOE_FILTER_CTX_QW1_FLAGS_MASK (0xFFULL << \ + IAVF_FCOE_FILTER_CTX_QW1_FLAGS_SHIFT) -#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT 8 -#define AVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK (0x3FULL << \ - AVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT) +#define IAVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT 8 +#define IAVF_FCOE_FILTER_CTX_QW1_PCTYPE_MASK (0x3FULL << \ + IAVF_FCOE_FILTER_CTX_QW1_PCTYPE_SHIFT) -#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT 53 -#define AVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK (0x7FFULL << \ - AVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT) +#define IAVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT 53 +#define IAVF_FCOE_FILTER_CTX_QW1_LANQINDX_MASK (0x7FFULL << \ + IAVF_FCOE_FILTER_CTX_QW1_LANQINDX_SHIFT) -enum avf_switch_element_types { - AVF_SWITCH_ELEMENT_TYPE_MAC = 1, - AVF_SWITCH_ELEMENT_TYPE_PF = 2, - AVF_SWITCH_ELEMENT_TYPE_VF = 3, - AVF_SWITCH_ELEMENT_TYPE_EMP = 4, - AVF_SWITCH_ELEMENT_TYPE_BMC = 6, - AVF_SWITCH_ELEMENT_TYPE_PE = 16, - AVF_SWITCH_ELEMENT_TYPE_VEB = 17, - AVF_SWITCH_ELEMENT_TYPE_PA = 18, - AVF_SWITCH_ELEMENT_TYPE_VSI = 19, +enum 
iavf_switch_element_types { + IAVF_SWITCH_ELEMENT_TYPE_MAC = 1, + IAVF_SWITCH_ELEMENT_TYPE_PF = 2, + IAVF_SWITCH_ELEMENT_TYPE_VF = 3, + IAVF_SWITCH_ELEMENT_TYPE_EMP = 4, + IAVF_SWITCH_ELEMENT_TYPE_BMC = 6, + IAVF_SWITCH_ELEMENT_TYPE_PE = 16, + IAVF_SWITCH_ELEMENT_TYPE_VEB = 17, + IAVF_SWITCH_ELEMENT_TYPE_PA = 18, + IAVF_SWITCH_ELEMENT_TYPE_VSI = 19, }; /* Supported EtherType filters */ -enum avf_ether_type_index { - AVF_ETHER_TYPE_1588 = 0, - AVF_ETHER_TYPE_FIP = 1, - AVF_ETHER_TYPE_OUI_EXTENDED = 2, - AVF_ETHER_TYPE_MAC_CONTROL = 3, - AVF_ETHER_TYPE_LLDP = 4, - AVF_ETHER_TYPE_EVB_PROTOCOL1 = 5, - AVF_ETHER_TYPE_EVB_PROTOCOL2 = 6, - AVF_ETHER_TYPE_QCN_CNM = 7, - AVF_ETHER_TYPE_8021X = 8, - AVF_ETHER_TYPE_ARP = 9, - AVF_ETHER_TYPE_RSV1 = 10, - AVF_ETHER_TYPE_RSV2 = 11, +enum iavf_ether_type_index { + IAVF_ETHER_TYPE_1588 = 0, + IAVF_ETHER_TYPE_FIP = 1, + IAVF_ETHER_TYPE_OUI_EXTENDED = 2, + IAVF_ETHER_TYPE_MAC_CONTROL = 3, + IAVF_ETHER_TYPE_LLDP = 4, + IAVF_ETHER_TYPE_EVB_PROTOCOL1 = 5, + IAVF_ETHER_TYPE_EVB_PROTOCOL2 = 6, + IAVF_ETHER_TYPE_QCN_CNM = 7, + IAVF_ETHER_TYPE_8021X = 8, + IAVF_ETHER_TYPE_ARP = 9, + IAVF_ETHER_TYPE_RSV1 = 10, + IAVF_ETHER_TYPE_RSV2 = 11, }; /* Filter context base size is 1K */ -#define AVF_HASH_FILTER_BASE_SIZE 1024 +#define IAVF_HASH_FILTER_BASE_SIZE 1024 /* Supported Hash filter values */ -enum avf_hash_filter_size { - AVF_HASH_FILTER_SIZE_1K = 0, - AVF_HASH_FILTER_SIZE_2K = 1, - AVF_HASH_FILTER_SIZE_4K = 2, - AVF_HASH_FILTER_SIZE_8K = 3, - AVF_HASH_FILTER_SIZE_16K = 4, - AVF_HASH_FILTER_SIZE_32K = 5, - AVF_HASH_FILTER_SIZE_64K = 6, - AVF_HASH_FILTER_SIZE_128K = 7, - AVF_HASH_FILTER_SIZE_256K = 8, - AVF_HASH_FILTER_SIZE_512K = 9, - AVF_HASH_FILTER_SIZE_1M = 10, +enum iavf_hash_filter_size { + IAVF_HASH_FILTER_SIZE_1K = 0, + IAVF_HASH_FILTER_SIZE_2K = 1, + IAVF_HASH_FILTER_SIZE_4K = 2, + IAVF_HASH_FILTER_SIZE_8K = 3, + IAVF_HASH_FILTER_SIZE_16K = 4, + IAVF_HASH_FILTER_SIZE_32K = 5, + IAVF_HASH_FILTER_SIZE_64K = 6, + 
IAVF_HASH_FILTER_SIZE_128K = 7, + IAVF_HASH_FILTER_SIZE_256K = 8, + IAVF_HASH_FILTER_SIZE_512K = 9, + IAVF_HASH_FILTER_SIZE_1M = 10, }; /* DMA context base size is 0.5K */ -#define AVF_DMA_CNTX_BASE_SIZE 512 +#define IAVF_DMA_CNTX_BASE_SIZE 512 /* Supported DMA context values */ -enum avf_dma_cntx_size { - AVF_DMA_CNTX_SIZE_512 = 0, - AVF_DMA_CNTX_SIZE_1K = 1, - AVF_DMA_CNTX_SIZE_2K = 2, - AVF_DMA_CNTX_SIZE_4K = 3, - AVF_DMA_CNTX_SIZE_8K = 4, - AVF_DMA_CNTX_SIZE_16K = 5, - AVF_DMA_CNTX_SIZE_32K = 6, - AVF_DMA_CNTX_SIZE_64K = 7, - AVF_DMA_CNTX_SIZE_128K = 8, - AVF_DMA_CNTX_SIZE_256K = 9, +enum iavf_dma_cntx_size { + IAVF_DMA_CNTX_SIZE_512 = 0, + IAVF_DMA_CNTX_SIZE_1K = 1, + IAVF_DMA_CNTX_SIZE_2K = 2, + IAVF_DMA_CNTX_SIZE_4K = 3, + IAVF_DMA_CNTX_SIZE_8K = 4, + IAVF_DMA_CNTX_SIZE_16K = 5, + IAVF_DMA_CNTX_SIZE_32K = 6, + IAVF_DMA_CNTX_SIZE_64K = 7, + IAVF_DMA_CNTX_SIZE_128K = 8, + IAVF_DMA_CNTX_SIZE_256K = 9, }; /* Supported Hash look up table (LUT) sizes */ -enum avf_hash_lut_size { - AVF_HASH_LUT_SIZE_128 = 0, - AVF_HASH_LUT_SIZE_512 = 1, +enum iavf_hash_lut_size { + IAVF_HASH_LUT_SIZE_128 = 0, + IAVF_HASH_LUT_SIZE_512 = 1, }; /* Structure to hold a per PF filter control settings */ -struct avf_filter_control_settings { +struct iavf_filter_control_settings { /* number of PE Quad Hash filter buckets */ - enum avf_hash_filter_size pe_filt_num; + enum iavf_hash_filter_size pe_filt_num; /* number of PE Quad Hash contexts */ - enum avf_dma_cntx_size pe_cntx_num; + enum iavf_dma_cntx_size pe_cntx_num; /* number of FCoE filter buckets */ - enum avf_hash_filter_size fcoe_filt_num; + enum iavf_hash_filter_size fcoe_filt_num; /* number of FCoE DDP contexts */ - enum avf_dma_cntx_size fcoe_cntx_num; + enum iavf_dma_cntx_size fcoe_cntx_num; /* size of the Hash LUT */ - enum avf_hash_lut_size hash_lut_size; + enum iavf_hash_lut_size hash_lut_size; /* enable FDIR filters for PF and its VFs */ bool enable_fdir; /* enable Ethertype filters for PF and its VFs */ @@ -1844,24 +1844,24 
@@ struct avf_filter_control_settings { }; /* Structure to hold device level control filter counts */ -struct avf_control_filter_stats { +struct iavf_control_filter_stats { u16 mac_etype_used; /* Used perfect match MAC/EtherType filters */ u16 etype_used; /* Used perfect EtherType filters */ u16 mac_etype_free; /* Un-used perfect match MAC/EtherType filters */ u16 etype_free; /* Un-used perfect EtherType filters */ }; -enum avf_reset_type { - AVF_RESET_POR = 0, - AVF_RESET_CORER = 1, - AVF_RESET_GLOBR = 2, - AVF_RESET_EMPR = 3, +enum iavf_reset_type { + IAVF_RESET_POR = 0, + IAVF_RESET_CORER = 1, + IAVF_RESET_GLOBR = 2, + IAVF_RESET_EMPR = 3, }; /* IEEE 802.1AB LLDP Agent Variables from NVM */ -#define AVF_NVM_LLDP_CFG_PTR 0x06 -#define AVF_SR_LLDP_CFG_PTR 0x31 -struct avf_lldp_variables { +#define IAVF_NVM_LLDP_CFG_PTR 0x06 +#define IAVF_SR_LLDP_CFG_PTR 0x31 +struct iavf_lldp_variables { u16 length; u16 adminstatus; u16 msgfasttx; @@ -1872,111 +1872,111 @@ struct avf_lldp_variables { }; /* Offsets into Alternate Ram */ -#define AVF_ALT_STRUCT_FIRST_PF_OFFSET 0 /* in dwords */ -#define AVF_ALT_STRUCT_DWORDS_PER_PF 64 /* in dwords */ -#define AVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET 0xD /* in dwords */ -#define AVF_ALT_STRUCT_USER_PRIORITY_OFFSET 0xC /* in dwords */ -#define AVF_ALT_STRUCT_MIN_BW_OFFSET 0xE /* in dwords */ -#define AVF_ALT_STRUCT_MAX_BW_OFFSET 0xF /* in dwords */ +#define IAVF_ALT_STRUCT_FIRST_PF_OFFSET 0 /* in dwords */ +#define IAVF_ALT_STRUCT_DWORDS_PER_PF 64 /* in dwords */ +#define IAVF_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET 0xD /* in dwords */ +#define IAVF_ALT_STRUCT_USER_PRIORITY_OFFSET 0xC /* in dwords */ +#define IAVF_ALT_STRUCT_MIN_BW_OFFSET 0xE /* in dwords */ +#define IAVF_ALT_STRUCT_MAX_BW_OFFSET 0xF /* in dwords */ /* Alternate Ram Bandwidth Masks */ -#define AVF_ALT_BW_VALUE_MASK 0xFF -#define AVF_ALT_BW_RELATIVE_MASK 0x40000000 -#define AVF_ALT_BW_VALID_MASK 0x80000000 +#define IAVF_ALT_BW_VALUE_MASK 0xFF +#define IAVF_ALT_BW_RELATIVE_MASK 
0x40000000 +#define IAVF_ALT_BW_VALID_MASK 0x80000000 /* RSS Hash Table Size */ -#define AVF_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000 +#define IAVF_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000 /* INPUT SET MASK for RSS, flow director, and flexible payload */ -#define AVF_L3_SRC_SHIFT 47 -#define AVF_L3_SRC_MASK (0x3ULL << AVF_L3_SRC_SHIFT) -#define AVF_L3_V6_SRC_SHIFT 43 -#define AVF_L3_V6_SRC_MASK (0xFFULL << AVF_L3_V6_SRC_SHIFT) -#define AVF_L3_DST_SHIFT 35 -#define AVF_L3_DST_MASK (0x3ULL << AVF_L3_DST_SHIFT) -#define AVF_L3_V6_DST_SHIFT 35 -#define AVF_L3_V6_DST_MASK (0xFFULL << AVF_L3_V6_DST_SHIFT) -#define AVF_L4_SRC_SHIFT 34 -#define AVF_L4_SRC_MASK (0x1ULL << AVF_L4_SRC_SHIFT) -#define AVF_L4_DST_SHIFT 33 -#define AVF_L4_DST_MASK (0x1ULL << AVF_L4_DST_SHIFT) -#define AVF_VERIFY_TAG_SHIFT 31 -#define AVF_VERIFY_TAG_MASK (0x3ULL << AVF_VERIFY_TAG_SHIFT) - -#define AVF_FLEX_50_SHIFT 13 -#define AVF_FLEX_50_MASK (0x1ULL << AVF_FLEX_50_SHIFT) -#define AVF_FLEX_51_SHIFT 12 -#define AVF_FLEX_51_MASK (0x1ULL << AVF_FLEX_51_SHIFT) -#define AVF_FLEX_52_SHIFT 11 -#define AVF_FLEX_52_MASK (0x1ULL << AVF_FLEX_52_SHIFT) -#define AVF_FLEX_53_SHIFT 10 -#define AVF_FLEX_53_MASK (0x1ULL << AVF_FLEX_53_SHIFT) -#define AVF_FLEX_54_SHIFT 9 -#define AVF_FLEX_54_MASK (0x1ULL << AVF_FLEX_54_SHIFT) -#define AVF_FLEX_55_SHIFT 8 -#define AVF_FLEX_55_MASK (0x1ULL << AVF_FLEX_55_SHIFT) -#define AVF_FLEX_56_SHIFT 7 -#define AVF_FLEX_56_MASK (0x1ULL << AVF_FLEX_56_SHIFT) -#define AVF_FLEX_57_SHIFT 6 -#define AVF_FLEX_57_MASK (0x1ULL << AVF_FLEX_57_SHIFT) +#define IAVF_L3_SRC_SHIFT 47 +#define IAVF_L3_SRC_MASK (0x3ULL << IAVF_L3_SRC_SHIFT) +#define IAVF_L3_V6_SRC_SHIFT 43 +#define IAVF_L3_V6_SRC_MASK (0xFFULL << IAVF_L3_V6_SRC_SHIFT) +#define IAVF_L3_DST_SHIFT 35 +#define IAVF_L3_DST_MASK (0x3ULL << IAVF_L3_DST_SHIFT) +#define IAVF_L3_V6_DST_SHIFT 35 +#define IAVF_L3_V6_DST_MASK (0xFFULL << IAVF_L3_V6_DST_SHIFT) +#define IAVF_L4_SRC_SHIFT 34 +#define IAVF_L4_SRC_MASK (0x1ULL << IAVF_L4_SRC_SHIFT) 
+#define IAVF_L4_DST_SHIFT 33 +#define IAVF_L4_DST_MASK (0x1ULL << IAVF_L4_DST_SHIFT) +#define IAVF_VERIFY_TAG_SHIFT 31 +#define IAVF_VERIFY_TAG_MASK (0x3ULL << IAVF_VERIFY_TAG_SHIFT) + +#define IAVF_FLEX_50_SHIFT 13 +#define IAVF_FLEX_50_MASK (0x1ULL << IAVF_FLEX_50_SHIFT) +#define IAVF_FLEX_51_SHIFT 12 +#define IAVF_FLEX_51_MASK (0x1ULL << IAVF_FLEX_51_SHIFT) +#define IAVF_FLEX_52_SHIFT 11 +#define IAVF_FLEX_52_MASK (0x1ULL << IAVF_FLEX_52_SHIFT) +#define IAVF_FLEX_53_SHIFT 10 +#define IAVF_FLEX_53_MASK (0x1ULL << IAVF_FLEX_53_SHIFT) +#define IAVF_FLEX_54_SHIFT 9 +#define IAVF_FLEX_54_MASK (0x1ULL << IAVF_FLEX_54_SHIFT) +#define IAVF_FLEX_55_SHIFT 8 +#define IAVF_FLEX_55_MASK (0x1ULL << IAVF_FLEX_55_SHIFT) +#define IAVF_FLEX_56_SHIFT 7 +#define IAVF_FLEX_56_MASK (0x1ULL << IAVF_FLEX_56_SHIFT) +#define IAVF_FLEX_57_SHIFT 6 +#define IAVF_FLEX_57_MASK (0x1ULL << IAVF_FLEX_57_SHIFT) /* Version format for Dynamic Device Personalization(DDP) */ -struct avf_ddp_version { +struct iavf_ddp_version { u8 major; u8 minor; u8 update; u8 draft; }; -#define AVF_DDP_NAME_SIZE 32 +#define IAVF_DDP_NAME_SIZE 32 /* Package header */ -struct avf_package_header { - struct avf_ddp_version version; +struct iavf_package_header { + struct iavf_ddp_version version; u32 segment_count; u32 segment_offset[1]; }; /* Generic segment header */ -struct avf_generic_seg_header { +struct iavf_generic_seg_header { #define SEGMENT_TYPE_METADATA 0x00000001 #define SEGMENT_TYPE_NOTES 0x00000002 -#define SEGMENT_TYPE_AVF 0x00000011 +#define SEGMENT_TYPE_IAVF 0x00000011 #define SEGMENT_TYPE_X722 0x00000012 u32 type; - struct avf_ddp_version version; + struct iavf_ddp_version version; u32 size; - char name[AVF_DDP_NAME_SIZE]; + char name[IAVF_DDP_NAME_SIZE]; }; -struct avf_metadata_segment { - struct avf_generic_seg_header header; - struct avf_ddp_version version; -#define AVF_DDP_TRACKID_RDONLY 0 -#define AVF_DDP_TRACKID_INVALID 0xFFFFFFFF +struct iavf_metadata_segment { + struct iavf_generic_seg_header 
header;
+	struct iavf_ddp_version version;
+#define IAVF_DDP_TRACKID_RDONLY 0
+#define IAVF_DDP_TRACKID_INVALID 0xFFFFFFFF
 	u32 track_id;
-	char name[AVF_DDP_NAME_SIZE];
+	char name[IAVF_DDP_NAME_SIZE];
 };

-struct avf_device_id_entry {
+struct iavf_device_id_entry {
 	u32 vendor_dev_id;
 	u32 sub_vendor_dev_id;
 };

-struct avf_profile_segment {
-	struct avf_generic_seg_header header;
-	struct avf_ddp_version version;
-	char name[AVF_DDP_NAME_SIZE];
+struct iavf_profile_segment {
+	struct iavf_generic_seg_header header;
+	struct iavf_ddp_version version;
+	char name[IAVF_DDP_NAME_SIZE];
 	u32 device_table_count;
-	struct avf_device_id_entry device_table[1];
+	struct iavf_device_id_entry device_table[1];
 };

-struct avf_section_table {
+struct iavf_section_table {
 	u32 section_count;
 	u32 section_offset[1];
 };

-struct avf_profile_section_header {
+struct iavf_profile_section_header {
 	u16 tbl_size;
 	u16 data_end;
 	struct {
@@ -1996,7 +1996,7 @@ struct avf_profile_section_header {
 	} section;
 };

-struct avf_profile_tlv_section_record {
+struct iavf_profile_tlv_section_record {
 	u8 rtype;
 	u8 type;
 	u16 len;
@@ -2004,7 +2004,7 @@
 };

 /* Generic AQ section in proflie */
-struct avf_profile_aq_section {
+struct iavf_profile_aq_section {
 	u16 opcode;
 	u16 flags;
 	u8 param[16];
@@ -2012,13 +2012,13 @@
 	u8 data[1];
 };

-struct avf_profile_info {
+struct iavf_profile_info {
 	u32 track_id;
-	struct avf_ddp_version version;
+	struct iavf_ddp_version version;
 	u8 op;
-#define AVF_DDP_ADD_TRACKID 0x01
-#define AVF_DDP_REMOVE_TRACKID 0x02
+#define IAVF_DDP_ADD_TRACKID 0x01
+#define IAVF_DDP_REMOVE_TRACKID 0x02
 	u8 reserved[7];
-	u8 name[AVF_DDP_NAME_SIZE];
+	u8 name[IAVF_DDP_NAME_SIZE];
 };

-#endif /* _AVF_TYPE_H_ */
+#endif /* _IAVF_TYPE_H_ */
diff --git a/drivers/net/iavf/base/virtchnl.h b/drivers/net/iavf/base/virtchnl.h
index 167518f0d..13bfdb86b 100644
--- a/drivers/net/iavf/base/virtchnl.h
+++ b/drivers/net/iavf/base/virtchnl.h
@@ -94,7 +94,7 @@ enum virtchnl_link_speed {
 };

 /* for hsplit_0 field of Rx HMC context */
-/* deprecated with AVF 1.0 */
+/* deprecated with IAVF 1.0 */
 enum virtchnl_rx_hsplit {
 	VIRTCHNL_RX_HSPLIT_NO_SPLIT = 0,
 	VIRTCHNL_RX_HSPLIT_SPLIT_L2 = 1,
@@ -289,9 +289,9 @@ struct virtchnl_txq_info {
 	u16 vsi_id;
 	u16 queue_id;
 	u16 ring_len; /* number of descriptors, multiple of 8 */
-	u16 headwb_enabled; /* deprecated with AVF 1.0 */
+	u16 headwb_enabled; /* deprecated with IAVF 1.0 */
 	u64 dma_ring_addr;
-	u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+	u64 dma_headwb_addr; /* deprecated with IAVF 1.0 */
 };

 VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
@@ -308,12 +308,12 @@ struct virtchnl_rxq_info {
 	u16 queue_id;
 	u32 ring_len; /* number of descriptors, multiple of 32 */
 	u16 hdr_size;
-	u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+	u16 splithdr_enabled; /* deprecated with IAVF 1.0 */
 	u32 databuffer_size;
 	u32 max_pkt_size;
 	u32 pad1;
 	u64 dma_ring_addr;
-	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with AVF 1.0 */
+	enum virtchnl_rx_hsplit rx_split_pos; /* deprecated with IAVF 1.0 */
 	u32 pad2;
 };

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index dcf8d1c72..e6e3e8d30 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -2,69 +2,69 @@
  * Copyright(c) 2017 Intel Corporation
  */

-#ifndef _AVF_ETHDEV_H_
-#define _AVF_ETHDEV_H_
+#ifndef _IAVF_ETHDEV_H_
+#define _IAVF_ETHDEV_H_

 #include 

-#define AVF_AQ_LEN 32
-#define AVF_AQ_BUF_SZ 4096
-#define AVF_RESET_WAIT_CNT 50
-#define AVF_BUF_SIZE_MIN 1024
-#define AVF_FRAME_SIZE_MAX 9728
-#define AVF_QUEUE_BASE_ADDR_UNIT 128
+#define IAVF_AQ_LEN 32
+#define IAVF_AQ_BUF_SZ 4096
+#define IAVF_RESET_WAIT_CNT 50
+#define IAVF_BUF_SIZE_MIN 1024
+#define IAVF_FRAME_SIZE_MAX 9728
+#define IAVF_QUEUE_BASE_ADDR_UNIT 128

-#define AVF_MAX_NUM_QUEUES 16
+#define IAVF_MAX_NUM_QUEUES 16

-#define AVF_NUM_MACADDR_MAX 64
+#define IAVF_NUM_MACADDR_MAX 64

-#define AVF_DEFAULT_RX_PTHRESH 8
-#define AVF_DEFAULT_RX_HTHRESH 8
-#define AVF_DEFAULT_RX_WTHRESH 0
+#define IAVF_DEFAULT_RX_PTHRESH 8
+#define IAVF_DEFAULT_RX_HTHRESH 8
+#define IAVF_DEFAULT_RX_WTHRESH 0

-#define AVF_DEFAULT_RX_FREE_THRESH 32
+#define IAVF_DEFAULT_RX_FREE_THRESH 32

-#define AVF_DEFAULT_TX_PTHRESH 32
-#define AVF_DEFAULT_TX_HTHRESH 0
-#define AVF_DEFAULT_TX_WTHRESH 0
+#define IAVF_DEFAULT_TX_PTHRESH 32
+#define IAVF_DEFAULT_TX_HTHRESH 0
+#define IAVF_DEFAULT_TX_WTHRESH 0

-#define AVF_DEFAULT_TX_FREE_THRESH 32
-#define AVF_DEFAULT_TX_RS_THRESH 32
+#define IAVF_DEFAULT_TX_FREE_THRESH 32
+#define IAVF_DEFAULT_TX_RS_THRESH 32

-#define AVF_BASIC_OFFLOAD_CAPS ( \
+#define IAVF_BASIC_OFFLOAD_CAPS ( \
 	VF_BASE_MODE_OFFLOADS | \
 	VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)

-#define AVF_RSS_OFFLOAD_ALL ( \
+#define IAVF_RSS_OFFLOAD_ALL ( \
 	ETH_RSS_FRAG_IPV4 | \
 	ETH_RSS_NONFRAG_IPV4_TCP | \
 	ETH_RSS_NONFRAG_IPV4_UDP | \
 	ETH_RSS_NONFRAG_IPV4_SCTP | \
 	ETH_RSS_NONFRAG_IPV4_OTHER)

-#define AVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
-#define AVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
+#define IAVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
+#define IAVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET

 /* Default queue interrupt throttling time in microseconds */
-#define AVF_ITR_INDEX_DEFAULT 0
-#define AVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
-#define AVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */
+#define IAVF_ITR_INDEX_DEFAULT 0
+#define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */

 /* The overhead from MTU to max frame size.
  * Considering QinQ packet, the VLAN tag needs to be counted twice.
  */
-#define AVF_VLAN_TAG_SIZE 4
-#define AVF_ETH_OVERHEAD \
-	(ETHER_HDR_LEN + ETHER_CRC_LEN + AVF_VLAN_TAG_SIZE * 2)
+#define IAVF_VLAN_TAG_SIZE 4
+#define IAVF_ETH_OVERHEAD \
+	(ETHER_HDR_LEN + ETHER_CRC_LEN + IAVF_VLAN_TAG_SIZE * 2)

-struct avf_adapter;
-struct avf_rx_queue;
-struct avf_tx_queue;
+struct iavf_adapter;
+struct iavf_rx_queue;
+struct iavf_tx_queue;

 /* Structure that defines a VSI, associated with a adapter. */
-struct avf_vsi {
-	struct avf_adapter *adapter; /* Backreference to associated adapter */
+struct iavf_vsi {
+	struct iavf_adapter *adapter; /* Backreference to associated adapter */
 	uint16_t vsi_id;
 	uint16_t nb_qps; /* Number of queue pairs VSI can occupy */
 	uint16_t nb_used_qps; /* Number of queue pairs VSI uses */
@@ -74,10 +74,10 @@ struct avf_vsi {
 };

 /* TODO: is that correct to assume the max number to be 16 ?*/
-#define AVF_MAX_MSIX_VECTORS 16
+#define IAVF_MAX_MSIX_VECTORS 16

 /* Structure to store private data specific for VF instance. */
-struct avf_info {
+struct iavf_info {
 	uint16_t num_queue_pairs;
 	uint16_t max_pkt_len; /* Maximum packet length */
 	uint16_t mac_num; /* Number of MAC addresses */
@@ -97,7 +97,7 @@ struct avf_info {
 	bool link_up;
 	enum virtchnl_link_speed link_speed;

-	struct avf_vsi vsi;
+	struct iavf_vsi vsi;
 	bool vf_reset;
 	uint64_t flags;
@@ -106,16 +106,16 @@
 	uint16_t nb_msix; /* number of MSI-X interrupts on Rx */
 	uint16_t msix_base; /* msix vector base from */
 	/* queue bitmask for each vector */
-	uint16_t rxq_map[AVF_MAX_MSIX_VECTORS];
+	uint16_t rxq_map[IAVF_MAX_MSIX_VECTORS];
 };

-#define AVF_MAX_PKT_TYPE 256
+#define IAVF_MAX_PKT_TYPE 256

 /* Structure to store private data for each VF instance.
*/ -struct avf_adapter { - struct avf_hw hw; +struct iavf_adapter { + struct iavf_hw hw; struct rte_eth_dev *eth_dev; - struct avf_info vf; + struct iavf_info vf; bool rx_bulk_alloc_allowed; /* For vector PMD */ @@ -123,43 +123,43 @@ struct avf_adapter { bool tx_vec_allowed; }; -/* AVF_DEV_PRIVATE_TO */ -#define AVF_DEV_PRIVATE_TO_ADAPTER(adapter) \ - ((struct avf_adapter *)adapter) -#define AVF_DEV_PRIVATE_TO_VF(adapter) \ - (&((struct avf_adapter *)adapter)->vf) -#define AVF_DEV_PRIVATE_TO_HW(adapter) \ - (&((struct avf_adapter *)adapter)->hw) - -/* AVF_VSI_TO */ -#define AVF_VSI_TO_HW(vsi) \ - (&(((struct avf_vsi *)vsi)->adapter->hw)) -#define AVF_VSI_TO_VF(vsi) \ - (&(((struct avf_vsi *)vsi)->adapter->vf)) -#define AVF_VSI_TO_ETH_DEV(vsi) \ - (((struct avf_vsi *)vsi)->adapter->eth_dev) +/* IAVF_DEV_PRIVATE_TO */ +#define IAVF_DEV_PRIVATE_TO_ADAPTER(adapter) \ + ((struct iavf_adapter *)adapter) +#define IAVF_DEV_PRIVATE_TO_VF(adapter) \ + (&((struct iavf_adapter *)adapter)->vf) +#define IAVF_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct iavf_adapter *)adapter)->hw) + +/* IAVF_VSI_TO */ +#define IAVF_VSI_TO_HW(vsi) \ + (&(((struct iavf_vsi *)vsi)->adapter->hw)) +#define IAVF_VSI_TO_VF(vsi) \ + (&(((struct iavf_vsi *)vsi)->adapter->vf)) +#define IAVF_VSI_TO_ETH_DEV(vsi) \ + (((struct iavf_vsi *)vsi)->adapter->eth_dev) static inline void -avf_init_adminq_parameter(struct avf_hw *hw) +iavf_init_adminq_parameter(struct iavf_hw *hw) { - hw->aq.num_arq_entries = AVF_AQ_LEN; - hw->aq.num_asq_entries = AVF_AQ_LEN; - hw->aq.arq_buf_size = AVF_AQ_BUF_SZ; - hw->aq.asq_buf_size = AVF_AQ_BUF_SZ; + hw->aq.num_arq_entries = IAVF_AQ_LEN; + hw->aq.num_asq_entries = IAVF_AQ_LEN; + hw->aq.arq_buf_size = IAVF_AQ_BUF_SZ; + hw->aq.asq_buf_size = IAVF_AQ_BUF_SZ; } static inline uint16_t -avf_calc_itr_interval(int16_t interval) +iavf_calc_itr_interval(int16_t interval) { - if (interval < 0 || interval > AVF_QUEUE_ITR_INTERVAL_MAX) - interval = AVF_QUEUE_ITR_INTERVAL_DEFAULT; + if (interval 
< 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX) + interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT; /* Convert to hardware count, as writing each 1 represents 2 us */ return interval / 2; } /* structure used for sending and checking response of virtchnl ops */ -struct avf_cmd_info { +struct iavf_cmd_info { enum virtchnl_ops ops; uint8_t *in_args; /* buffer for sending */ uint32_t in_args_size; /* buffer size for sending */ @@ -171,7 +171,7 @@ struct avf_cmd_info { * _atomic_set_cmd successfully. */ static inline void -_clear_cmd(struct avf_info *vf) +_clear_cmd(struct iavf_info *vf) { rte_wmb(); vf->pend_cmd = VIRTCHNL_OP_UNKNOWN; @@ -180,7 +180,7 @@ _clear_cmd(struct avf_info *vf) /* Check there is pending cmd in execution. If none, set new command. */ static inline int -_atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops) +_atomic_set_cmd(struct iavf_info *vf, enum virtchnl_ops ops) { int ret = rte_atomic32_cmpset(&vf->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops); @@ -190,27 +190,27 @@ _atomic_set_cmd(struct avf_info *vf, enum virtchnl_ops ops) return !ret; } -int avf_check_api_version(struct avf_adapter *adapter); -int avf_get_vf_resource(struct avf_adapter *adapter); -void avf_handle_virtchnl_msg(struct rte_eth_dev *dev); -int avf_enable_vlan_strip(struct avf_adapter *adapter); -int avf_disable_vlan_strip(struct avf_adapter *adapter); -int avf_switch_queue(struct avf_adapter *adapter, uint16_t qid, +int iavf_check_api_version(struct iavf_adapter *adapter); +int iavf_get_vf_resource(struct iavf_adapter *adapter); +void iavf_handle_virtchnl_msg(struct rte_eth_dev *dev); +int iavf_enable_vlan_strip(struct iavf_adapter *adapter); +int iavf_disable_vlan_strip(struct iavf_adapter *adapter); +int iavf_switch_queue(struct iavf_adapter *adapter, uint16_t qid, bool rx, bool on); -int avf_enable_queues(struct avf_adapter *adapter); -int avf_disable_queues(struct avf_adapter *adapter); -int avf_configure_rss_lut(struct avf_adapter *adapter); -int avf_configure_rss_key(struct 
avf_adapter *adapter); -int avf_configure_queues(struct avf_adapter *adapter); -int avf_config_irq_map(struct avf_adapter *adapter); -void avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add); -int avf_dev_link_update(struct rte_eth_dev *dev, +int iavf_enable_queues(struct iavf_adapter *adapter); +int iavf_disable_queues(struct iavf_adapter *adapter); +int iavf_configure_rss_lut(struct iavf_adapter *adapter); +int iavf_configure_rss_key(struct iavf_adapter *adapter); +int iavf_configure_queues(struct iavf_adapter *adapter); +int iavf_config_irq_map(struct iavf_adapter *adapter); +void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add); +int iavf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete); -int avf_query_stats(struct avf_adapter *adapter, +int iavf_query_stats(struct iavf_adapter *adapter, struct virtchnl_eth_stats **pstats); -int avf_config_promisc(struct avf_adapter *adapter, bool enable_unicast, +int iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast, bool enable_multicast); -int avf_add_del_eth_addr(struct avf_adapter *adapter, +int iavf_add_del_eth_addr(struct iavf_adapter *adapter, struct ether_addr *addr, bool add); -int avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add); -#endif /* _AVF_ETHDEV_H_ */ +int iavf_add_del_vlan(struct iavf_adapter *adapter, uint16_t vlanid, bool add); +#endif /* _IAVF_ETHDEV_H_ */ diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index efc20e168..846e604a6 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -33,103 +33,103 @@ #include "iavf.h" #include "iavf_rxtx.h" -static int avf_dev_configure(struct rte_eth_dev *dev); -static int avf_dev_start(struct rte_eth_dev *dev); -static void avf_dev_stop(struct rte_eth_dev *dev); -static void avf_dev_close(struct rte_eth_dev *dev); -static void avf_dev_info_get(struct rte_eth_dev *dev, +static int iavf_dev_configure(struct 
rte_eth_dev *dev); +static int iavf_dev_start(struct rte_eth_dev *dev); +static void iavf_dev_stop(struct rte_eth_dev *dev); +static void iavf_dev_close(struct rte_eth_dev *dev); +static void iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); -static const uint32_t *avf_dev_supported_ptypes_get(struct rte_eth_dev *dev); -static int avf_dev_stats_get(struct rte_eth_dev *dev, +static const uint32_t *iavf_dev_supported_ptypes_get(struct rte_eth_dev *dev); +static int iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); -static void avf_dev_promiscuous_enable(struct rte_eth_dev *dev); -static void avf_dev_promiscuous_disable(struct rte_eth_dev *dev); -static void avf_dev_allmulticast_enable(struct rte_eth_dev *dev); -static void avf_dev_allmulticast_disable(struct rte_eth_dev *dev); -static int avf_dev_add_mac_addr(struct rte_eth_dev *dev, +static void iavf_dev_promiscuous_enable(struct rte_eth_dev *dev); +static void iavf_dev_promiscuous_disable(struct rte_eth_dev *dev); +static void iavf_dev_allmulticast_enable(struct rte_eth_dev *dev); +static void iavf_dev_allmulticast_disable(struct rte_eth_dev *dev); +static int iavf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr, uint32_t index, uint32_t pool); -static void avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index); -static int avf_dev_vlan_filter_set(struct rte_eth_dev *dev, +static void iavf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index); +static int iavf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); -static int avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); -static int avf_dev_rss_reta_update(struct rte_eth_dev *dev, +static int iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +static int iavf_dev_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); -static int avf_dev_rss_reta_query(struct rte_eth_dev *dev, +static int 
iavf_dev_rss_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); -static int avf_dev_rss_hash_update(struct rte_eth_dev *dev, +static int iavf_dev_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); -static int avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, +static int iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); -static int avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu); -static int avf_dev_set_default_mac_addr(struct rte_eth_dev *dev, +static int iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu); +static int iavf_dev_set_default_mac_addr(struct rte_eth_dev *dev, struct ether_addr *mac_addr); -static int avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, +static int iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); -static int avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, +static int iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); -int avf_logtype_init; -int avf_logtype_driver; +int iavf_logtype_init; +int iavf_logtype_driver; -static const struct rte_pci_id pci_id_avf_map[] = { - { RTE_PCI_DEVICE(AVF_INTEL_VENDOR_ID, AVF_DEV_ID_ADAPTIVE_VF) }, +static const struct rte_pci_id pci_id_iavf_map[] = { + { RTE_PCI_DEVICE(IAVF_INTEL_VENDOR_ID, IAVF_DEV_ID_ADAPTIVE_VF) }, { .vendor_id = 0, /* sentinel */ }, }; -static const struct eth_dev_ops avf_eth_dev_ops = { - .dev_configure = avf_dev_configure, - .dev_start = avf_dev_start, - .dev_stop = avf_dev_stop, - .dev_close = avf_dev_close, - .dev_infos_get = avf_dev_info_get, - .dev_supported_ptypes_get = avf_dev_supported_ptypes_get, - .link_update = avf_dev_link_update, - .stats_get = avf_dev_stats_get, - .promiscuous_enable = avf_dev_promiscuous_enable, - .promiscuous_disable = avf_dev_promiscuous_disable, - .allmulticast_enable = avf_dev_allmulticast_enable, - .allmulticast_disable = avf_dev_allmulticast_disable, - 
.mac_addr_add = avf_dev_add_mac_addr, - .mac_addr_remove = avf_dev_del_mac_addr, - .vlan_filter_set = avf_dev_vlan_filter_set, - .vlan_offload_set = avf_dev_vlan_offload_set, - .rx_queue_start = avf_dev_rx_queue_start, - .rx_queue_stop = avf_dev_rx_queue_stop, - .tx_queue_start = avf_dev_tx_queue_start, - .tx_queue_stop = avf_dev_tx_queue_stop, - .rx_queue_setup = avf_dev_rx_queue_setup, - .rx_queue_release = avf_dev_rx_queue_release, - .tx_queue_setup = avf_dev_tx_queue_setup, - .tx_queue_release = avf_dev_tx_queue_release, - .mac_addr_set = avf_dev_set_default_mac_addr, - .reta_update = avf_dev_rss_reta_update, - .reta_query = avf_dev_rss_reta_query, - .rss_hash_update = avf_dev_rss_hash_update, - .rss_hash_conf_get = avf_dev_rss_hash_conf_get, - .rxq_info_get = avf_dev_rxq_info_get, - .txq_info_get = avf_dev_txq_info_get, - .rx_queue_count = avf_dev_rxq_count, - .rx_descriptor_status = avf_dev_rx_desc_status, - .tx_descriptor_status = avf_dev_tx_desc_status, - .mtu_set = avf_dev_mtu_set, - .rx_queue_intr_enable = avf_dev_rx_queue_intr_enable, - .rx_queue_intr_disable = avf_dev_rx_queue_intr_disable, +static const struct eth_dev_ops iavf_eth_dev_ops = { + .dev_configure = iavf_dev_configure, + .dev_start = iavf_dev_start, + .dev_stop = iavf_dev_stop, + .dev_close = iavf_dev_close, + .dev_infos_get = iavf_dev_info_get, + .dev_supported_ptypes_get = iavf_dev_supported_ptypes_get, + .link_update = iavf_dev_link_update, + .stats_get = iavf_dev_stats_get, + .promiscuous_enable = iavf_dev_promiscuous_enable, + .promiscuous_disable = iavf_dev_promiscuous_disable, + .allmulticast_enable = iavf_dev_allmulticast_enable, + .allmulticast_disable = iavf_dev_allmulticast_disable, + .mac_addr_add = iavf_dev_add_mac_addr, + .mac_addr_remove = iavf_dev_del_mac_addr, + .vlan_filter_set = iavf_dev_vlan_filter_set, + .vlan_offload_set = iavf_dev_vlan_offload_set, + .rx_queue_start = iavf_dev_rx_queue_start, + .rx_queue_stop = iavf_dev_rx_queue_stop, + .tx_queue_start = 
iavf_dev_tx_queue_start, + .tx_queue_stop = iavf_dev_tx_queue_stop, + .rx_queue_setup = iavf_dev_rx_queue_setup, + .rx_queue_release = iavf_dev_rx_queue_release, + .tx_queue_setup = iavf_dev_tx_queue_setup, + .tx_queue_release = iavf_dev_tx_queue_release, + .mac_addr_set = iavf_dev_set_default_mac_addr, + .reta_update = iavf_dev_rss_reta_update, + .reta_query = iavf_dev_rss_reta_query, + .rss_hash_update = iavf_dev_rss_hash_update, + .rss_hash_conf_get = iavf_dev_rss_hash_conf_get, + .rxq_info_get = iavf_dev_rxq_info_get, + .txq_info_get = iavf_dev_txq_info_get, + .rx_queue_count = iavf_dev_rxq_count, + .rx_descriptor_status = iavf_dev_rx_desc_status, + .tx_descriptor_status = iavf_dev_tx_desc_status, + .mtu_set = iavf_dev_mtu_set, + .rx_queue_intr_enable = iavf_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = iavf_dev_rx_queue_intr_disable, }; static int -avf_dev_configure(struct rte_eth_dev *dev) +iavf_dev_configure(struct rte_eth_dev *dev) { - struct avf_adapter *ad = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(ad); + struct iavf_adapter *ad = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); struct rte_eth_conf *dev_conf = &dev->data->dev_conf; ad->rx_bulk_alloc_allowed = true; -#ifdef RTE_LIBRTE_AVF_INC_VECTOR +#ifdef RTE_LIBRTE_IAVF_INC_VECTOR /* Initialize to TRUE. If any of Rx queues doesn't meet the * vector Rx/Tx preconditions, it will be reset. 
*/ @@ -143,24 +143,24 @@ avf_dev_configure(struct rte_eth_dev *dev) /* Vlan stripping setting */ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) { if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) - avf_enable_vlan_strip(ad); + iavf_enable_vlan_strip(ad); else - avf_disable_vlan_strip(ad); + iavf_disable_vlan_strip(ad); } return 0; } static int -avf_init_rss(struct avf_adapter *adapter) +iavf_init_rss(struct iavf_adapter *adapter) { - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); struct rte_eth_rss_conf *rss_conf; uint8_t i, j, nb_q; int ret; rss_conf = &adapter->eth_dev->data->dev_conf.rx_adv_conf.rss_conf; nb_q = RTE_MIN(adapter->eth_dev->data->nb_rx_queues, - AVF_MAX_NUM_QUEUES); + IAVF_MAX_NUM_QUEUES); if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) { PMD_DRV_LOG(DEBUG, "RSS is not supported"); @@ -171,11 +171,11 @@ avf_init_rss(struct avf_adapter *adapter) /* set all lut items to default queue */ for (i = 0; i < vf->vf_res->rss_lut_size; i++) vf->rss_lut[i] = 0; - ret = avf_configure_rss_lut(adapter); + ret = iavf_configure_rss_lut(adapter); return ret; } - /* In AVF, RSS enablement is set by PF driver. It is not supported + /* In IAVF, RSS enablement is set by PF driver. It is not supported * to set based on rss_conf->rss_hf. 
*/ @@ -196,10 +196,10 @@ avf_init_rss(struct avf_adapter *adapter) vf->rss_lut[i] = j; } /* send virtchnnl ops to configure rss*/ - ret = avf_configure_rss_lut(adapter); + ret = iavf_configure_rss_lut(adapter); if (ret) return ret; - ret = avf_configure_rss_key(adapter); + ret = iavf_configure_rss_key(adapter); if (ret) return ret; @@ -207,16 +207,16 @@ avf_init_rss(struct avf_adapter *adapter) } static int -avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq) +iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq) { - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_eth_dev_data *dev_data = dev->data; uint16_t buf_size, max_pkt_len, len; buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM; /* Calculate the maximum packet length allowed */ - len = rxq->rx_buf_len * AVF_MAX_CHAINED_RX_BUFFERS; + len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS; max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len); /* Check if the jumbo frame and maximum packet length are set @@ -224,12 +224,12 @@ avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq) */ if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { if (max_pkt_len <= ETHER_MAX_LEN || - max_pkt_len > AVF_FRAME_SIZE_MAX) { + max_pkt_len > IAVF_FRAME_SIZE_MAX) { PMD_DRV_LOG(ERR, "maximum packet length must be " "larger than %u and smaller than %u, " "as jumbo frame is enabled", (uint32_t)ETHER_MAX_LEN, - (uint32_t)AVF_FRAME_SIZE_MAX); + (uint32_t)IAVF_FRAME_SIZE_MAX); return -EINVAL; } } else { @@ -246,45 +246,45 @@ avf_init_rxq(struct rte_eth_dev *dev, struct avf_rx_queue *rxq) rxq->max_pkt_len = max_pkt_len; if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) || - (rxq->max_pkt_len + 2 * AVF_VLAN_TAG_SIZE) > buf_size) { + (rxq->max_pkt_len + 2 * IAVF_VLAN_TAG_SIZE) > buf_size) { dev_data->scattered_rx = 1; } - 
AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); - AVF_WRITE_FLUSH(hw); + IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); + IAVF_WRITE_FLUSH(hw); return 0; } static int -avf_init_queues(struct rte_eth_dev *dev) +iavf_init_queues(struct rte_eth_dev *dev) { - struct avf_rx_queue **rxq = - (struct avf_rx_queue **)dev->data->rx_queues; - int i, ret = AVF_SUCCESS; + struct iavf_rx_queue **rxq = + (struct iavf_rx_queue **)dev->data->rx_queues; + int i, ret = IAVF_SUCCESS; for (i = 0; i < dev->data->nb_rx_queues; i++) { if (!rxq[i] || !rxq[i]->q_set) continue; - ret = avf_init_rxq(dev, rxq[i]); - if (ret != AVF_SUCCESS) + ret = iavf_init_rxq(dev, rxq[i]); + if (ret != IAVF_SUCCESS) break; } /* set rx/tx function to vector/scatter/single-segment * according to parameters */ - avf_set_rx_function(dev); - avf_set_tx_function(dev); + iavf_set_rx_function(dev); + iavf_set_tx_function(dev); return ret; } -static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev, +static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev, struct rte_intr_handle *intr_handle) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter); uint16_t interval, i; int vec; @@ -312,37 +312,37 @@ static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev, if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) { /* If WB_ON_ITR supports, enable it */ - vf->msix_base = AVF_RX_VEC_START; - AVF_WRITE_REG(hw, AVFINT_DYN_CTLN1(vf->msix_base - 1), - AVFINT_DYN_CTLN1_ITR_INDX_MASK | - AVFINT_DYN_CTLN1_WB_ON_ITR_MASK); + vf->msix_base = IAVF_RX_VEC_START; + IAVF_WRITE_REG(hw, IAVFINT_DYN_CTLN1(vf->msix_base - 1), + IAVFINT_DYN_CTLN1_ITR_INDX_MASK | + 
IAVFINT_DYN_CTLN1_WB_ON_ITR_MASK); } else { /* If no WB_ON_ITR offload flags, need to set * interrupt for descriptor write back. */ - vf->msix_base = AVF_MISC_VEC_ID; + vf->msix_base = IAVF_MISC_VEC_ID; /* set ITR to max */ - interval = avf_calc_itr_interval( - AVF_QUEUE_ITR_INTERVAL_MAX); - AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, - AVFINT_DYN_CTL01_INTENA_MASK | - (AVF_ITR_INDEX_DEFAULT << - AVFINT_DYN_CTL01_ITR_INDX_SHIFT) | + interval = iavf_calc_itr_interval( + IAVF_QUEUE_ITR_INTERVAL_MAX); + IAVF_WRITE_REG(hw, IAVFINT_DYN_CTL01, + IAVFINT_DYN_CTL01_INTENA_MASK | + (IAVF_ITR_INDEX_DEFAULT << + IAVFINT_DYN_CTL01_ITR_INDX_SHIFT) | (interval << - AVFINT_DYN_CTL01_INTERVAL_SHIFT)); + IAVFINT_DYN_CTL01_INTERVAL_SHIFT)); } - AVF_WRITE_FLUSH(hw); + IAVF_WRITE_FLUSH(hw); /* map all queues to the same interrupt */ for (i = 0; i < dev->data->nb_rx_queues; i++) vf->rxq_map[vf->msix_base] |= 1 << i; } else { if (!rte_intr_allow_others(intr_handle)) { vf->nb_msix = 1; - vf->msix_base = AVF_MISC_VEC_ID; + vf->msix_base = IAVF_MISC_VEC_ID; for (i = 0; i < dev->data->nb_rx_queues; i++) { vf->rxq_map[vf->msix_base] |= 1 << i; - intr_handle->intr_vec[i] = AVF_MISC_VEC_ID; + intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID; } PMD_DRV_LOG(DEBUG, "vector %u are mapping to all Rx queues", @@ -353,13 +353,13 @@ static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev, */ vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors, intr_handle->nb_efd); - vf->msix_base = AVF_RX_VEC_START; - vec = AVF_RX_VEC_START; + vf->msix_base = IAVF_RX_VEC_START; + vec = IAVF_RX_VEC_START; for (i = 0; i < dev->data->nb_rx_queues; i++) { vf->rxq_map[vec] |= 1 << i; intr_handle->intr_vec[i] = vec++; if (vec >= vf->nb_msix) - vec = AVF_RX_VEC_START; + vec = IAVF_RX_VEC_START; } PMD_DRV_LOG(DEBUG, "%u vectors are mapping to %u Rx queues", @@ -367,7 +367,7 @@ static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev, } } - if (avf_config_irq_map(adapter)) { + if (iavf_config_irq_map(adapter)) { PMD_DRV_LOG(ERR, 
"config interrupt mapping failed"); return -1; } @@ -375,17 +375,17 @@ static int avf_config_rx_queues_irqs(struct rte_eth_dev *dev, } static int -avf_start_queues(struct rte_eth_dev *dev) +iavf_start_queues(struct rte_eth_dev *dev) { - struct avf_rx_queue *rxq; - struct avf_tx_queue *txq; + struct iavf_rx_queue *rxq; + struct iavf_tx_queue *txq; int i; for (i = 0; i < dev->data->nb_tx_queues; i++) { txq = dev->data->tx_queues[i]; if (txq->tx_deferred_start) continue; - if (avf_dev_tx_queue_start(dev, i) != 0) { + if (iavf_dev_tx_queue_start(dev, i) != 0) { PMD_DRV_LOG(ERR, "Fail to start queue %u", i); return -1; } @@ -395,7 +395,7 @@ avf_start_queues(struct rte_eth_dev *dev) rxq = dev->data->rx_queues[i]; if (rxq->rx_deferred_start) continue; - if (avf_dev_rx_queue_start(dev, i) != 0) { + if (iavf_dev_rx_queue_start(dev, i) != 0) { PMD_DRV_LOG(ERR, "Fail to start queue %u", i); return -1; } @@ -405,12 +405,12 @@ avf_start_queues(struct rte_eth_dev *dev) } static int -avf_dev_start(struct rte_eth_dev *dev) +iavf_dev_start(struct rte_eth_dev *dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_intr_handle *intr_handle = dev->intr_handle; PMD_INIT_FUNC_TRACE(); @@ -421,24 +421,24 @@ avf_dev_start(struct rte_eth_dev *dev) vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues); - if (avf_init_queues(dev) != 0) { + if (iavf_init_queues(dev) != 0) { PMD_DRV_LOG(ERR, "failed to do Queue init"); return -1; } if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) { - if (avf_init_rss(adapter) != 0) { + if (iavf_init_rss(adapter) 
!= 0) { PMD_DRV_LOG(ERR, "configure rss failed"); goto err_rss; } } - if (avf_configure_queues(adapter) != 0) { + if (iavf_configure_queues(adapter) != 0) { PMD_DRV_LOG(ERR, "configure queues failed"); goto err_queue; } - if (avf_config_rx_queues_irqs(dev, intr_handle) != 0) { + if (iavf_config_rx_queues_irqs(dev, intr_handle) != 0) { PMD_DRV_LOG(ERR, "configure irq failed"); goto err_queue; } @@ -449,9 +449,9 @@ avf_dev_start(struct rte_eth_dev *dev) } /* Set all mac addrs */ - avf_add_del_all_mac_addr(adapter, TRUE); + iavf_add_del_all_mac_addr(adapter, TRUE); - if (avf_start_queues(dev) != 0) { + if (iavf_start_queues(dev) != 0) { PMD_DRV_LOG(ERR, "enable queues failed"); goto err_mac; } @@ -459,18 +459,18 @@ avf_dev_start(struct rte_eth_dev *dev) return 0; err_mac: - avf_add_del_all_mac_addr(adapter, FALSE); + iavf_add_del_all_mac_addr(adapter, FALSE); err_queue: err_rss: return -1; } static void -avf_dev_stop(struct rte_eth_dev *dev) +iavf_dev_stop(struct rte_eth_dev *dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_intr_handle *intr_handle = dev->intr_handle; PMD_INIT_FUNC_TRACE(); @@ -478,7 +478,7 @@ avf_dev_stop(struct rte_eth_dev *dev) if (hw->adapter_stopped == 1) return; - avf_stop_queues(dev); + iavf_stop_queues(dev); /* Disable the interrupt for Rx */ rte_intr_efd_disable(intr_handle); @@ -489,24 +489,24 @@ avf_dev_stop(struct rte_eth_dev *dev) } /* remove all mac addrs */ - avf_add_del_all_mac_addr(adapter, FALSE); + iavf_add_del_all_mac_addr(adapter, FALSE); hw->adapter_stopped = 1; } static void -avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) +iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { - struct avf_info 
*vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); memset(dev_info, 0, sizeof(*dev_info)); dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs; dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs; - dev_info->min_rx_bufsize = AVF_BUF_SIZE_MIN; - dev_info->max_rx_pktlen = AVF_FRAME_SIZE_MAX; + dev_info->min_rx_bufsize = IAVF_BUF_SIZE_MIN; + dev_info->max_rx_pktlen = IAVF_FRAME_SIZE_MAX; dev_info->hash_key_size = vf->vf_res->rss_key_size; dev_info->reta_size = vf->vf_res->rss_lut_size; - dev_info->flow_type_rss_offloads = AVF_RSS_OFFLOAD_ALL; - dev_info->max_mac_addrs = AVF_NUM_MACADDR_MAX; + dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL; + dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX; dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_QINQ_STRIP | @@ -533,32 +533,32 @@ avf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) DEV_TX_OFFLOAD_MULTI_SEGS; dev_info->default_rxconf = (struct rte_eth_rxconf) { - .rx_free_thresh = AVF_DEFAULT_RX_FREE_THRESH, + .rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH, .rx_drop_en = 0, .offloads = 0, }; dev_info->default_txconf = (struct rte_eth_txconf) { - .tx_free_thresh = AVF_DEFAULT_TX_FREE_THRESH, - .tx_rs_thresh = AVF_DEFAULT_TX_RS_THRESH, + .tx_free_thresh = IAVF_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = IAVF_DEFAULT_TX_RS_THRESH, .offloads = 0, }; dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { - .nb_max = AVF_MAX_RING_DESC, - .nb_min = AVF_MIN_RING_DESC, - .nb_align = AVF_ALIGN_RING_DESC, + .nb_max = IAVF_MAX_RING_DESC, + .nb_min = IAVF_MIN_RING_DESC, + .nb_align = IAVF_ALIGN_RING_DESC, }; dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { - .nb_max = AVF_MAX_RING_DESC, - .nb_min = AVF_MIN_RING_DESC, - .nb_align = AVF_ALIGN_RING_DESC, + .nb_max = IAVF_MAX_RING_DESC, + .nb_min = IAVF_MIN_RING_DESC, + .nb_align = IAVF_ALIGN_RING_DESC, }; } static const uint32_t * 
-avf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+iavf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
 	static const uint32_t ptypes[] = {
 		RTE_PTYPE_L2_ETHER,
@@ -575,11 +575,11 @@ avf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 }
 
 int
-avf_dev_link_update(struct rte_eth_dev *dev,
+iavf_dev_link_update(struct rte_eth_dev *dev,
 		    __rte_unused int wait_to_complete)
 {
 	struct rte_eth_link new_link;
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
 
 	/* Only read status info stored in VF, and the info is updated
 	 * when a LINK_CHANGE event is received from the PF via virtchnl.
@@ -623,77 +623,77 @@ avf_dev_link_update(struct rte_eth_dev *dev,
 }
 
 static void
-avf_dev_promiscuous_enable(struct rte_eth_dev *dev)
+iavf_dev_promiscuous_enable(struct rte_eth_dev *dev)
 {
-	struct avf_adapter *adapter =
-		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_adapter *adapter =
+		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
 	int ret;
 
 	if (vf->promisc_unicast_enabled)
 		return;
 
-	ret = avf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
+	ret = iavf_config_promisc(adapter, TRUE, vf->promisc_multicast_enabled);
 	if (!ret)
 		vf->promisc_unicast_enabled = TRUE;
 }
 
 static void
-avf_dev_promiscuous_disable(struct rte_eth_dev *dev)
+iavf_dev_promiscuous_disable(struct rte_eth_dev *dev)
 {
-	struct avf_adapter *adapter =
-		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_adapter *adapter =
+		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
 	int ret;
 
 	if (!vf->promisc_unicast_enabled)
 		return;
 
-	ret = avf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled);
+	ret =
iavf_config_promisc(adapter, FALSE, vf->promisc_multicast_enabled); if (!ret) vf->promisc_unicast_enabled = FALSE; } static void -avf_dev_allmulticast_enable(struct rte_eth_dev *dev) +iavf_dev_allmulticast_enable(struct rte_eth_dev *dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); int ret; if (vf->promisc_multicast_enabled) return; - ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE); + ret = iavf_config_promisc(adapter, vf->promisc_unicast_enabled, TRUE); if (!ret) vf->promisc_multicast_enabled = TRUE; } static void -avf_dev_allmulticast_disable(struct rte_eth_dev *dev) +iavf_dev_allmulticast_disable(struct rte_eth_dev *dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); int ret; if (!vf->promisc_multicast_enabled) return; - ret = avf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE); + ret = iavf_config_promisc(adapter, vf->promisc_unicast_enabled, FALSE); if (!ret) vf->promisc_multicast_enabled = FALSE; } static int -avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr, +iavf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr, __rte_unused uint32_t index, __rte_unused uint32_t pool) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); int err; if (is_zero_ether_addr(addr)) { @@ -701,7 +701,7 
@@ avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr, return -EINVAL; } - err = avf_add_del_eth_addr(adapter, addr, TRUE); + err = iavf_add_del_eth_addr(adapter, addr, TRUE); if (err) { PMD_DRV_LOG(ERR, "fail to add MAC address"); return -EIO; @@ -713,17 +713,17 @@ avf_dev_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr, } static void -avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index) +iavf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); struct ether_addr *addr; int err; addr = &dev->data->mac_addrs[index]; - err = avf_add_del_eth_addr(adapter, addr, FALSE); + err = iavf_add_del_eth_addr(adapter, addr, FALSE); if (err) PMD_DRV_LOG(ERR, "fail to delete MAC address"); @@ -731,28 +731,28 @@ avf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index) } static int -avf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +iavf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); int err; if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) return -ENOTSUP; - err = avf_add_del_vlan(adapter, vlan_id, on); + err = iavf_add_del_vlan(adapter, vlan_id, on); if (err) return -EIO; return 0; } static int -avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); 
- struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); struct rte_eth_conf *dev_conf = &dev->data->dev_conf; int err; @@ -763,9 +763,9 @@ avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) if (mask & ETH_VLAN_STRIP_MASK) { /* Enable or disable VLAN stripping */ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) - err = avf_enable_vlan_strip(adapter); + err = iavf_enable_vlan_strip(adapter); else - err = avf_disable_vlan_strip(adapter); + err = iavf_disable_vlan_strip(adapter); if (err) return -EIO; @@ -774,13 +774,13 @@ avf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) } static int -avf_dev_rss_reta_update(struct rte_eth_dev *dev, +iavf_dev_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); uint8_t *lut; uint16_t i, idx, shift; int ret; @@ -812,7 +812,7 @@ avf_dev_rss_reta_update(struct rte_eth_dev *dev, rte_memcpy(vf->rss_lut, lut, reta_size); /* send virtchnnl ops to configure rss*/ - ret = avf_configure_rss_lut(adapter); + ret = iavf_configure_rss_lut(adapter); if (ret) /* revert back */ rte_memcpy(vf->rss_lut, lut, reta_size); rte_free(lut); @@ -821,13 +821,13 @@ avf_dev_rss_reta_update(struct rte_eth_dev *dev, } static int -avf_dev_rss_reta_query(struct rte_eth_dev *dev, +iavf_dev_rss_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter 
*adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); uint16_t i, idx, shift; if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) @@ -851,12 +851,12 @@ avf_dev_rss_reta_query(struct rte_eth_dev *dev, } static int -avf_dev_rss_hash_update(struct rte_eth_dev *dev, +iavf_dev_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) return -ENOTSUP; @@ -875,22 +875,22 @@ avf_dev_rss_hash_update(struct rte_eth_dev *dev, rte_memcpy(vf->rss_key, rss_conf->rss_key, rss_conf->rss_key_len); - return avf_configure_rss_key(adapter); + return iavf_configure_rss_key(adapter); } static int -avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, +iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) return -ENOTSUP; /* Just set it to default value now. 
	 */
-	rss_conf->rss_hf = AVF_RSS_OFFLOAD_ALL;
+	rss_conf->rss_hf = IAVF_RSS_OFFLOAD_ALL;
 
 	if (!rss_conf->rss_key)
 		return 0;
 
@@ -902,12 +902,12 @@ avf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 }
 
 static int
-avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	uint32_t frame_size = mtu + AVF_ETH_OVERHEAD;
+	uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
 	int ret = 0;
 
-	if (mtu < ETHER_MIN_MTU || frame_size > AVF_FRAME_SIZE_MAX)
+	if (mtu < ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
 		return -EINVAL;
 
 	/* MTU setting is forbidden if the port is started */
@@ -929,12 +929,12 @@ avf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 }
 
 static int
-avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
+iavf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			     struct ether_addr *mac_addr)
 {
-	struct avf_adapter *adapter =
-		AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
-	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
+	struct iavf_adapter *adapter =
+		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
 	struct ether_addr *perm_addr, *old_addr;
 	int ret;
 
@@ -948,7 +948,7 @@ avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 	if (is_valid_assigned_ether_addr(perm_addr))
 		return -EPERM;
 
-	ret = avf_add_del_eth_addr(adapter, old_addr, FALSE);
+	ret = iavf_add_del_eth_addr(adapter, old_addr, FALSE);
 	if (ret)
 		PMD_DRV_LOG(ERR, "Fail to delete old MAC:"
 			    " %02X:%02X:%02X:%02X:%02X:%02X",
@@ -959,7 +959,7 @@ avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 			    old_addr->addr_bytes[4],
 			    old_addr->addr_bytes[5]);
 
-	ret = avf_add_del_eth_addr(adapter, mac_addr, TRUE);
+	ret = iavf_add_del_eth_addr(adapter, mac_addr, TRUE);
 	if (ret)
 		PMD_DRV_LOG(ERR, "Fail to add new MAC:"
 			    " %02X:%02X:%02X:%02X:%02X:%02X",
@@ -978,14 +978,14 @@ avf_dev_set_default_mac_addr(struct rte_eth_dev *dev,
 }
 
 static int
-avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
*stats) +iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct virtchnl_eth_stats *pstats = NULL; int ret; - ret = avf_query_stats(adapter, &pstats); + ret = iavf_query_stats(adapter, &pstats); if (ret == 0) { stats->ipackets = pstats->rx_unicast + pstats->rx_multicast + pstats->rx_broadcast; @@ -1002,28 +1002,28 @@ avf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) } static int -avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter); uint16_t msix_intr; msix_intr = pci_dev->intr_handle.intr_vec[queue_id]; - if (msix_intr == AVF_MISC_VEC_ID) { + if (msix_intr == IAVF_MISC_VEC_ID) { PMD_DRV_LOG(INFO, "MISC is also enabled for control"); - AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, - AVFINT_DYN_CTL01_INTENA_MASK | - AVFINT_DYN_CTL01_ITR_INDX_MASK); + IAVF_WRITE_REG(hw, IAVFINT_DYN_CTL01, + IAVFINT_DYN_CTL01_INTENA_MASK | + IAVFINT_DYN_CTL01_ITR_INDX_MASK); } else { - AVF_WRITE_REG(hw, - AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START), - AVFINT_DYN_CTLN1_INTENA_MASK | - AVFINT_DYN_CTLN1_ITR_INDX_MASK); + IAVF_WRITE_REG(hw, + IAVFINT_DYN_CTLN1(msix_intr - IAVF_RX_VEC_START), + IAVFINT_DYN_CTLN1_INTENA_MASK | + IAVFINT_DYN_CTLN1_ITR_INDX_MASK); } - AVF_WRITE_FLUSH(hw); + IAVF_WRITE_FLUSH(hw); rte_intr_enable(&pci_dev->intr_handle); @@ -1031,94 +1031,94 @@ avf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) } 
static int -avf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); uint16_t msix_intr; msix_intr = pci_dev->intr_handle.intr_vec[queue_id]; - if (msix_intr == AVF_MISC_VEC_ID) { + if (msix_intr == IAVF_MISC_VEC_ID) { PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it"); return -EIO; } - AVF_WRITE_REG(hw, - AVFINT_DYN_CTLN1(msix_intr - AVF_RX_VEC_START), + IAVF_WRITE_REG(hw, + IAVFINT_DYN_CTLN1(msix_intr - IAVF_RX_VEC_START), 0); - AVF_WRITE_FLUSH(hw); + IAVF_WRITE_FLUSH(hw); return 0; } static int -avf_check_vf_reset_done(struct avf_hw *hw) +iavf_check_vf_reset_done(struct iavf_hw *hw) { int i, reset; - for (i = 0; i < AVF_RESET_WAIT_CNT; i++) { - reset = AVF_READ_REG(hw, AVFGEN_RSTAT) & - AVFGEN_RSTAT_VFR_STATE_MASK; - reset = reset >> AVFGEN_RSTAT_VFR_STATE_SHIFT; + for (i = 0; i < IAVF_RESET_WAIT_CNT; i++) { + reset = IAVF_READ_REG(hw, IAVFGEN_RSTAT) & + IAVFGEN_RSTAT_VFR_STATE_MASK; + reset = reset >> IAVFGEN_RSTAT_VFR_STATE_SHIFT; if (reset == VIRTCHNL_VFR_VFACTIVE || reset == VIRTCHNL_VFR_COMPLETED) break; rte_delay_ms(20); } - if (i >= AVF_RESET_WAIT_CNT) + if (i >= IAVF_RESET_WAIT_CNT) return -1; return 0; } static int -avf_init_vf(struct rte_eth_dev *dev) +iavf_init_vf(struct rte_eth_dev *dev) { int err, bufsz; - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_info *vf = 
IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
 
-	err = avf_set_mac_type(hw);
+	err = iavf_set_mac_type(hw);
 	if (err) {
 		PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
 		goto err;
 	}
 
-	err = avf_check_vf_reset_done(hw);
+	err = iavf_check_vf_reset_done(hw);
 	if (err) {
 		PMD_INIT_LOG(ERR, "VF is still resetting");
 		goto err;
 	}
 
-	avf_init_adminq_parameter(hw);
-	err = avf_init_adminq(hw);
+	iavf_init_adminq_parameter(hw);
+	err = iavf_init_adminq(hw);
 	if (err) {
 		PMD_INIT_LOG(ERR, "init_adminq failed: %d", err);
 		goto err;
 	}
 
-	vf->aq_resp = rte_zmalloc("vf_aq_resp", AVF_AQ_BUF_SZ, 0);
+	vf->aq_resp = rte_zmalloc("vf_aq_resp", IAVF_AQ_BUF_SZ, 0);
 	if (!vf->aq_resp) {
 		PMD_INIT_LOG(ERR, "unable to allocate vf_aq_resp memory");
 		goto err_aq;
 	}
 
-	if (avf_check_api_version(adapter) != 0) {
+	if (iavf_check_api_version(adapter) != 0) {
 		PMD_INIT_LOG(ERR, "check_api version failed");
 		goto err_api;
 	}
 
 	bufsz = sizeof(struct virtchnl_vf_resource) +
-		(AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
+		(IAVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource));
 	vf->vf_res = rte_zmalloc("vf_res", bufsz, 0);
 	if (!vf->vf_res) {
 		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
 		goto err_api;
 	}
 
-	if (avf_get_vf_resource(adapter) != 0) {
-		PMD_INIT_LOG(ERR, "avf_get_vf_config failed");
+	if (iavf_get_vf_resource(adapter) != 0) {
+		PMD_INIT_LOG(ERR, "iavf_get_vf_config failed");
 		goto err_alloc;
 	}
 
 	/* Allocate memory for RSS info */
@@ -1146,70 +1146,70 @@ avf_init_vf(struct rte_eth_dev *dev)
 err_api:
 	rte_free(vf->aq_resp);
 err_aq:
-	avf_shutdown_adminq(hw);
+	iavf_shutdown_adminq(hw);
 err:
 	return -1;
 }
 
 /* Enable default admin queue interrupt setting */
 static inline void
-avf_enable_irq0(struct avf_hw *hw)
+iavf_enable_irq0(struct iavf_hw *hw)
 {
 	/* Enable admin queue interrupt trigger */
-	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
+	IAVF_WRITE_REG(hw, IAVFINT_ICR0_ENA1, IAVFINT_ICR0_ENA1_ADMINQ_MASK);
 
-	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01,
AVFINT_DYN_CTL01_INTENA_MASK | - AVFINT_DYN_CTL01_CLEARPBA_MASK | AVFINT_DYN_CTL01_ITR_INDX_MASK); + IAVF_WRITE_REG(hw, IAVFINT_DYN_CTL01, IAVFINT_DYN_CTL01_INTENA_MASK | + IAVFINT_DYN_CTL01_CLEARPBA_MASK | IAVFINT_DYN_CTL01_ITR_INDX_MASK); - AVF_WRITE_FLUSH(hw); + IAVF_WRITE_FLUSH(hw); } static inline void -avf_disable_irq0(struct avf_hw *hw) +iavf_disable_irq0(struct iavf_hw *hw) { /* Disable all interrupt types */ - AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, 0); - AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, - AVFINT_DYN_CTL01_ITR_INDX_MASK); - AVF_WRITE_FLUSH(hw); + IAVF_WRITE_REG(hw, IAVFINT_ICR0_ENA1, 0); + IAVF_WRITE_REG(hw, IAVFINT_DYN_CTL01, + IAVFINT_DYN_CTL01_ITR_INDX_MASK); + IAVF_WRITE_FLUSH(hw); } static void -avf_dev_interrupt_handler(void *param) +iavf_dev_interrupt_handler(void *param) { struct rte_eth_dev *dev = (struct rte_eth_dev *)param; - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); - avf_disable_irq0(hw); + iavf_disable_irq0(hw); - avf_handle_virtchnl_msg(dev); + iavf_handle_virtchnl_msg(dev); - avf_enable_irq0(hw); + iavf_enable_irq0(hw); } static int -avf_dev_init(struct rte_eth_dev *eth_dev) +iavf_dev_init(struct rte_eth_dev *eth_dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter); + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); PMD_INIT_FUNC_TRACE(); /* assign ops func pointer */ - eth_dev->dev_ops = &avf_eth_dev_ops; - eth_dev->rx_pkt_burst = &avf_recv_pkts; - eth_dev->tx_pkt_burst = &avf_xmit_pkts; - eth_dev->tx_pkt_prepare = &avf_prep_pkts; + eth_dev->dev_ops = &iavf_eth_dev_ops; + eth_dev->rx_pkt_burst = &iavf_recv_pkts; + eth_dev->tx_pkt_burst = &iavf_xmit_pkts; + eth_dev->tx_pkt_prepare = 
&iavf_prep_pkts; /* For secondary processes, we don't initialise any further as primary * has already done this work. Only check if we need a different RX * and TX function. */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - avf_set_rx_function(eth_dev); - avf_set_tx_function(eth_dev); + iavf_set_rx_function(eth_dev); + iavf_set_tx_function(eth_dev); return 0; } rte_eth_copy_pci_info(eth_dev, pci_dev); @@ -1222,23 +1222,23 @@ avf_dev_init(struct rte_eth_dev *eth_dev) hw->bus.device = pci_dev->addr.devid; hw->bus.func = pci_dev->addr.function; hw->hw_addr = (void *)pci_dev->mem_resource[0].addr; - hw->back = AVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private); + hw->back = IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private); adapter->eth_dev = eth_dev; - if (avf_init_vf(eth_dev) != 0) { + if (iavf_init_vf(eth_dev) != 0) { PMD_INIT_LOG(ERR, "Init vf failed"); return -1; } /* copy mac addr */ eth_dev->data->mac_addrs = rte_zmalloc( - "avf_mac", - ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX, + "iavf_mac", + ETHER_ADDR_LEN * IAVF_NUM_MACADDR_MAX, 0); if (!eth_dev->data->mac_addrs) { PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to" " store MAC addresses", - ETHER_ADDR_LEN * AVF_NUM_MACADDR_MAX); + ETHER_ADDR_LEN * IAVF_NUM_MACADDR_MAX); return -ENOMEM; } /* If the MAC address is not configured by host, @@ -1251,41 +1251,41 @@ avf_dev_init(struct rte_eth_dev *eth_dev) /* register callback func to eal lib */ rte_intr_callback_register(&pci_dev->intr_handle, - avf_dev_interrupt_handler, + iavf_dev_interrupt_handler, (void *)eth_dev); /* enable uio intr after callback register */ rte_intr_enable(&pci_dev->intr_handle); /* configure and enable device interrupt */ - avf_enable_irq0(hw); + iavf_enable_irq0(hw); return 0; } static void -avf_dev_close(struct rte_eth_dev *dev) +iavf_dev_close(struct rte_eth_dev *dev) { - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct 
rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; - avf_dev_stop(dev); - avf_shutdown_adminq(hw); + iavf_dev_stop(dev); + iavf_shutdown_adminq(hw); /* disable uio intr before callback unregister */ rte_intr_disable(intr_handle); /* unregister callback func from eal lib */ rte_intr_callback_unregister(intr_handle, - avf_dev_interrupt_handler, dev); - avf_disable_irq0(hw); + iavf_dev_interrupt_handler, dev); + iavf_disable_irq0(hw); } static int -avf_dev_uninit(struct rte_eth_dev *dev) +iavf_dev_uninit(struct rte_eth_dev *dev) { - struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return -EPERM; @@ -1294,7 +1294,7 @@ avf_dev_uninit(struct rte_eth_dev *dev) dev->rx_pkt_burst = NULL; dev->tx_pkt_burst = NULL; if (hw->adapter_stopped == 0) - avf_dev_close(dev); + iavf_dev_close(dev); rte_free(vf->vf_res); vf->vsi_res = NULL; @@ -1315,44 +1315,44 @@ avf_dev_uninit(struct rte_eth_dev *dev) return 0; } -static int eth_avf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, +static int eth_iavf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_probe(pci_dev, - sizeof(struct avf_adapter), avf_dev_init); + sizeof(struct iavf_adapter), iavf_dev_init); } -static int eth_avf_pci_remove(struct rte_pci_device *pci_dev) +static int eth_iavf_pci_remove(struct rte_pci_device *pci_dev) { - return rte_eth_dev_pci_generic_remove(pci_dev, avf_dev_uninit); + return rte_eth_dev_pci_generic_remove(pci_dev, iavf_dev_uninit); } /* Adaptive virtual function driver struct */ -static struct rte_pci_driver rte_avf_pmd = { - .id_table = pci_id_avf_map, +static struct rte_pci_driver 
rte_iavf_pmd = { + .id_table = pci_id_iavf_map, .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC | RTE_PCI_DRV_IOVA_AS_VA, - .probe = eth_avf_pci_probe, - .remove = eth_avf_pci_remove, + .probe = eth_iavf_pci_probe, + .remove = eth_iavf_pci_remove, }; -RTE_PMD_REGISTER_PCI(net_avf, rte_avf_pmd); -RTE_PMD_REGISTER_PCI_TABLE(net_avf, pci_id_avf_map); -RTE_PMD_REGISTER_KMOD_DEP(net_avf, "* igb_uio | vfio-pci"); -RTE_INIT(avf_init_log) +RTE_PMD_REGISTER_PCI(net_iavf, rte_iavf_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_iavf, pci_id_iavf_map); +RTE_PMD_REGISTER_KMOD_DEP(net_iavf, "* igb_uio | vfio-pci"); +RTE_INIT(iavf_init_log) { - avf_logtype_init = rte_log_register("pmd.net.avf.init"); - if (avf_logtype_init >= 0) - rte_log_set_level(avf_logtype_init, RTE_LOG_NOTICE); - avf_logtype_driver = rte_log_register("pmd.net.avf.driver"); - if (avf_logtype_driver >= 0) - rte_log_set_level(avf_logtype_driver, RTE_LOG_NOTICE); + iavf_logtype_init = rte_log_register("pmd.net.iavf.init"); + if (iavf_logtype_init >= 0) + rte_log_set_level(iavf_logtype_init, RTE_LOG_NOTICE); + iavf_logtype_driver = rte_log_register("pmd.net.iavf.driver"); + if (iavf_logtype_driver >= 0) + rte_log_set_level(iavf_logtype_driver, RTE_LOG_NOTICE); } /* memory func for base code */ -enum avf_status_code -avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw, - struct avf_dma_mem *mem, +enum iavf_status_code +iavf_allocate_dma_mem_d(__rte_unused struct iavf_hw *hw, + struct iavf_dma_mem *mem, u64 size, u32 alignment) { @@ -1360,13 +1360,13 @@ avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw, char z_name[RTE_MEMZONE_NAMESIZE]; if (!mem) - return AVF_ERR_PARAM; + return IAVF_ERR_PARAM; - snprintf(z_name, sizeof(z_name), "avf_dma_%"PRIu64, rte_rand()); + snprintf(z_name, sizeof(z_name), "iavf_dma_%"PRIu64, rte_rand()); mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, RTE_MEMZONE_IOVA_CONTIG, alignment, RTE_PGSIZE_2M); if (!mz) - return AVF_ERR_NO_MEMORY; + return IAVF_ERR_NO_MEMORY; 
mem->size = size; mem->va = mz->addr; @@ -1376,15 +1376,15 @@ avf_allocate_dma_mem_d(__rte_unused struct avf_hw *hw, "memzone %s allocated with physical address: %"PRIu64, mz->name, mem->pa); - return AVF_SUCCESS; + return IAVF_SUCCESS; } -enum avf_status_code -avf_free_dma_mem_d(__rte_unused struct avf_hw *hw, - struct avf_dma_mem *mem) +enum iavf_status_code +iavf_free_dma_mem_d(__rte_unused struct iavf_hw *hw, + struct iavf_dma_mem *mem) { if (!mem) - return AVF_ERR_PARAM; + return IAVF_ERR_PARAM; PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: %"PRIu64, @@ -1394,35 +1394,35 @@ avf_free_dma_mem_d(__rte_unused struct avf_hw *hw, mem->va = NULL; mem->pa = (u64)0; - return AVF_SUCCESS; + return IAVF_SUCCESS; } -enum avf_status_code -avf_allocate_virt_mem_d(__rte_unused struct avf_hw *hw, - struct avf_virt_mem *mem, +enum iavf_status_code +iavf_allocate_virt_mem_d(__rte_unused struct iavf_hw *hw, + struct iavf_virt_mem *mem, u32 size) { if (!mem) - return AVF_ERR_PARAM; + return IAVF_ERR_PARAM; mem->size = size; - mem->va = rte_zmalloc("avf", size, 0); + mem->va = rte_zmalloc("iavf", size, 0); if (mem->va) - return AVF_SUCCESS; + return IAVF_SUCCESS; else - return AVF_ERR_NO_MEMORY; + return IAVF_ERR_NO_MEMORY; } -enum avf_status_code -avf_free_virt_mem_d(__rte_unused struct avf_hw *hw, - struct avf_virt_mem *mem) +enum iavf_status_code +iavf_free_virt_mem_d(__rte_unused struct iavf_hw *hw, + struct iavf_virt_mem *mem) { if (!mem) - return AVF_ERR_PARAM; + return IAVF_ERR_PARAM; rte_free(mem->va); mem->va = NULL; - return AVF_SUCCESS; + return IAVF_SUCCESS; } diff --git a/drivers/net/iavf/iavf_log.h b/drivers/net/iavf/iavf_log.h index 8d574d3f3..f66c37041 100644 --- a/drivers/net/iavf/iavf_log.h +++ b/drivers/net/iavf/iavf_log.h @@ -2,43 +2,43 @@ * Copyright(c) 2017 Intel Corporation */ -#ifndef _AVF_LOG_H_ -#define _AVF_LOG_H_ +#ifndef _IAVF_LOG_H_ +#define _IAVF_LOG_H_ -extern int avf_logtype_init; +extern int iavf_logtype_init; #define 
PMD_INIT_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, avf_logtype_init, "%s(): " fmt "\n", \ + rte_log(RTE_LOG_ ## level, iavf_logtype_init, "%s(): " fmt "\n", \ __func__, ## args) #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") -extern int avf_logtype_driver; +extern int iavf_logtype_driver; #define PMD_DRV_LOG_RAW(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, avf_logtype_driver, "%s(): " fmt, \ + rte_log(RTE_LOG_ ## level, iavf_logtype_driver, "%s(): " fmt, \ __func__, ## args) #define PMD_DRV_LOG(level, fmt, args...) \ PMD_DRV_LOG_RAW(level, fmt "\n", ## args) #define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>") -#ifdef RTE_LIBRTE_AVF_DEBUG_RX +#ifdef RTE_LIBRTE_IAVF_DEBUG_RX #define PMD_RX_LOG(level, fmt, args...) \ RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args) #else #define PMD_RX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_AVF_DEBUG_TX +#ifdef RTE_LIBRTE_IAVF_DEBUG_TX #define PMD_TX_LOG(level, fmt, args...) \ RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args) #else #define PMD_TX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_AVF_DEBUG_TX_FREE +#ifdef RTE_LIBRTE_IAVF_DEBUG_TX_FREE #define PMD_TX_FREE_LOG(level, fmt, args...) \ RTE_LOG_DP(level, PMD, "%s(): " fmt "\n", __func__, ## args) #else #define PMD_TX_FREE_LOG(level, fmt, args...) 
do { } while (0) #endif -#endif /* _AVF_LOG_H_ */ +#endif /* _IAVF_LOG_H_ */ diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index 988a68ff4..db7070fb5 100644 --- a/drivers/net/iavf/iavf_rxtx.c +++ b/drivers/net/iavf/iavf_rxtx.c @@ -92,11 +92,11 @@ check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh, return 0; } -#ifdef RTE_LIBRTE_AVF_INC_VECTOR +#ifdef RTE_LIBRTE_IAVF_INC_VECTOR static inline bool -check_rx_vec_allow(struct avf_rx_queue *rxq) +check_rx_vec_allow(struct iavf_rx_queue *rxq) { - if (rxq->rx_free_thresh >= AVF_VPMD_RX_MAX_BURST && + if (rxq->rx_free_thresh >= IAVF_VPMD_RX_MAX_BURST && rxq->nb_rx_desc % rxq->rx_free_thresh == 0) { PMD_INIT_LOG(DEBUG, "Vector Rx can be enabled on this rxq."); return TRUE; @@ -107,11 +107,11 @@ check_rx_vec_allow(struct avf_rx_queue *rxq) } static inline bool -check_tx_vec_allow(struct avf_tx_queue *txq) +check_tx_vec_allow(struct iavf_tx_queue *txq) { - if (!(txq->offloads & AVF_NO_VECTOR_FLAGS) && - txq->rs_thresh >= AVF_VPMD_TX_MAX_BURST && - txq->rs_thresh <= AVF_VPMD_TX_MAX_FREE_BUF) { + if (!(txq->offloads & IAVF_NO_VECTOR_FLAGS) && + txq->rs_thresh >= IAVF_VPMD_TX_MAX_BURST && + txq->rs_thresh <= IAVF_VPMD_TX_MAX_FREE_BUF) { PMD_INIT_LOG(DEBUG, "Vector tx can be enabled on this txq."); return TRUE; } @@ -121,15 +121,15 @@ check_tx_vec_allow(struct avf_tx_queue *txq) #endif static inline bool -check_rx_bulk_allow(struct avf_rx_queue *rxq) +check_rx_bulk_allow(struct iavf_rx_queue *rxq) { int ret = TRUE; - if (!(rxq->rx_free_thresh >= AVF_RX_MAX_BURST)) { + if (!(rxq->rx_free_thresh >= IAVF_RX_MAX_BURST)) { PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " "rxq->rx_free_thresh=%d, " - "AVF_RX_MAX_BURST=%d", - rxq->rx_free_thresh, AVF_RX_MAX_BURST); + "IAVF_RX_MAX_BURST=%d", + rxq->rx_free_thresh, IAVF_RX_MAX_BURST); ret = FALSE; } else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) { PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " @@ -142,21 +142,21 @@ 
check_rx_bulk_allow(struct avf_rx_queue *rxq) } static inline void -reset_rx_queue(struct avf_rx_queue *rxq) +reset_rx_queue(struct iavf_rx_queue *rxq) { uint16_t len, i; if (!rxq) return; - len = rxq->nb_rx_desc + AVF_RX_MAX_BURST; + len = rxq->nb_rx_desc + IAVF_RX_MAX_BURST; - for (i = 0; i < len * sizeof(union avf_rx_desc); i++) + for (i = 0; i < len * sizeof(union iavf_rx_desc); i++) ((volatile char *)rxq->rx_ring)[i] = 0; memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf)); - for (i = 0; i < AVF_RX_MAX_BURST; i++) + for (i = 0; i < IAVF_RX_MAX_BURST; i++) rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf; /* for rx bulk */ @@ -171,9 +171,9 @@ reset_rx_queue(struct avf_rx_queue *rxq) } static inline void -reset_tx_queue(struct avf_tx_queue *txq) +reset_tx_queue(struct iavf_tx_queue *txq) { - struct avf_tx_entry *txe; + struct iavf_tx_entry *txe; uint16_t i, prev, size; if (!txq) { @@ -182,14 +182,14 @@ reset_tx_queue(struct avf_tx_queue *txq) } txe = txq->sw_ring; - size = sizeof(struct avf_tx_desc) * txq->nb_tx_desc; + size = sizeof(struct iavf_tx_desc) * txq->nb_tx_desc; for (i = 0; i < size; i++) ((volatile char *)txq->tx_ring)[i] = 0; prev = (uint16_t)(txq->nb_tx_desc - 1); for (i = 0; i < txq->nb_tx_desc; i++) { txq->tx_ring[i].cmd_type_offset_bsz = - rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE); + rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE); txe[i].mbuf = NULL; txe[i].last_id = i; txe[prev].next_id = i; @@ -207,9 +207,9 @@ reset_tx_queue(struct avf_tx_queue *txq) } static int -alloc_rxq_mbufs(struct avf_rx_queue *rxq) +alloc_rxq_mbufs(struct iavf_rx_queue *rxq) { - volatile union avf_rx_desc *rxd; + volatile union iavf_rx_desc *rxd; struct rte_mbuf *mbuf = NULL; uint64_t dma_addr; uint16_t i; @@ -233,7 +233,7 @@ alloc_rxq_mbufs(struct avf_rx_queue *rxq) rxd = &rxq->rx_ring[i]; rxd->read.pkt_addr = dma_addr; rxd->read.hdr_addr = 0; -#ifndef RTE_LIBRTE_AVF_16BYTE_RX_DESC +#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC rxd->read.rsvd1 = 0; rxd->read.rsvd2 = 
0; #endif @@ -245,7 +245,7 @@ alloc_rxq_mbufs(struct avf_rx_queue *rxq) } static inline void -release_rxq_mbufs(struct avf_rx_queue *rxq) +release_rxq_mbufs(struct iavf_rx_queue *rxq) { uint16_t i; @@ -272,7 +272,7 @@ release_rxq_mbufs(struct avf_rx_queue *rxq) } static inline void -release_txq_mbufs(struct avf_tx_queue *txq) +release_txq_mbufs(struct iavf_tx_queue *txq) { uint16_t i; @@ -289,24 +289,24 @@ release_txq_mbufs(struct avf_tx_queue *txq) } } -static const struct avf_rxq_ops def_rxq_ops = { +static const struct iavf_rxq_ops def_rxq_ops = { .release_mbufs = release_rxq_mbufs, }; -static const struct avf_txq_ops def_txq_ops = { +static const struct iavf_txq_ops def_txq_ops = { .release_mbufs = release_txq_mbufs, }; int -avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, +iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct avf_adapter *ad = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_rx_queue *rxq; + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_adapter *ad = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_rx_queue *rxq; const struct rte_memzone *mz; uint32_t ring_size; uint16_t len; @@ -314,9 +314,9 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, PMD_INIT_FUNC_TRACE(); - if (nb_desc % AVF_ALIGN_RING_DESC != 0 || - nb_desc > AVF_MAX_RING_DESC || - nb_desc < AVF_MIN_RING_DESC) { + if (nb_desc % IAVF_ALIGN_RING_DESC != 0 || + nb_desc > IAVF_MAX_RING_DESC || + nb_desc < IAVF_MIN_RING_DESC) { PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is " "invalid", nb_desc); return -EINVAL; @@ -324,20 +324,20 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Check free threshold */ rx_free_thresh = (rx_conf->rx_free_thresh == 
0) ? - AVF_DEFAULT_RX_FREE_THRESH : + IAVF_DEFAULT_RX_FREE_THRESH : rx_conf->rx_free_thresh; if (check_rx_thresh(nb_desc, rx_free_thresh) != 0) return -EINVAL; /* Free memory if needed */ if (dev->data->rx_queues[queue_idx]) { - avf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]); + iavf_dev_rx_queue_release(dev->data->rx_queues[queue_idx]); dev->data->rx_queues[queue_idx] = NULL; } /* Allocate the rx queue data structure */ - rxq = rte_zmalloc_socket("avf rxq", - sizeof(struct avf_rx_queue), + rxq = rte_zmalloc_socket("iavf rxq", + sizeof(struct iavf_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); if (!rxq) { @@ -356,12 +356,12 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, rxq->rx_hdr_len = 0; len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM; - rxq->rx_buf_len = RTE_ALIGN(len, (1 << AVF_RXQ_CTX_DBUFF_SHIFT)); + rxq->rx_buf_len = RTE_ALIGN(len, (1 << IAVF_RXQ_CTX_DBUFF_SHIFT)); /* Allocate the software ring. */ - len = nb_desc + AVF_RX_MAX_BURST; + len = nb_desc + IAVF_RX_MAX_BURST; rxq->sw_ring = - rte_zmalloc_socket("avf rx sw ring", + rte_zmalloc_socket("iavf rx sw ring", sizeof(struct rte_mbuf *) * len, RTE_CACHE_LINE_SIZE, socket_id); @@ -374,11 +374,11 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Allocate the maximum number of RX ring hardware descriptors with * a little more to support bulk allocation. */ - len = AVF_MAX_RING_DESC + AVF_RX_MAX_BURST; - ring_size = RTE_ALIGN(len * sizeof(union avf_rx_desc), - AVF_DMA_MEM_ALIGN); + len = IAVF_MAX_RING_DESC + IAVF_RX_MAX_BURST; + ring_size = RTE_ALIGN(len * sizeof(union iavf_rx_desc), + IAVF_DMA_MEM_ALIGN); mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, - ring_size, AVF_RING_BASE_ALIGN, + ring_size, IAVF_RING_BASE_ALIGN, socket_id); if (!mz) { PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX"); @@ -389,13 +389,13 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Zero all the descriptors in the ring.
*/ memset(mz->addr, 0, ring_size); rxq->rx_ring_phys_addr = mz->iova; - rxq->rx_ring = (union avf_rx_desc *)mz->addr; + rxq->rx_ring = (union iavf_rx_desc *)mz->addr; rxq->mz = mz; reset_rx_queue(rxq); rxq->q_set = TRUE; dev->data->rx_queues[queue_idx] = rxq; - rxq->qrx_tail = hw->hw_addr + AVF_QRX_TAIL1(rxq->queue_id); + rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id); rxq->ops = &def_rxq_ops; if (check_rx_bulk_allow(rxq) == TRUE) { @@ -411,7 +411,7 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, ad->rx_bulk_alloc_allowed = false; } -#ifdef RTE_LIBRTE_AVF_INC_VECTOR +#ifdef RTE_LIBRTE_IAVF_INC_VECTOR if (check_rx_vec_allow(rxq) == FALSE) ad->rx_vec_allowed = false; #endif @@ -419,14 +419,14 @@ avf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } int -avf_dev_tx_queue_setup(struct rte_eth_dev *dev, +iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct avf_tx_queue *txq; + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_tx_queue *txq; const struct rte_memzone *mz; uint32_t ring_size; uint16_t tx_rs_thresh, tx_free_thresh; @@ -436,9 +436,9 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev, offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; - if (nb_desc % AVF_ALIGN_RING_DESC != 0 || - nb_desc > AVF_MAX_RING_DESC || - nb_desc < AVF_MIN_RING_DESC) { + if (nb_desc % IAVF_ALIGN_RING_DESC != 0 || + nb_desc > IAVF_MAX_RING_DESC || + nb_desc < IAVF_MIN_RING_DESC) { PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is " "invalid", nb_desc); return -EINVAL; @@ -452,13 +452,13 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Free memory if needed. 
*/ if (dev->data->tx_queues[queue_idx]) { - avf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]); + iavf_dev_tx_queue_release(dev->data->tx_queues[queue_idx]); dev->data->tx_queues[queue_idx] = NULL; } /* Allocate the TX queue data structure. */ - txq = rte_zmalloc_socket("avf txq", - sizeof(struct avf_tx_queue), + txq = rte_zmalloc_socket("iavf txq", + sizeof(struct iavf_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); if (!txq) { @@ -477,8 +477,8 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev, /* Allocate software ring */ txq->sw_ring = - rte_zmalloc_socket("avf tx sw ring", - sizeof(struct avf_tx_entry) * nb_desc, + rte_zmalloc_socket("iavf tx sw ring", + sizeof(struct iavf_tx_entry) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); if (!txq->sw_ring) { @@ -488,10 +488,10 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev, } /* Allocate TX hardware ring descriptors. */ - ring_size = sizeof(struct avf_tx_desc) * AVF_MAX_RING_DESC; - ring_size = RTE_ALIGN(ring_size, AVF_DMA_MEM_ALIGN); + ring_size = sizeof(struct iavf_tx_desc) * IAVF_MAX_RING_DESC; + ring_size = RTE_ALIGN(ring_size, IAVF_DMA_MEM_ALIGN); mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, - ring_size, AVF_RING_BASE_ALIGN, + ring_size, IAVF_RING_BASE_ALIGN, socket_id); if (!mz) { PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX"); @@ -500,19 +500,19 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } txq->tx_ring_phys_addr = mz->iova; - txq->tx_ring = (struct avf_tx_desc *)mz->addr; + txq->tx_ring = (struct iavf_tx_desc *)mz->addr; txq->mz = mz; reset_tx_queue(txq); txq->q_set = TRUE; dev->data->tx_queues[queue_idx] = txq; - txq->qtx_tail = hw->hw_addr + AVF_QTX_TAIL1(queue_idx); + txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx); txq->ops = &def_txq_ops; -#ifdef RTE_LIBRTE_AVF_INC_VECTOR +#ifdef RTE_LIBRTE_IAVF_INC_VECTOR if (check_tx_vec_allow(txq) == FALSE) { - struct avf_adapter *ad = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_adapter *ad = 
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); ad->tx_vec_allowed = false; } #endif @@ -521,12 +521,12 @@ avf_dev_tx_queue_setup(struct rte_eth_dev *dev, } int -avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +iavf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct avf_rx_queue *rxq; + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_rx_queue *rxq; int err = 0; PMD_DRV_FUNC_TRACE(); @@ -545,11 +545,11 @@ avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) rte_wmb(); /* Init the RX tail register. */ - AVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); - AVF_WRITE_FLUSH(hw); + IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); + IAVF_WRITE_FLUSH(hw); /* Ready to switch the queue on */ - err = avf_switch_queue(adapter, rx_queue_id, TRUE, TRUE); + err = iavf_switch_queue(adapter, rx_queue_id, TRUE, TRUE); if (err) PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on", rx_queue_id); @@ -561,12 +561,12 @@ avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) } int -avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct avf_tx_queue *txq; + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct iavf_tx_queue *txq; int err = 0; PMD_DRV_FUNC_TRACE(); @@ -577,11 +577,11 @@ avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) txq = 
dev->data->tx_queues[tx_queue_id]; /* Init the TX tail register. */ - AVF_PCI_REG_WRITE(txq->qtx_tail, 0); - AVF_WRITE_FLUSH(hw); + IAVF_PCI_REG_WRITE(txq->qtx_tail, 0); + IAVF_WRITE_FLUSH(hw); /* Ready to switch the queue on */ - err = avf_switch_queue(adapter, tx_queue_id, FALSE, TRUE); + err = iavf_switch_queue(adapter, tx_queue_id, FALSE, TRUE); if (err) PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", @@ -594,11 +594,11 @@ avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) } int -avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +iavf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_rx_queue *rxq; + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_rx_queue *rxq; int err; PMD_DRV_FUNC_TRACE(); @@ -606,7 +606,7 @@ avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; - err = avf_switch_queue(adapter, rx_queue_id, TRUE, FALSE); + err = iavf_switch_queue(adapter, rx_queue_id, TRUE, FALSE); if (err) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", rx_queue_id); @@ -622,11 +622,11 @@ avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) } int -avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_tx_queue *txq; + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_tx_queue *txq; int err; PMD_DRV_FUNC_TRACE(); @@ -634,7 +634,7 @@ avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; - err = avf_switch_queue(adapter, tx_queue_id, FALSE, FALSE); + err =
iavf_switch_queue(adapter, tx_queue_id, FALSE, FALSE); if (err) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", tx_queue_id); @@ -650,9 +650,9 @@ avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) } void -avf_dev_rx_queue_release(void *rxq) +iavf_dev_rx_queue_release(void *rxq) { - struct avf_rx_queue *q = (struct avf_rx_queue *)rxq; + struct iavf_rx_queue *q = (struct iavf_rx_queue *)rxq; if (!q) return; @@ -664,9 +664,9 @@ avf_dev_rx_queue_release(void *rxq) } void -avf_dev_tx_queue_release(void *txq) +iavf_dev_tx_queue_release(void *txq) { - struct avf_tx_queue *q = (struct avf_tx_queue *)txq; + struct iavf_tx_queue *q = (struct iavf_tx_queue *)txq; if (!q) return; @@ -678,16 +678,16 @@ avf_dev_tx_queue_release(void *txq) } void -avf_stop_queues(struct rte_eth_dev *dev) +iavf_stop_queues(struct rte_eth_dev *dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_rx_queue *rxq; - struct avf_tx_queue *txq; + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_rx_queue *rxq; + struct iavf_tx_queue *txq; int ret, i; /* Stop All queues */ - ret = avf_disable_queues(adapter); + ret = iavf_disable_queues(adapter); if (ret) PMD_DRV_LOG(WARNING, "Fail to stop queues"); @@ -710,10 +710,10 @@ avf_stop_queues(struct rte_eth_dev *dev) } static inline void -avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp) +iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp) { if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) & - (1 << AVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) { + (1 << IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) { mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED; mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1); @@ -724,29 +724,29 @@ avf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union avf_rx_desc *rxdp) /* Translate the rx descriptor status and error fields to pkt flags */ static inline 
uint64_t -avf_rxd_to_pkt_flags(uint64_t qword) +iavf_rxd_to_pkt_flags(uint64_t qword) { uint64_t flags; - uint64_t error_bits = (qword >> AVF_RXD_QW1_ERROR_SHIFT); + uint64_t error_bits = (qword >> IAVF_RXD_QW1_ERROR_SHIFT); -#define AVF_RX_ERR_BITS 0x3f +#define IAVF_RX_ERR_BITS 0x3f /* Check if RSS_HASH */ - flags = (((qword >> AVF_RX_DESC_STATUS_FLTSTAT_SHIFT) & - AVF_RX_DESC_FLTSTAT_RSS_HASH) == - AVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0; + flags = (((qword >> IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT) & + IAVF_RX_DESC_FLTSTAT_RSS_HASH) == + IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0; - if (likely((error_bits & AVF_RX_ERR_BITS) == 0)) { + if (likely((error_bits & IAVF_RX_ERR_BITS) == 0)) { flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD); return flags; } - if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_IPE_SHIFT))) + if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_IPE_SHIFT))) flags |= PKT_RX_IP_CKSUM_BAD; else flags |= PKT_RX_IP_CKSUM_GOOD; - if (unlikely(error_bits & (1 << AVF_RX_DESC_ERROR_L4E_SHIFT))) + if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_L4E_SHIFT))) flags |= PKT_RX_L4_CKSUM_BAD; else flags |= PKT_RX_L4_CKSUM_GOOD; @@ -758,12 +758,12 @@ avf_rxd_to_pkt_flags(uint64_t qword) /* implement recv_pkts */ uint16_t -avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +iavf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { - volatile union avf_rx_desc *rx_ring; - volatile union avf_rx_desc *rxdp; - struct avf_rx_queue *rxq; - union avf_rx_desc rxd; + volatile union iavf_rx_desc *rx_ring; + volatile union iavf_rx_desc *rxdp; + struct iavf_rx_queue *rxq; + union iavf_rx_desc rxd; struct rte_mbuf *rxe; struct rte_eth_dev *dev; struct rte_mbuf *rxm; @@ -804,13 +804,13 @@ avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); - rx_status = (qword1 & 
AVF_RXD_QW1_STATUS_MASK) >> - AVF_RXD_QW1_STATUS_SHIFT; + rx_status = (qword1 & IAVF_RXD_QW1_STATUS_MASK) >> + IAVF_RXD_QW1_STATUS_SHIFT; /* Check the DD bit first */ - if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT))) + if (!(rx_status & (1 << IAVF_RX_DESC_STATUS_DD_SHIFT))) break; - AVF_DUMP_RX_DESC(rxq, rxdp, rx_id); + IAVF_DUMP_RX_DESC(rxq, rxdp, rx_id); nmb = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!nmb)) { @@ -846,8 +846,8 @@ avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxdp->read.hdr_addr = 0; rxdp->read.pkt_addr = dma_addr; - rx_packet_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >> - AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len; + rx_packet_len = ((qword1 & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >> + IAVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len; rxm->data_off = RTE_PKTMBUF_HEADROOM; rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM)); @@ -857,11 +857,11 @@ avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxm->data_len = rx_packet_len; rxm->port = rxq->port_id; rxm->ol_flags = 0; - avf_rxd_to_vlan_tci(rxm, &rxd); - pkt_flags = avf_rxd_to_pkt_flags(qword1); + iavf_rxd_to_vlan_tci(rxm, &rxd); + pkt_flags = iavf_rxd_to_pkt_flags(qword1); rxm->packet_type = ptype_tbl[(uint8_t)((qword1 & - AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)]; + IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)]; if (pkt_flags & PKT_RX_RSS_HASH) rxm->hash.rss = @@ -886,7 +886,7 @@ avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rx_id, nb_hold, nb_rx); rx_id = (uint16_t)((rx_id == 0) ? 
(rxq->nb_rx_desc - 1) : (rx_id - 1)); - AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id); + IAVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id); nb_hold = 0; } rxq->nb_rx_hold = nb_hold; @@ -896,11 +896,11 @@ avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) /* implement recv_scattered_pkts */ uint16_t -avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, +iavf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { - struct avf_rx_queue *rxq = rx_queue; - union avf_rx_desc rxd; + struct iavf_rx_queue *rxq = rx_queue; + union iavf_rx_desc rxd; struct rte_mbuf *rxe; struct rte_mbuf *first_seg = rxq->pkt_first_seg; struct rte_mbuf *last_seg = rxq->pkt_last_seg; @@ -913,8 +913,8 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint64_t dma_addr; uint64_t pkt_flags; - volatile union avf_rx_desc *rx_ring = rxq->rx_ring; - volatile union avf_rx_desc *rxdp; + volatile union iavf_rx_desc *rx_ring = rxq->rx_ring; + volatile union iavf_rx_desc *rxdp; static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = { /* [0] reserved */ [1] = RTE_PTYPE_L2_ETHER, @@ -938,13 +938,13 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, while (nb_rx < nb_pkts) { rxdp = &rx_ring[rx_id]; qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); - rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >> - AVF_RXD_QW1_STATUS_SHIFT; + rx_status = (qword1 & IAVF_RXD_QW1_STATUS_MASK) >> + IAVF_RXD_QW1_STATUS_SHIFT; /* Check the DD bit */ - if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT))) + if (!(rx_status & (1 << IAVF_RX_DESC_STATUS_DD_SHIFT))) break; - AVF_DUMP_RX_DESC(rxq, rxdp, rx_id); + IAVF_DUMP_RX_DESC(rxq, rxdp, rx_id); nmb = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!nmb)) { @@ -982,8 +982,8 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, /* Set data buffer address and data length of the mbuf */ rxdp->read.hdr_addr = 0; rxdp->read.pkt_addr = dma_addr; - rx_packet_len = 
(qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >> - AVF_RXD_QW1_LENGTH_PBUF_SHIFT; + rx_packet_len = (qword1 & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >> + IAVF_RXD_QW1_LENGTH_PBUF_SHIFT; rxm->data_len = rx_packet_len; rxm->data_off = RTE_PKTMBUF_HEADROOM; @@ -1009,7 +1009,7 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * update the pointer to the last mbuf of the current scattered * packet and continue to parse the RX ring. */ - if (!(rx_status & (1 << AVF_RX_DESC_STATUS_EOF_SHIFT))) { + if (!(rx_status & (1 << IAVF_RX_DESC_STATUS_EOF_SHIFT))) { last_seg = rxm; continue; } @@ -1040,11 +1040,11 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, first_seg->port = rxq->port_id; first_seg->ol_flags = 0; - avf_rxd_to_vlan_tci(first_seg, &rxd); - pkt_flags = avf_rxd_to_pkt_flags(qword1); + iavf_rxd_to_vlan_tci(first_seg, &rxd); + pkt_flags = iavf_rxd_to_pkt_flags(qword1); first_seg->packet_type = ptype_tbl[(uint8_t)((qword1 & - AVF_RXD_QW1_PTYPE_MASK) >> AVF_RXD_QW1_PTYPE_SHIFT)]; + IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)]; if (pkt_flags & PKT_RX_RSS_HASH) first_seg->hash.rss = @@ -1079,7 +1079,7 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rx_id, nb_hold, nb_rx); rx_id = (uint16_t)(rx_id == 0 ? 
(rxq->nb_rx_desc - 1) : (rx_id - 1)); - AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id); + IAVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id); nb_hold = 0; } rxq->nb_rx_hold = nb_hold; @@ -1087,17 +1087,17 @@ avf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_rx; } -#define AVF_LOOK_AHEAD 8 +#define IAVF_LOOK_AHEAD 8 static inline int -avf_rx_scan_hw_ring(struct avf_rx_queue *rxq) +iavf_rx_scan_hw_ring(struct iavf_rx_queue *rxq) { - volatile union avf_rx_desc *rxdp; + volatile union iavf_rx_desc *rxdp; struct rte_mbuf **rxep; struct rte_mbuf *mb; uint16_t pkt_len; uint64_t qword1; uint32_t rx_status; - int32_t s[AVF_LOOK_AHEAD], nb_dd; + int32_t s[IAVF_LOOK_AHEAD], nb_dd; int32_t i, j, nb_rx = 0; uint64_t pkt_flags; static const uint32_t ptype_tbl[UINT8_MAX + 1] __rte_cache_aligned = { @@ -1124,53 +1124,53 @@ avf_rx_scan_hw_ring(struct avf_rx_queue *rxq) rxep = &rxq->sw_ring[rxq->rx_tail]; qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); - rx_status = (qword1 & AVF_RXD_QW1_STATUS_MASK) >> - AVF_RXD_QW1_STATUS_SHIFT; + rx_status = (qword1 & IAVF_RXD_QW1_STATUS_MASK) >> + IAVF_RXD_QW1_STATUS_SHIFT; /* Make sure there is at least 1 packet to receive */ - if (!(rx_status & (1 << AVF_RX_DESC_STATUS_DD_SHIFT))) + if (!(rx_status & (1 << IAVF_RX_DESC_STATUS_DD_SHIFT))) return 0; /* Scan LOOK_AHEAD descriptors at a time to determine which * descriptors reference packets that are ready to be received. 
*/ - for (i = 0; i < AVF_RX_MAX_BURST; i += AVF_LOOK_AHEAD, - rxdp += AVF_LOOK_AHEAD, rxep += AVF_LOOK_AHEAD) { + for (i = 0; i < IAVF_RX_MAX_BURST; i += IAVF_LOOK_AHEAD, + rxdp += IAVF_LOOK_AHEAD, rxep += IAVF_LOOK_AHEAD) { /* Read desc statuses backwards to avoid race condition */ - for (j = AVF_LOOK_AHEAD - 1; j >= 0; j--) { + for (j = IAVF_LOOK_AHEAD - 1; j >= 0; j--) { qword1 = rte_le_to_cpu_64( rxdp[j].wb.qword1.status_error_len); - s[j] = (qword1 & AVF_RXD_QW1_STATUS_MASK) >> - AVF_RXD_QW1_STATUS_SHIFT; + s[j] = (qword1 & IAVF_RXD_QW1_STATUS_MASK) >> + IAVF_RXD_QW1_STATUS_SHIFT; } rte_smp_rmb(); /* Compute how many status bits were set */ - for (j = 0, nb_dd = 0; j < AVF_LOOK_AHEAD; j++) - nb_dd += s[j] & (1 << AVF_RX_DESC_STATUS_DD_SHIFT); + for (j = 0, nb_dd = 0; j < IAVF_LOOK_AHEAD; j++) + nb_dd += s[j] & (1 << IAVF_RX_DESC_STATUS_DD_SHIFT); nb_rx += nb_dd; /* Translate descriptor info to mbuf parameters */ for (j = 0; j < nb_dd; j++) { - AVF_DUMP_RX_DESC(rxq, &rxdp[j], - rxq->rx_tail + i * AVF_LOOK_AHEAD + j); + IAVF_DUMP_RX_DESC(rxq, &rxdp[j], + rxq->rx_tail + i * IAVF_LOOK_AHEAD + j); mb = rxep[j]; qword1 = rte_le_to_cpu_64 (rxdp[j].wb.qword1.status_error_len); - pkt_len = ((qword1 & AVF_RXD_QW1_LENGTH_PBUF_MASK) >> - AVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len; + pkt_len = ((qword1 & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >> + IAVF_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; mb->ol_flags = 0; - avf_rxd_to_vlan_tci(mb, &rxdp[j]); - pkt_flags = avf_rxd_to_pkt_flags(qword1); + iavf_rxd_to_vlan_tci(mb, &rxdp[j]); + pkt_flags = iavf_rxd_to_pkt_flags(qword1); mb->packet_type = ptype_tbl[(uint8_t)((qword1 & - AVF_RXD_QW1_PTYPE_MASK) >> - AVF_RXD_QW1_PTYPE_SHIFT)]; + IAVF_RXD_QW1_PTYPE_MASK) >> + IAVF_RXD_QW1_PTYPE_SHIFT)]; if (pkt_flags & PKT_RX_RSS_HASH) mb->hash.rss = rte_le_to_cpu_32( @@ -1179,10 +1179,10 @@ avf_rx_scan_hw_ring(struct avf_rx_queue *rxq) mb->ol_flags |= pkt_flags; } - for (j = 0; j < AVF_LOOK_AHEAD; 
j++) + for (j = 0; j < IAVF_LOOK_AHEAD; j++) rxq->rx_stage[i + j] = rxep[j]; - if (nb_dd != AVF_LOOK_AHEAD) + if (nb_dd != IAVF_LOOK_AHEAD) break; } @@ -1194,7 +1194,7 @@ avf_rx_scan_hw_ring(struct avf_rx_queue *rxq) } static inline uint16_t -avf_rx_fill_from_stage(struct avf_rx_queue *rxq, +iavf_rx_fill_from_stage(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { @@ -1213,9 +1213,9 @@ avf_rx_fill_from_stage(struct avf_rx_queue *rxq, } static inline int -avf_rx_alloc_bufs(struct avf_rx_queue *rxq) +iavf_rx_alloc_bufs(struct iavf_rx_queue *rxq) { - volatile union avf_rx_desc *rxdp; + volatile union iavf_rx_desc *rxdp; struct rte_mbuf **rxep; struct rte_mbuf *mb; uint16_t alloc_idx, i; @@ -1252,7 +1252,7 @@ avf_rx_alloc_bufs(struct avf_rx_queue *rxq) /* Update rx tail register */ rte_wmb(); - AVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger); + IAVF_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rxq->rx_free_trigger); rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh); @@ -1265,22 +1265,22 @@ avf_rx_alloc_bufs(struct avf_rx_queue *rxq) static inline uint16_t rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { - struct avf_rx_queue *rxq = (struct avf_rx_queue *)rx_queue; + struct iavf_rx_queue *rxq = (struct iavf_rx_queue *)rx_queue; uint16_t nb_rx = 0; if (!nb_pkts) return 0; if (rxq->rx_nb_avail) - return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); + return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); - nb_rx = (uint16_t)avf_rx_scan_hw_ring(rxq); + nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq); rxq->rx_next_avail = 0; rxq->rx_nb_avail = nb_rx; rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx); if (rxq->rx_tail > rxq->rx_free_trigger) { - if (avf_rx_alloc_bufs(rxq) != 0) { + if (iavf_rx_alloc_bufs(rxq) != 0) { uint16_t i, j; /* TODO: count rx_mbuf_alloc_failed here */ @@ -1302,13 +1302,13 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxq->rx_tail, nb_rx); if 
(rxq->rx_nb_avail) - return avf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); + return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); return 0; } static uint16_t -avf_recv_pkts_bulk_alloc(void *rx_queue, +iavf_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { @@ -1317,11 +1317,11 @@ avf_recv_pkts_bulk_alloc(void *rx_queue, if (unlikely(nb_pkts == 0)) return 0; - if (likely(nb_pkts <= AVF_RX_MAX_BURST)) + if (likely(nb_pkts <= IAVF_RX_MAX_BURST)) return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts); while (nb_pkts) { - n = RTE_MIN(nb_pkts, AVF_RX_MAX_BURST); + n = RTE_MIN(nb_pkts, IAVF_RX_MAX_BURST); count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n); nb_rx = (uint16_t)(nb_rx + count); nb_pkts = (uint16_t)(nb_pkts - count); @@ -1333,15 +1333,15 @@ avf_recv_pkts_bulk_alloc(void *rx_queue, } static inline int -avf_xmit_cleanup(struct avf_tx_queue *txq) +iavf_xmit_cleanup(struct iavf_tx_queue *txq) { - struct avf_tx_entry *sw_ring = txq->sw_ring; + struct iavf_tx_entry *sw_ring = txq->sw_ring; uint16_t last_desc_cleaned = txq->last_desc_cleaned; uint16_t nb_tx_desc = txq->nb_tx_desc; uint16_t desc_to_clean_to; uint16_t nb_tx_to_clean; - volatile struct avf_tx_desc *txd = txq->tx_ring; + volatile struct iavf_tx_desc *txd = txq->tx_ring; desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh); if (desc_to_clean_to >= nb_tx_desc) @@ -1349,8 +1349,8 @@ avf_xmit_cleanup(struct avf_tx_queue *txq) desc_to_clean_to = sw_ring[desc_to_clean_to].last_id; if ((txd[desc_to_clean_to].cmd_type_offset_bsz & - rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) != - rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) { + rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != + rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) { PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done " "(port=%d queue=%d)", desc_to_clean_to, txq->port_id, txq->queue_id); @@ -1374,7 +1374,7 @@ avf_xmit_cleanup(struct avf_tx_queue *txq) /* Check if the context descriptor is needed for TX 
offloading */ static inline uint16_t -avf_calc_context_desc(uint64_t flags) +iavf_calc_context_desc(uint64_t flags) { static uint64_t mask = PKT_TX_TCP_SEG; @@ -1382,53 +1382,53 @@ avf_calc_context_desc(uint64_t flags) } static inline void -avf_txd_enable_checksum(uint64_t ol_flags, +iavf_txd_enable_checksum(uint64_t ol_flags, uint32_t *td_cmd, uint32_t *td_offset, - union avf_tx_offload tx_offload) + union iavf_tx_offload tx_offload) { /* Set MACLEN */ *td_offset |= (tx_offload.l2_len >> 1) << - AVF_TX_DESC_LENGTH_MACLEN_SHIFT; + IAVF_TX_DESC_LENGTH_MACLEN_SHIFT; /* Enable L3 checksum offloads */ if (ol_flags & PKT_TX_IP_CKSUM) { - *td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4_CSUM; + *td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM; *td_offset |= (tx_offload.l3_len >> 2) << - AVF_TX_DESC_LENGTH_IPLEN_SHIFT; + IAVF_TX_DESC_LENGTH_IPLEN_SHIFT; } else if (ol_flags & PKT_TX_IPV4) { - *td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV4; + *td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4; *td_offset |= (tx_offload.l3_len >> 2) << - AVF_TX_DESC_LENGTH_IPLEN_SHIFT; + IAVF_TX_DESC_LENGTH_IPLEN_SHIFT; } else if (ol_flags & PKT_TX_IPV6) { - *td_cmd |= AVF_TX_DESC_CMD_IIPT_IPV6; + *td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6; *td_offset |= (tx_offload.l3_len >> 2) << - AVF_TX_DESC_LENGTH_IPLEN_SHIFT; + IAVF_TX_DESC_LENGTH_IPLEN_SHIFT; } if (ol_flags & PKT_TX_TCP_SEG) { - *td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP; + *td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP; *td_offset |= (tx_offload.l4_len >> 2) << - AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; return; } /* Enable L4 checksum offloads */ switch (ol_flags & PKT_TX_L4_MASK) { case PKT_TX_TCP_CKSUM: - *td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_TCP; + *td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP; *td_offset |= (sizeof(struct tcp_hdr) >> 2) << - AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; break; case PKT_TX_SCTP_CKSUM: - *td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_SCTP; + *td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP; *td_offset |= 
(sizeof(struct sctp_hdr) >> 2) << - AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; break; case PKT_TX_UDP_CKSUM: - *td_cmd |= AVF_TX_DESC_CMD_L4T_EOFT_UDP; + *td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP; *td_offset |= (sizeof(struct udp_hdr) >> 2) << - AVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; break; default: break; @@ -1439,7 +1439,7 @@ avf_txd_enable_checksum(uint64_t ol_flags, * support IP -> L4 and IP -> IP -> L4 */ static inline uint64_t -avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload) +iavf_set_tso_ctx(struct rte_mbuf *mbuf, union iavf_tx_offload tx_offload) { uint64_t ctx_desc = 0; uint32_t cd_cmd, hdr_len, cd_tso_len; @@ -1456,39 +1456,39 @@ avf_set_tso_ctx(struct rte_mbuf *mbuf, union avf_tx_offload tx_offload) tx_offload.l3_len + tx_offload.l4_len; - cd_cmd = AVF_TX_CTX_DESC_TSO; + cd_cmd = IAVF_TX_CTX_DESC_TSO; cd_tso_len = mbuf->pkt_len - hdr_len; - ctx_desc |= ((uint64_t)cd_cmd << AVF_TXD_CTX_QW1_CMD_SHIFT) | - ((uint64_t)cd_tso_len << AVF_TXD_CTX_QW1_TSO_LEN_SHIFT) | - ((uint64_t)mbuf->tso_segsz << AVF_TXD_CTX_QW1_MSS_SHIFT); + ctx_desc |= ((uint64_t)cd_cmd << IAVF_TXD_CTX_QW1_CMD_SHIFT) | + ((uint64_t)cd_tso_len << IAVF_TXD_CTX_QW1_TSO_LEN_SHIFT) | + ((uint64_t)mbuf->tso_segsz << IAVF_TXD_CTX_QW1_MSS_SHIFT); return ctx_desc; } /* Construct the tx flags */ static inline uint64_t -avf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size, +iavf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size, uint32_t td_tag) { - return rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DATA | - ((uint64_t)td_cmd << AVF_TXD_QW1_CMD_SHIFT) | + return rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DATA | + ((uint64_t)td_cmd << IAVF_TXD_QW1_CMD_SHIFT) | ((uint64_t)td_offset << - AVF_TXD_QW1_OFFSET_SHIFT) | + IAVF_TXD_QW1_OFFSET_SHIFT) | ((uint64_t)size << - AVF_TXD_QW1_TX_BUF_SZ_SHIFT) | + IAVF_TXD_QW1_TX_BUF_SZ_SHIFT) | ((uint64_t)td_tag << - AVF_TXD_QW1_L2TAG1_SHIFT)); + 
IAVF_TXD_QW1_L2TAG1_SHIFT)); } /* TX function */ uint16_t -avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { - volatile struct avf_tx_desc *txd; - volatile struct avf_tx_desc *txr; - struct avf_tx_queue *txq; - struct avf_tx_entry *sw_ring; - struct avf_tx_entry *txe, *txn; + volatile struct iavf_tx_desc *txd; + volatile struct iavf_tx_desc *txr; + struct iavf_tx_queue *txq; + struct iavf_tx_entry *sw_ring; + struct iavf_tx_entry *txe, *txn; struct rte_mbuf *tx_pkt; struct rte_mbuf *m_seg; uint16_t tx_id; @@ -1502,7 +1502,7 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) uint16_t tx_last; uint16_t slen; uint64_t buf_dma_addr; - union avf_tx_offload tx_offload = {0}; + union iavf_tx_offload tx_offload = {0}; txq = tx_queue; sw_ring = txq->sw_ring; @@ -1512,7 +1512,7 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Check if the descriptor ring needs to be cleaned. */ if (txq->nb_free < txq->free_thresh) - avf_xmit_cleanup(txq); + iavf_xmit_cleanup(txq); for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { td_cmd = 0; @@ -1529,7 +1529,7 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) tx_offload.tso_segsz = tx_pkt->tso_segsz; /* Calculate the number of context descriptors needed. 
*/ - nb_ctx = avf_calc_context_desc(ol_flags); + nb_ctx = iavf_calc_context_desc(ol_flags); /* The number of descriptors that must be allocated for * a packet equals to the number of the segments of that @@ -1547,14 +1547,14 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) txq->port_id, txq->queue_id, tx_id, tx_last); if (nb_used > txq->nb_free) { - if (avf_xmit_cleanup(txq)) { + if (iavf_xmit_cleanup(txq)) { if (nb_tx == 0) return 0; goto end_of_tx; } if (unlikely(nb_used > txq->rs_thresh)) { while (nb_used > txq->nb_free) { - if (avf_xmit_cleanup(txq)) { + if (iavf_xmit_cleanup(txq)) { if (nb_tx == 0) return 0; goto end_of_tx; @@ -1565,7 +1565,7 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Descriptor based VLAN insertion */ if (ol_flags & PKT_TX_VLAN_PKT) { - td_cmd |= AVF_TX_DESC_CMD_IL2TAG1; + td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1; td_tag = tx_pkt->vlan_tci; } @@ -1575,14 +1575,14 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) td_cmd |= 0x04; /* Enable checksum offloading */ - if (ol_flags & AVF_TX_CKSUM_OFFLOAD_MASK) - avf_txd_enable_checksum(ol_flags, &td_cmd, + if (ol_flags & IAVF_TX_CKSUM_OFFLOAD_MASK) + iavf_txd_enable_checksum(ol_flags, &td_cmd, &td_offset, tx_offload); if (nb_ctx) { /* Setup TX context descriptor if required */ uint64_t cd_type_cmd_tso_mss = - AVF_TX_DESC_DTYPE_CONTEXT; + IAVF_TX_DESC_DTYPE_CONTEXT; txn = &sw_ring[txe->next_id]; RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf); @@ -1594,9 +1594,9 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* TSO enabled */ if (ol_flags & PKT_TX_TCP_SEG) cd_type_cmd_tso_mss |= - avf_set_tso_ctx(tx_pkt, tx_offload); + iavf_set_tso_ctx(tx_pkt, tx_offload); - AVF_DUMP_TX_DESC(txq, &txr[tx_id], tx_id); + IAVF_DUMP_TX_DESC(txq, &txr[tx_id], tx_id); txe->last_id = tx_last; tx_id = txe->next_id; txe = txn; @@ -1615,12 +1615,12 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t 
nb_pkts) slen = m_seg->data_len; buf_dma_addr = rte_mbuf_data_iova(m_seg); txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr); - txd->cmd_type_offset_bsz = avf_build_ctob(td_cmd, + txd->cmd_type_offset_bsz = iavf_build_ctob(td_cmd, td_offset, slen, td_tag); - AVF_DUMP_TX_DESC(txq, txd, tx_id); + IAVF_DUMP_TX_DESC(txq, txd, tx_id); txe->last_id = tx_last; tx_id = txe->next_id; txe = txn; @@ -1628,7 +1628,7 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) } while (m_seg); /* The last packet data descriptor needs End Of Packet (EOP) */ - td_cmd |= AVF_TX_DESC_CMD_EOP; + td_cmd |= IAVF_TX_DESC_CMD_EOP; txq->nb_used = (uint16_t)(txq->nb_used + nb_used); txq->nb_free = (uint16_t)(txq->nb_free - nb_used); @@ -1637,7 +1637,7 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) "%4u (port=%d queue=%d)", tx_last, txq->port_id, txq->queue_id); - td_cmd |= AVF_TX_DESC_CMD_RS; + td_cmd |= IAVF_TX_DESC_CMD_RS; /* Update txq RS bit counters */ txq->nb_used = 0; @@ -1645,8 +1645,8 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) txd->cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)td_cmd) << - AVF_TXD_QW1_CMD_SHIFT); - AVF_DUMP_TX_DESC(txq, txd, tx_id); + IAVF_TXD_QW1_CMD_SHIFT); + IAVF_DUMP_TX_DESC(txq, txd, tx_id); } end_of_tx: @@ -1655,24 +1655,24 @@ avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u", txq->port_id, txq->queue_id, tx_id, nb_tx); - AVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id); + IAVF_PCI_REG_WRITE_RELAXED(txq->qtx_tail, tx_id); txq->tx_tail = tx_id; return nb_tx; } static uint16_t -avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, +iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { uint16_t nb_tx = 0; - struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue; + struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue; while (nb_pkts) { 
uint16_t ret, num; num = (uint16_t)RTE_MIN(nb_pkts, txq->rs_thresh); - ret = avf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num); + ret = iavf_xmit_fixed_burst_vec(tx_queue, &tx_pkts[nb_tx], num); nb_tx += ret; nb_pkts -= ret; if (ret < num) @@ -1684,7 +1684,7 @@ avf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, /* TX prep functions */ uint16_t -avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, +iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i, ret; @@ -1695,20 +1695,20 @@ avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, m = tx_pkts[i]; ol_flags = m->ol_flags; - /* Check condition for nb_segs > AVF_TX_MAX_MTU_SEG. */ + /* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */ if (!(ol_flags & PKT_TX_TCP_SEG)) { - if (m->nb_segs > AVF_TX_MAX_MTU_SEG) { + if (m->nb_segs > IAVF_TX_MAX_MTU_SEG) { rte_errno = -EINVAL; return i; } - } else if ((m->tso_segsz < AVF_MIN_TSO_MSS) || - (m->tso_segsz > AVF_MAX_TSO_MSS)) { + } else if ((m->tso_segsz < IAVF_MIN_TSO_MSS) || + (m->tso_segsz > IAVF_MAX_TSO_MSS)) { /* MSS outside the range are considered malicious */ rte_errno = -EINVAL; return i; } - if (ol_flags & AVF_TX_OFFLOAD_NOTSUP_MASK) { + if (ol_flags & IAVF_TX_OFFLOAD_NOTSUP_MASK) { rte_errno = -ENOTSUP; return i; } @@ -1732,77 +1732,77 @@ avf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts, /* choose rx function*/ void -avf_set_rx_function(struct rte_eth_dev *dev) +iavf_set_rx_function(struct rte_eth_dev *dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_rx_queue *rxq; + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_rx_queue *rxq; int i; if (adapter->rx_vec_allowed) { if (dev->data->scattered_rx) { PMD_DRV_LOG(DEBUG, "Using Vector Scattered Rx callback" " (port=%d).", dev->data->port_id); - dev->rx_pkt_burst = avf_recv_scattered_pkts_vec; + 
dev->rx_pkt_burst = iavf_recv_scattered_pkts_vec; } else { PMD_DRV_LOG(DEBUG, "Using Vector Rx callback" " (port=%d).", dev->data->port_id); - dev->rx_pkt_burst = avf_recv_pkts_vec; + dev->rx_pkt_burst = iavf_recv_pkts_vec; } for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq = dev->data->rx_queues[i]; if (!rxq) continue; - avf_rxq_vec_setup(rxq); + iavf_rxq_vec_setup(rxq); } } else if (dev->data->scattered_rx) { PMD_DRV_LOG(DEBUG, "Using a Scattered Rx callback (port=%d).", dev->data->port_id); - dev->rx_pkt_burst = avf_recv_scattered_pkts; + dev->rx_pkt_burst = iavf_recv_scattered_pkts; } else if (adapter->rx_bulk_alloc_allowed) { PMD_DRV_LOG(DEBUG, "Using bulk Rx callback (port=%d).", dev->data->port_id); - dev->rx_pkt_burst = avf_recv_pkts_bulk_alloc; + dev->rx_pkt_burst = iavf_recv_pkts_bulk_alloc; } else { PMD_DRV_LOG(DEBUG, "Using Basic Rx callback (port=%d).", dev->data->port_id); - dev->rx_pkt_burst = avf_recv_pkts; + dev->rx_pkt_burst = iavf_recv_pkts; } } /* choose tx function*/ void -avf_set_tx_function(struct rte_eth_dev *dev) +iavf_set_tx_function(struct rte_eth_dev *dev) { - struct avf_adapter *adapter = - AVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - struct avf_tx_queue *txq; + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_tx_queue *txq; int i; if (adapter->tx_vec_allowed) { PMD_DRV_LOG(DEBUG, "Using Vector Tx callback (port=%d).", dev->data->port_id); - dev->tx_pkt_burst = avf_xmit_pkts_vec; + dev->tx_pkt_burst = iavf_xmit_pkts_vec; dev->tx_pkt_prepare = NULL; for (i = 0; i < dev->data->nb_tx_queues; i++) { txq = dev->data->tx_queues[i]; if (!txq) continue; - avf_txq_vec_setup(txq); + iavf_txq_vec_setup(txq); } } else { PMD_DRV_LOG(DEBUG, "Using Basic Tx callback (port=%d).", dev->data->port_id); - dev->tx_pkt_burst = avf_xmit_pkts; - dev->tx_pkt_prepare = avf_prep_pkts; + dev->tx_pkt_burst = iavf_xmit_pkts; + dev->tx_pkt_prepare = iavf_prep_pkts; } } void -avf_dev_rxq_info_get(struct 
rte_eth_dev *dev, uint16_t queue_id, +iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo) { - struct avf_rx_queue *rxq; + struct iavf_rx_queue *rxq; rxq = dev->data->rx_queues[queue_id]; @@ -1816,10 +1816,10 @@ avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, } void -avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, +iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo) { - struct avf_tx_queue *txq; + struct iavf_tx_queue *txq; txq = dev->data->tx_queues[queue_id]; @@ -1833,25 +1833,25 @@ avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, /* Get the number of used descriptors of a rx queue */ uint32_t -avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id) +iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id) { -#define AVF_RXQ_SCAN_INTERVAL 4 - volatile union avf_rx_desc *rxdp; - struct avf_rx_queue *rxq; +#define IAVF_RXQ_SCAN_INTERVAL 4 + volatile union iavf_rx_desc *rxdp; + struct iavf_rx_queue *rxq; uint16_t desc = 0; rxq = dev->data->rx_queues[queue_id]; rxdp = &rxq->rx_ring[rxq->rx_tail]; while ((desc < rxq->nb_rx_desc) && ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) & - AVF_RXD_QW1_STATUS_MASK) >> AVF_RXD_QW1_STATUS_SHIFT) & - (1 << AVF_RX_DESC_STATUS_DD_SHIFT)) { + IAVF_RXD_QW1_STATUS_MASK) >> IAVF_RXD_QW1_STATUS_SHIFT) & + (1 << IAVF_RX_DESC_STATUS_DD_SHIFT)) { /* Check the DD bit of a rx descriptor of each 4 in a group, * to avoid checking too frequently and downgrading performance * too much. 
*/ - desc += AVF_RXQ_SCAN_INTERVAL; - rxdp += AVF_RXQ_SCAN_INTERVAL; + desc += IAVF_RXQ_SCAN_INTERVAL; + rxdp += IAVF_RXQ_SCAN_INTERVAL; if (rxq->rx_tail + desc >= rxq->nb_rx_desc) rxdp = &(rxq->rx_ring[rxq->rx_tail + desc - rxq->nb_rx_desc]); @@ -1861,9 +1861,9 @@ avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id) } int -avf_dev_rx_desc_status(void *rx_queue, uint16_t offset) +iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset) { - struct avf_rx_queue *rxq = rx_queue; + struct iavf_rx_queue *rxq = rx_queue; volatile uint64_t *status; uint64_t mask; uint32_t desc; @@ -1879,8 +1879,8 @@ avf_dev_rx_desc_status(void *rx_queue, uint16_t offset) desc -= rxq->nb_rx_desc; status = &rxq->rx_ring[desc].wb.qword1.status_error_len; - mask = rte_le_to_cpu_64((1ULL << AVF_RX_DESC_STATUS_DD_SHIFT) - << AVF_RXD_QW1_STATUS_SHIFT); + mask = rte_le_to_cpu_64((1ULL << IAVF_RX_DESC_STATUS_DD_SHIFT) + << IAVF_RXD_QW1_STATUS_SHIFT); if (*status & mask) return RTE_ETH_RX_DESC_DONE; @@ -1888,9 +1888,9 @@ avf_dev_rx_desc_status(void *rx_queue, uint16_t offset) } int -avf_dev_tx_desc_status(void *tx_queue, uint16_t offset) +iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset) { - struct avf_tx_queue *txq = tx_queue; + struct iavf_tx_queue *txq = tx_queue; volatile uint64_t *status; uint64_t mask, expect; uint32_t desc; @@ -1909,9 +1909,9 @@ avf_dev_tx_desc_status(void *tx_queue, uint16_t offset) } status = &txq->tx_ring[desc].cmd_type_offset_bsz; - mask = rte_le_to_cpu_64(AVF_TXD_QW1_DTYPE_MASK); + mask = rte_le_to_cpu_64(IAVF_TXD_QW1_DTYPE_MASK); expect = rte_cpu_to_le_64( - AVF_TX_DESC_DTYPE_DESC_DONE << AVF_TXD_QW1_DTYPE_SHIFT); + IAVF_TX_DESC_DTYPE_DESC_DONE << IAVF_TXD_QW1_DTYPE_SHIFT); if ((*status & mask) == expect) return RTE_ETH_TX_DESC_DONE; @@ -1919,7 +1919,7 @@ avf_dev_tx_desc_status(void *tx_queue, uint16_t offset) } __rte_weak uint16_t -avf_recv_pkts_vec(__rte_unused void *rx_queue, +iavf_recv_pkts_vec(__rte_unused void *rx_queue, __rte_unused struct 
rte_mbuf **rx_pkts, __rte_unused uint16_t nb_pkts) { @@ -1927,7 +1927,7 @@ avf_recv_pkts_vec(__rte_unused void *rx_queue, } __rte_weak uint16_t -avf_recv_scattered_pkts_vec(__rte_unused void *rx_queue, +iavf_recv_scattered_pkts_vec(__rte_unused void *rx_queue, __rte_unused struct rte_mbuf **rx_pkts, __rte_unused uint16_t nb_pkts) { @@ -1935,7 +1935,7 @@ avf_recv_scattered_pkts_vec(__rte_unused void *rx_queue, } __rte_weak uint16_t -avf_xmit_fixed_burst_vec(__rte_unused void *tx_queue, +iavf_xmit_fixed_burst_vec(__rte_unused void *tx_queue, __rte_unused struct rte_mbuf **tx_pkts, __rte_unused uint16_t nb_pkts) { @@ -1943,13 +1943,13 @@ avf_xmit_fixed_burst_vec(__rte_unused void *tx_queue, } __rte_weak int -avf_rxq_vec_setup(__rte_unused struct avf_rx_queue *rxq) +iavf_rxq_vec_setup(__rte_unused struct iavf_rx_queue *rxq) { return -1; } __rte_weak int -avf_txq_vec_setup(__rte_unused struct avf_tx_queue *txq) +iavf_txq_vec_setup(__rte_unused struct iavf_tx_queue *txq) { return -1; } diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h index ffc835d44..e821dcae6 100644 --- a/drivers/net/iavf/iavf_rxtx.h +++ b/drivers/net/iavf/iavf_rxtx.h @@ -2,27 +2,27 @@ * Copyright(c) 2017 Intel Corporation */ -#ifndef _AVF_RXTX_H_ -#define _AVF_RXTX_H_ +#ifndef _IAVF_RXTX_H_ +#define _IAVF_RXTX_H_ /* In QLEN must be whole number of 32 descriptors. */ -#define AVF_ALIGN_RING_DESC 32 -#define AVF_MIN_RING_DESC 64 -#define AVF_MAX_RING_DESC 4096 -#define AVF_DMA_MEM_ALIGN 4096 +#define IAVF_ALIGN_RING_DESC 32 +#define IAVF_MIN_RING_DESC 64 +#define IAVF_MAX_RING_DESC 4096 +#define IAVF_DMA_MEM_ALIGN 4096 /* Base address of the HW descriptor ring should be 128B aligned. 
*/ -#define AVF_RING_BASE_ALIGN 128 +#define IAVF_RING_BASE_ALIGN 128 /* used for Rx Bulk Allocate */ -#define AVF_RX_MAX_BURST 32 +#define IAVF_RX_MAX_BURST 32 /* used for Vector PMD */ -#define AVF_VPMD_RX_MAX_BURST 32 -#define AVF_VPMD_TX_MAX_BURST 32 -#define AVF_VPMD_DESCS_PER_LOOP 4 -#define AVF_VPMD_TX_MAX_FREE_BUF 64 +#define IAVF_VPMD_RX_MAX_BURST 32 +#define IAVF_VPMD_TX_MAX_BURST 32 +#define IAVF_VPMD_DESCS_PER_LOOP 4 +#define IAVF_VPMD_TX_MAX_FREE_BUF 64 -#define AVF_NO_VECTOR_FLAGS ( \ +#define IAVF_NO_VECTOR_FLAGS ( \ DEV_TX_OFFLOAD_MULTI_SEGS | \ DEV_TX_OFFLOAD_VLAN_INSERT | \ DEV_TX_OFFLOAD_SCTP_CKSUM | \ @@ -32,17 +32,17 @@ #define DEFAULT_TX_RS_THRESH 32 #define DEFAULT_TX_FREE_THRESH 32 -#define AVF_MIN_TSO_MSS 256 -#define AVF_MAX_TSO_MSS 9668 -#define AVF_TSO_MAX_SEG UINT8_MAX -#define AVF_TX_MAX_MTU_SEG 8 +#define IAVF_MIN_TSO_MSS 256 +#define IAVF_MAX_TSO_MSS 9668 +#define IAVF_TSO_MAX_SEG UINT8_MAX +#define IAVF_TX_MAX_MTU_SEG 8 -#define AVF_TX_CKSUM_OFFLOAD_MASK ( \ +#define IAVF_TX_CKSUM_OFFLOAD_MASK ( \ PKT_TX_IP_CKSUM | \ PKT_TX_L4_MASK | \ PKT_TX_TCP_SEG) -#define AVF_TX_OFFLOAD_MASK ( \ +#define IAVF_TX_OFFLOAD_MASK ( \ PKT_TX_OUTER_IPV6 | \ PKT_TX_OUTER_IPV4 | \ PKT_TX_IPV6 | \ @@ -52,29 +52,29 @@ PKT_TX_L4_MASK | \ PKT_TX_TCP_SEG) -#define AVF_TX_OFFLOAD_NOTSUP_MASK \ - (PKT_TX_OFFLOAD_MASK ^ AVF_TX_OFFLOAD_MASK) +#define IAVF_TX_OFFLOAD_NOTSUP_MASK \ + (PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK) /* HW desc structure, both 16-byte and 32-byte types are supported */ -#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC -#define avf_rx_desc avf_16byte_rx_desc +#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC +#define iavf_rx_desc iavf_16byte_rx_desc #else -#define avf_rx_desc avf_32byte_rx_desc +#define iavf_rx_desc iavf_32byte_rx_desc #endif -struct avf_rxq_ops { - void (*release_mbufs)(struct avf_rx_queue *rxq); +struct iavf_rxq_ops { + void (*release_mbufs)(struct iavf_rx_queue *rxq); }; -struct avf_txq_ops { - void (*release_mbufs)(struct avf_tx_queue 
*txq); +struct iavf_txq_ops { + void (*release_mbufs)(struct iavf_tx_queue *txq); }; /* Structure associated with each Rx queue. */ -struct avf_rx_queue { +struct iavf_rx_queue { struct rte_mempool *mp; /* mbuf pool to populate Rx ring */ const struct rte_memzone *mz; /* memzone for Rx ring */ - volatile union avf_rx_desc *rx_ring; /* Rx ring virtual address */ + volatile union iavf_rx_desc *rx_ring; /* Rx ring virtual address */ uint64_t rx_ring_phys_addr; /* Rx ring DMA address */ struct rte_mbuf **sw_ring; /* address of SW ring */ uint16_t nb_rx_desc; /* ring length */ @@ -95,7 +95,7 @@ struct avf_rx_queue { uint16_t rx_nb_avail; /* number of staged packets ready */ uint16_t rx_next_avail; /* index of next staged packets */ uint16_t rx_free_trigger; /* triggers rx buffer allocation */ - struct rte_mbuf *rx_stage[AVF_RX_MAX_BURST * 2]; /* store mbuf */ + struct rte_mbuf *rx_stage[IAVF_RX_MAX_BURST * 2]; /* store mbuf */ uint16_t port_id; /* device port ID */ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */ @@ -106,21 +106,21 @@ struct avf_rx_queue { bool q_set; /* if rx queue has been configured */ bool rx_deferred_start; /* don't start this queue in dev start */ - const struct avf_rxq_ops *ops; + const struct iavf_rxq_ops *ops; }; -struct avf_tx_entry { +struct iavf_tx_entry { struct rte_mbuf *mbuf; uint16_t next_id; uint16_t last_id; }; /* Structure associated with each TX queue. 
*/ -struct avf_tx_queue { +struct iavf_tx_queue { const struct rte_memzone *mz; /* memzone for Tx ring */ - volatile struct avf_tx_desc *tx_ring; /* Tx ring virtual address */ + volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */ - struct avf_tx_entry *sw_ring; /* address array of SW ring */ + struct iavf_tx_entry *sw_ring; /* address array of SW ring */ uint16_t nb_tx_desc; /* ring length */ uint16_t tx_tail; /* current value of tail */ volatile uint8_t *qtx_tail; /* register address of tail */ @@ -139,11 +139,11 @@ struct avf_tx_queue { bool q_set; /* if rx queue has been configured */ bool tx_deferred_start; /* don't start this queue in dev start */ - const struct avf_txq_ops *ops; + const struct iavf_txq_ops *ops; }; /* Offload features */ -union avf_tx_offload { +union iavf_tx_offload { uint64_t data; struct { uint64_t l2_len:7; /* L2 (MAC) Header Length. */ @@ -154,68 +154,68 @@ union avf_tx_offload { }; }; -int avf_dev_rx_queue_setup(struct rte_eth_dev *dev, +int iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp); -int avf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); -int avf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); -void avf_dev_rx_queue_release(void *rxq); +int iavf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); +int iavf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); +void iavf_dev_rx_queue_release(void *rxq); -int avf_dev_tx_queue_setup(struct rte_eth_dev *dev, +int iavf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); -int avf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); -int avf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); -void 
avf_dev_tx_queue_release(void *txq); -void avf_stop_queues(struct rte_eth_dev *dev); -uint16_t avf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, +int iavf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); +int iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); +void iavf_dev_tx_queue_release(void *txq); +void iavf_stop_queues(struct rte_eth_dev *dev); +uint16_t iavf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); -uint16_t avf_recv_scattered_pkts(void *rx_queue, +uint16_t iavf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); -uint16_t avf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, +uint16_t iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); -uint16_t avf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, +uint16_t iavf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); -void avf_set_rx_function(struct rte_eth_dev *dev); -void avf_set_tx_function(struct rte_eth_dev *dev); -void avf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, +void iavf_set_rx_function(struct rte_eth_dev *dev); +void iavf_set_tx_function(struct rte_eth_dev *dev); +void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); -void avf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, +void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo); -uint32_t avf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id); -int avf_dev_rx_desc_status(void *rx_queue, uint16_t offset); -int avf_dev_tx_desc_status(void *tx_queue, uint16_t offset); +uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id); +int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset); +int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset); -uint16_t avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, +uint16_t 
iavf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); -uint16_t avf_recv_scattered_pkts_vec(void *rx_queue, +uint16_t iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); -uint16_t avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, +uint16_t iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); -int avf_rxq_vec_setup(struct avf_rx_queue *rxq); -int avf_txq_vec_setup(struct avf_tx_queue *txq); +int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq); +int iavf_txq_vec_setup(struct iavf_tx_queue *txq); static inline -void avf_dump_rx_descriptor(struct avf_rx_queue *rxq, +void iavf_dump_rx_descriptor(struct iavf_rx_queue *rxq, const volatile void *desc, uint16_t rx_id) { -#ifdef RTE_LIBRTE_AVF_16BYTE_RX_DESC - const volatile union avf_16byte_rx_desc *rx_desc = desc; +#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC + const volatile union iavf_16byte_rx_desc *rx_desc = desc; printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64"\n", rxq->queue_id, rx_id, rx_desc->read.pkt_addr, rx_desc->read.hdr_addr); #else - const volatile union avf_32byte_rx_desc *rx_desc = desc; + const volatile union iavf_32byte_rx_desc *rx_desc = desc; printf("Queue %d Rx_desc %d: QW0: 0x%016"PRIx64" QW1: 0x%016"PRIx64 " QW2: 0x%016"PRIx64" QW3: 0x%016"PRIx64"\n", rxq->queue_id, @@ -228,21 +228,21 @@ void avf_dump_rx_descriptor(struct avf_rx_queue *rxq, * to print the qwords */ static inline -void avf_dump_tx_descriptor(const struct avf_tx_queue *txq, +void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq, const volatile void *desc, uint16_t tx_id) { const char *name; - const volatile struct avf_tx_desc *tx_desc = desc; - enum avf_tx_desc_dtype_value type; + const volatile struct iavf_tx_desc *tx_desc = desc; + enum iavf_tx_desc_dtype_value type; - type = (enum avf_tx_desc_dtype_value)rte_le_to_cpu_64( + type = (enum iavf_tx_desc_dtype_value)rte_le_to_cpu_64( 
tx_desc->cmd_type_offset_bsz & - rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)); + rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)); switch (type) { - case AVF_TX_DESC_DTYPE_DATA: + case IAVF_TX_DESC_DTYPE_DATA: name = "Tx_data_desc"; break; - case AVF_TX_DESC_DTYPE_CONTEXT: + case IAVF_TX_DESC_DTYPE_CONTEXT: name = "Tx_context_desc"; break; default: @@ -256,13 +256,13 @@ void avf_dump_tx_descriptor(const struct avf_tx_queue *txq, } #ifdef DEBUG_DUMP_DESC -#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) \ - avf_dump_rx_descriptor(rxq, desc, rx_id) -#define AVF_DUMP_TX_DESC(txq, desc, tx_id) \ - avf_dump_tx_descriptor(txq, desc, tx_id) +#define IAVF_DUMP_RX_DESC(rxq, desc, rx_id) \ + iavf_dump_rx_descriptor(rxq, desc, rx_id) +#define IAVF_DUMP_TX_DESC(txq, desc, tx_id) \ + iavf_dump_tx_descriptor(txq, desc, tx_id) #else -#define AVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0) -#define AVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0) +#define IAVF_DUMP_RX_DESC(rxq, desc, rx_id) do { } while (0) +#define IAVF_DUMP_TX_DESC(txq, desc, tx_id) do { } while (0) #endif -#endif /* _AVF_RXTX_H_ */ +#endif /* _IAVF_RXTX_H_ */ diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h index 16433e586..db509d71f 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_common.h +++ b/drivers/net/iavf/iavf_rxtx_vec_common.h @@ -2,8 +2,8 @@ * Copyright(c) 2017 Intel Corporation */ -#ifndef _AVF_RXTX_VEC_COMMON_H_ -#define _AVF_RXTX_VEC_COMMON_H_ +#ifndef _IAVF_RXTX_VEC_COMMON_H_ +#define _IAVF_RXTX_VEC_COMMON_H_ #include #include #include @@ -12,10 +12,10 @@ #include "iavf_rxtx.h" static inline uint16_t -reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs, +reassemble_packets(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_bufs, uint16_t nb_bufs, uint8_t *split_flags) { - struct rte_mbuf *pkts[AVF_VPMD_RX_MAX_BURST]; + struct rte_mbuf *pkts[IAVF_VPMD_RX_MAX_BURST]; struct rte_mbuf *start = rxq->pkt_first_seg; struct rte_mbuf *end = 
rxq->pkt_last_seg; unsigned int pkt_idx, buf_idx; @@ -75,18 +75,18 @@ reassemble_packets(struct avf_rx_queue *rxq, struct rte_mbuf **rx_bufs, } static __rte_always_inline int -avf_tx_free_bufs(struct avf_tx_queue *txq) +iavf_tx_free_bufs(struct iavf_tx_queue *txq) { - struct avf_tx_entry *txep; + struct iavf_tx_entry *txep; uint32_t n; uint32_t i; int nb_free = 0; - struct rte_mbuf *m, *free[AVF_VPMD_TX_MAX_FREE_BUF]; + struct rte_mbuf *m, *free[IAVF_VPMD_TX_MAX_FREE_BUF]; /* check DD bits on threshold descriptor */ if ((txq->tx_ring[txq->next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(AVF_TXD_QW1_DTYPE_MASK)) != - rte_cpu_to_le_64(AVF_TX_DESC_DTYPE_DESC_DONE)) + rte_cpu_to_le_64(IAVF_TXD_QW1_DTYPE_MASK)) != + rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE)) return 0; n = txq->rs_thresh; @@ -132,7 +132,7 @@ avf_tx_free_bufs(struct avf_tx_queue *txq) } static __rte_always_inline void -tx_backlog_entry(struct avf_tx_entry *txep, +tx_backlog_entry(struct iavf_tx_entry *txep, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { int i; @@ -142,7 +142,7 @@ tx_backlog_entry(struct avf_tx_entry *txep, } static inline void -_avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq) +_iavf_rx_queue_release_mbufs_vec(struct iavf_rx_queue *rxq) { const unsigned int mask = rxq->nb_rx_desc - 1; unsigned int i; @@ -172,7 +172,7 @@ _avf_rx_queue_release_mbufs_vec(struct avf_rx_queue *rxq) } static inline void -_avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq) +_iavf_tx_queue_release_mbufs_vec(struct iavf_tx_queue *txq) { unsigned i; const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1); @@ -191,7 +191,7 @@ _avf_tx_queue_release_mbufs_vec(struct avf_tx_queue *txq) } static inline int -avf_rxq_vec_setup_default(struct avf_rx_queue *rxq) +iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq) { uintptr_t p; struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */ diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c index 
7071ed686..3d98514c3 100644 --- a/drivers/net/iavf/iavf_rxtx_vec_sse.c +++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c @@ -19,12 +19,12 @@ #endif static inline void -avf_rxq_rearm(struct avf_rx_queue *rxq) +iavf_rxq_rearm(struct iavf_rx_queue *rxq) { int i; uint16_t rx_id; - volatile union avf_rx_desc *rxdp; + volatile union iavf_rx_desc *rxdp; struct rte_mbuf **rxp = &rxq->sw_ring[rxq->rxrearm_start]; struct rte_mbuf *mb0, *mb1; __m128i hdr_room = _mm_set_epi64x(RTE_PKTMBUF_HEADROOM, @@ -38,7 +38,7 @@ avf_rxq_rearm(struct avf_rx_queue *rxq) rxq->rx_free_thresh) < 0) { if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->nb_rx_desc) { dma_addr0 = _mm_setzero_si128(); - for (i = 0; i < AVF_VPMD_DESCS_PER_LOOP; i++) { + for (i = 0; i < IAVF_VPMD_DESCS_PER_LOOP; i++) { rxp[i] = &rxq->fake_mbuf; _mm_store_si128((__m128i *)&rxdp[i].read, dma_addr0); @@ -90,11 +90,11 @@ avf_rxq_rearm(struct avf_rx_queue *rxq) rx_id, rxq->rxrearm_start, rxq->rxrearm_nb); /* Update the tail pointer on the NIC */ - AVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id); + IAVF_PCI_REG_WRITE(rxq->qrx_tail, rx_id); } static inline void -desc_to_olflags_v(struct avf_rx_queue *rxq, __m128i descs[4], +desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4], struct rte_mbuf **rx_pkts) { const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer); @@ -228,15 +228,15 @@ desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts) } /* Notice: - * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet - * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST + * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet + * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST * numbers of DD bits */ static inline uint16_t -_recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts, +_recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, uint8_t *split_packet) { - volatile union avf_rx_desc *rxdp; + volatile union iavf_rx_desc 
*rxdp; struct rte_mbuf **sw_ring; uint16_t nb_pkts_recd; int pos; @@ -260,11 +260,11 @@ _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts, offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8); __m128i dd_check, eop_check; - /* nb_pkts shall be less equal than AVF_VPMD_RX_MAX_BURST */ - nb_pkts = RTE_MIN(nb_pkts, AVF_VPMD_RX_MAX_BURST); + /* nb_pkts shall be less equal than IAVF_VPMD_RX_MAX_BURST */ + nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST); - /* nb_pkts has to be floor-aligned to AVF_VPMD_DESCS_PER_LOOP */ - nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, AVF_VPMD_DESCS_PER_LOOP); + /* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP */ + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP); /* Just the act of getting into the function from the application is * going to cost about 7 cycles @@ -277,13 +277,13 @@ _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts, * of time to act */ if (rxq->rxrearm_nb > rxq->rx_free_thresh) - avf_rxq_rearm(rxq); + iavf_rxq_rearm(rxq); /* Before we start moving massive data around, check to see if * there is actually a packet available */ if (!(rxdp->wb.qword1.status_error_len & - rte_cpu_to_le_32(1 << AVF_RX_DESC_STATUS_DD_SHIFT))) + rte_cpu_to_le_32(1 << IAVF_RX_DESC_STATUS_DD_SHIFT))) return 0; /* 4 packets DD mask */ @@ -328,9 +328,9 @@ _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts, */ for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts; - pos += AVF_VPMD_DESCS_PER_LOOP, - rxdp += AVF_VPMD_DESCS_PER_LOOP) { - __m128i descs[AVF_VPMD_DESCS_PER_LOOP]; + pos += IAVF_VPMD_DESCS_PER_LOOP, + rxdp += IAVF_VPMD_DESCS_PER_LOOP) { + __m128i descs[IAVF_VPMD_DESCS_PER_LOOP]; __m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4; __m128i zero, staterr, sterr_tmp1, sterr_tmp2; /* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. 
		 */
@@ -445,7 +445,7 @@ _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
			eop_bits = _mm_shuffle_epi8(eop_bits, eop_shuf_mask);
			/* store the resulting 32-bit value */
			*(int *)split_packet = _mm_cvtsi128_si32(eop_bits);
-			split_packet += AVF_VPMD_DESCS_PER_LOOP;
+			split_packet += IAVF_VPMD_DESCS_PER_LOOP;
		}

		/* C.3 calc available number of desc */
@@ -462,7 +462,7 @@ _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
		/* C.4 calc avaialbe number of desc */
		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
		nb_pkts_recd += var;
-		if (likely(var != AVF_VPMD_DESCS_PER_LOOP))
+		if (likely(var != IAVF_VPMD_DESCS_PER_LOOP))
			break;
	}

@@ -475,12 +475,12 @@ _recv_raw_pkts_vec(struct avf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
}

/* Notice:
- * - nb_pkts < AVF_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > AVF_VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ * - nb_pkts < IAVF_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
 *   numbers of DD bits
 */
uint16_t
-avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+iavf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
		  uint16_t nb_pkts)
{
	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
@@ -488,16 +488,16 @@ avf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,

/* vPMD receive routine that reassembles scattered packets
 * Notice:
- * - nb_pkts < AVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > VPMD_RX_MAX_BURST, only scan AVF_VPMD_RX_MAX_BURST
+ * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
+ * - nb_pkts > VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
 *   numbers of DD bits
 */
uint16_t
-avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
			    uint16_t nb_pkts)
{
-	struct avf_rx_queue *rxq = rx_queue;
-	uint8_t split_flags[AVF_VPMD_RX_MAX_BURST] = {0};
+	struct iavf_rx_queue *rxq = rx_queue;
+	uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0};
	unsigned int i = 0;

	/* get some new buffers */
@@ -527,13 +527,13 @@ avf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
}

static inline void
-vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
+vtx1(volatile struct iavf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
{
	uint64_t high_qw =
-			(AVF_TX_DESC_DTYPE_DATA |
-			 ((uint64_t)flags << AVF_TXD_QW1_CMD_SHIFT) |
+			(IAVF_TX_DESC_DTYPE_DATA |
+			 ((uint64_t)flags << IAVF_TXD_QW1_CMD_SHIFT) |
			 ((uint64_t)pkt->data_len <<
-			  AVF_TXD_QW1_TX_BUF_SZ_SHIFT));
+			  IAVF_TXD_QW1_TX_BUF_SZ_SHIFT));

	__m128i descriptor = _mm_set_epi64x(high_qw,
					    pkt->buf_iova + pkt->data_off);
@@ -541,7 +541,7 @@ vtx1(volatile struct avf_tx_desc *txdp, struct rte_mbuf *pkt, uint64_t flags)
}

static inline void
-avf_vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
+iavf_vtx(volatile struct iavf_tx_desc *txdp, struct rte_mbuf **pkt,
	uint16_t nb_pkts, uint64_t flags)
{
	int i;
@@ -551,22 +551,22 @@ avf_vtx(volatile struct avf_tx_desc *txdp, struct rte_mbuf **pkt,
}

uint16_t
-avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+iavf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
			 uint16_t nb_pkts)
{
-	struct avf_tx_queue *txq = (struct avf_tx_queue *)tx_queue;
-	volatile struct avf_tx_desc *txdp;
-	struct avf_tx_entry *txep;
+	struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
+	volatile struct iavf_tx_desc *txdp;
+	struct iavf_tx_entry *txep;
	uint16_t n, nb_commit, tx_id;
-	uint64_t flags = AVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 must be set */
-	uint64_t rs = AVF_TX_DESC_CMD_RS | flags;
+	uint64_t flags = IAVF_TX_DESC_CMD_EOP | 0x04;  /* bit 2 must be set */
+	uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
	int i;

	/* cross rx_thresh boundary is not allowed */
	nb_pkts = RTE_MIN(nb_pkts, txq->rs_thresh);

	if (txq->nb_free < txq->free_thresh)
-		avf_tx_free_bufs(txq);
+		iavf_tx_free_bufs(txq);

	nb_pkts = (uint16_t)RTE_MIN(txq->nb_free, nb_pkts);
	if (unlikely(nb_pkts == 0))
@@ -600,13 +600,13 @@ avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,

	tx_backlog_entry(txep, tx_pkts, nb_commit);

-	avf_vtx(txdp, tx_pkts, nb_commit, flags);
+	iavf_vtx(txdp, tx_pkts, nb_commit, flags);

	tx_id = (uint16_t)(tx_id + nb_commit);
	if (tx_id > txq->next_rs) {
		txq->tx_ring[txq->next_rs].cmd_type_offset_bsz |=
-			rte_cpu_to_le_64(((uint64_t)AVF_TX_DESC_CMD_RS) <<
-					 AVF_TXD_QW1_CMD_SHIFT);
+			rte_cpu_to_le_64(((uint64_t)IAVF_TX_DESC_CMD_RS) <<
+					 IAVF_TXD_QW1_CMD_SHIFT);
		txq->next_rs =
			(uint16_t)(txq->next_rs + txq->rs_thresh);
	}
@@ -616,41 +616,41 @@ avf_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_pkts=%u",
		   txq->port_id, txq->queue_id, tx_id, nb_pkts);

-	AVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+	IAVF_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);

	return nb_pkts;
}

static void __attribute__((cold))
-avf_rx_queue_release_mbufs_sse(struct avf_rx_queue *rxq)
+iavf_rx_queue_release_mbufs_sse(struct iavf_rx_queue *rxq)
{
-	_avf_rx_queue_release_mbufs_vec(rxq);
+	_iavf_rx_queue_release_mbufs_vec(rxq);
}

static void __attribute__((cold))
-avf_tx_queue_release_mbufs_sse(struct avf_tx_queue *txq)
+iavf_tx_queue_release_mbufs_sse(struct iavf_tx_queue *txq)
{
-	_avf_tx_queue_release_mbufs_vec(txq);
+	_iavf_tx_queue_release_mbufs_vec(txq);
}

-static const struct avf_rxq_ops sse_vec_rxq_ops = {
-	.release_mbufs = avf_rx_queue_release_mbufs_sse,
+static const struct iavf_rxq_ops sse_vec_rxq_ops = {
+	.release_mbufs = iavf_rx_queue_release_mbufs_sse,
};

-static const struct avf_txq_ops sse_vec_txq_ops = {
-	.release_mbufs = avf_tx_queue_release_mbufs_sse,
+static const struct iavf_txq_ops sse_vec_txq_ops = {
+	.release_mbufs = iavf_tx_queue_release_mbufs_sse,
};

int __attribute__((cold))
-avf_txq_vec_setup(struct avf_tx_queue *txq)
+iavf_txq_vec_setup(struct iavf_tx_queue *txq)
{
	txq->ops = &sse_vec_txq_ops;
	return 0;
}

int __attribute__((cold))
-avf_rxq_vec_setup(struct avf_rx_queue *rxq)
+iavf_rxq_vec_setup(struct iavf_rx_queue *rxq)
{
	rxq->ops = &sse_vec_rxq_ops;
-	return avf_rxq_vec_setup_default(rxq);
+	return iavf_rxq_vec_setup_default(rxq);
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 9b6255368..6381fb63c 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -31,19 +31,19 @@
 #define ASQ_DELAY_MS  10

 /* Read data in admin queue to get msg from pf driver */
-static enum avf_status_code
-avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
+static enum iavf_status_code
+iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
		     uint8_t *buf)
{
-	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
-	struct avf_arq_event_info event;
+	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_arq_event_info event;
	enum virtchnl_ops opcode;
	int ret;

	event.buf_len = buf_len;
	event.msg_buf = buf;
-	ret = avf_clean_arq_element(hw, &event, NULL);
+	ret = iavf_clean_arq_element(hw, &event, NULL);
	/* Can't read any msg from adminQ */
	if (ret) {
		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
@@ -61,22 +61,22 @@ avf_read_msg_from_pf(struct avf_adapter *adapter, uint16_t buf_len,
		PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
			    vf->pend_cmd, opcode);

-	return AVF_SUCCESS;
+	return IAVF_SUCCESS;
}

static int
-avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
+iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
{
-	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
-	enum avf_status_code ret;
+	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
+	enum iavf_status_code ret;
	int err = 0;
	int i = 0;

	if (_atomic_set_cmd(vf, args->ops))
		return -1;

-	ret = avf_aq_send_msg_to_pf(hw, args->ops, AVF_SUCCESS,
+	ret = iavf_aq_send_msg_to_pf(hw, args->ops, IAVF_SUCCESS,
				    args->in_args, args->in_args_size, NULL);
	if (ret) {
		PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
@@ -93,9 +93,9 @@ avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
	case VIRTCHNL_OP_GET_VF_RESOURCES:
		/* for init virtchnl ops, need to poll the response */
		do {
-			ret = avf_read_msg_from_pf(adapter, args->out_size,
+			ret = iavf_read_msg_from_pf(adapter, args->out_size,
						   args->out_buffer);
-			if (ret == AVF_SUCCESS)
+			if (ret == IAVF_SUCCESS)
				break;
			rte_delay_ms(ASQ_DELAY_MS);
		} while (i++ < MAX_TRY_TIMES);
@@ -133,12 +133,12 @@ avf_execute_vf_cmd(struct avf_adapter *adapter, struct avf_cmd_info *args)
}

static void
-avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
+iavf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
			uint16_t msglen)
{
	struct virtchnl_pf_event *pf_msg =
			(struct virtchnl_pf_event *)msg;
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);

	if (msglen < sizeof(struct virtchnl_pf_event)) {
		PMD_DRV_LOG(DEBUG, "Error event");
@@ -154,7 +154,7 @@ avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
		vf->link_up = pf_msg->event_data.link_event.link_status;
		vf->link_speed = pf_msg->event_data.link_event.link_speed;
-		avf_dev_link_update(dev, 0);
+		iavf_dev_link_update(dev, 0);
		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
					      NULL);
		break;
@@ -168,17 +168,17 @@ avf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
}

void
-avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
+iavf_handle_virtchnl_msg(struct rte_eth_dev *dev)
{
-	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
-	struct avf_arq_event_info info;
+	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_arq_event_info info;
	uint16_t pending, aq_opc;
	enum virtchnl_ops msg_opc;
-	enum avf_status_code msg_ret;
+	enum iavf_status_code msg_ret;
	int ret;

-	info.buf_len = AVF_AQ_BUF_SZ;
+	info.buf_len = IAVF_AQ_BUF_SZ;
	if (!vf->aq_resp) {
		PMD_DRV_LOG(ERR, "Buffer for adminq resp should not be NULL");
		return;
@@ -187,26 +187,26 @@ avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
	pending = 1;
	while (pending) {
-		ret = avf_clean_arq_element(hw, &info, &pending);
+		ret = iavf_clean_arq_element(hw, &info, &pending);

-		if (ret != AVF_SUCCESS) {
+		if (ret != IAVF_SUCCESS) {
			PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ,"
				    "ret: %d", ret);
			break;
		}
		aq_opc = rte_le_to_cpu_16(info.desc.opcode);
		/* For the message sent from pf to vf, opcode is stored in
-		 * cookie_high of struct avf_aq_desc, while return error code
+		 * cookie_high of struct iavf_aq_desc, while return error code
		 * are stored in cookie_low, Which is done by PF driver.
		 */
		msg_opc = (enum virtchnl_ops)rte_le_to_cpu_32(
						info.desc.cookie_high);
-		msg_ret = (enum avf_status_code)rte_le_to_cpu_32(
+		msg_ret = (enum iavf_status_code)rte_le_to_cpu_32(
						info.desc.cookie_low);
		switch (aq_opc) {
-		case avf_aqc_opc_send_msg_to_vf:
+		case iavf_aqc_opc_send_msg_to_vf:
			if (msg_opc == VIRTCHNL_OP_EVENT) {
-				avf_handle_pf_event_msg(dev, info.msg_buf,
+				iavf_handle_pf_event_msg(dev, info.msg_buf,
							info.msg_len);
			} else {
				/* read message and it's expected one */
@@ -233,10 +233,10 @@ avf_handle_virtchnl_msg(struct rte_eth_dev *dev)
}

int
-avf_enable_vlan_strip(struct avf_adapter *adapter)
+iavf_enable_vlan_strip(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
-	struct avf_cmd_info args;
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_cmd_info args;
	int ret;

	memset(&args, 0, sizeof(args));
@@ -244,8 +244,8 @@ avf_enable_vlan_strip(struct avf_adapter *adapter)
	args.in_args = NULL;
	args.in_args_size = 0;
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	ret = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	ret = iavf_execute_vf_cmd(adapter, &args);
	if (ret)
		PMD_DRV_LOG(ERR, "Failed to execute command of"
			    " OP_ENABLE_VLAN_STRIPPING");
@@ -254,10 +254,10 @@ avf_enable_vlan_strip(struct avf_adapter *adapter)
}

int
-avf_disable_vlan_strip(struct avf_adapter *adapter)
+iavf_disable_vlan_strip(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
-	struct avf_cmd_info args;
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_cmd_info args;
	int ret;

	memset(&args, 0, sizeof(args));
@@ -265,8 +265,8 @@ avf_disable_vlan_strip(struct avf_adapter *adapter)
	args.in_args = NULL;
	args.in_args_size = 0;
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	ret = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	ret = iavf_execute_vf_cmd(adapter, &args);
	if (ret)
		PMD_DRV_LOG(ERR, "Failed to execute command of"
			    " OP_DISABLE_VLAN_STRIPPING");
@@ -279,11 +279,11 @@ avf_disable_vlan_strip(struct avf_adapter *adapter)

/* Check API version with sync wait until version read from admin queue */
int
-avf_check_api_version(struct avf_adapter *adapter)
+iavf_check_api_version(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_version_info version, *pver;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	version.major = VIRTCHNL_VERSION_MAJOR;
@@ -293,9 +293,9 @@ avf_check_api_version(struct avf_adapter *adapter)
	args.in_args = (uint8_t *)&version;
	args.in_args_size = sizeof(version);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
+	args.out_size = IAVF_AQ_BUF_SZ;

-	err = avf_execute_vf_cmd(adapter, &args);
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err) {
		PMD_INIT_LOG(ERR, "Fail to execute command of OP_VERSION");
		return err;
@@ -328,28 +328,28 @@ avf_check_api_version(struct avf_adapter *adapter)
}

int
-avf_get_vf_resource(struct avf_adapter *adapter)
+iavf_get_vf_resource(struct iavf_adapter *adapter)
{
-	struct avf_hw *hw = AVF_DEV_PRIVATE_TO_HW(adapter);
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
-	struct avf_cmd_info args;
+	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_cmd_info args;
	uint32_t caps, len;
	int err, i;

	args.ops = VIRTCHNL_OP_GET_VF_RESOURCES;
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
+	args.out_size = IAVF_AQ_BUF_SZ;

	/* TODO: basic offload capabilities, need to
	 * add advanced/optional offload capabilities
	 */
-	caps = AVF_BASIC_OFFLOAD_CAPS;
+	caps = IAVF_BASIC_OFFLOAD_CAPS;

	args.in_args = (uint8_t *)&caps;
	args.in_args_size = sizeof(caps);

-	err = avf_execute_vf_cmd(adapter, &args);
+	err = iavf_execute_vf_cmd(adapter, &args);

	if (err) {
		PMD_DRV_LOG(ERR,
@@ -358,12 +358,12 @@ avf_get_vf_resource(struct avf_adapter *adapter)
	}

	len = sizeof(struct virtchnl_vf_resource) +
-	      AVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);
+	      IAVF_MAX_VF_VSI * sizeof(struct virtchnl_vsi_resource);

	rte_memcpy(vf->vf_res, args.out_buffer,
		   RTE_MIN(args.out_size, len));
	/* parse  VF config message back from PF*/
-	avf_parse_hw_config(hw, vf->vf_res);
+	iavf_parse_hw_config(hw, vf->vf_res);
	for (i = 0; i < vf->vf_res->num_vsis; i++) {
		if (vf->vf_res->vsi_res[i].vsi_type == VIRTCHNL_VSI_SRIOV)
			vf->vsi_res = &vf->vf_res->vsi_res[i];
@@ -382,11 +382,11 @@ avf_get_vf_resource(struct avf_adapter *adapter)
}

int
-avf_enable_queues(struct avf_adapter *adapter)
+iavf_enable_queues(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_queue_select queue_select;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	memset(&queue_select, 0, sizeof(queue_select));
@@ -399,8 +399,8 @@ avf_enable_queues(struct avf_adapter *adapter)
	args.in_args = (u8 *)&queue_select;
	args.in_args_size = sizeof(queue_select);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	err = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err) {
		PMD_DRV_LOG(ERR,
			    "Failed to execute command of OP_ENABLE_QUEUES");
@@ -410,11 +410,11 @@ avf_enable_queues(struct avf_adapter *adapter)
}

int
-avf_disable_queues(struct avf_adapter *adapter)
+iavf_disable_queues(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_queue_select queue_select;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	memset(&queue_select, 0, sizeof(queue_select));
@@ -427,8 +427,8 @@ avf_disable_queues(struct avf_adapter *adapter)
	args.in_args = (u8 *)&queue_select;
	args.in_args_size = sizeof(queue_select);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	err = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err) {
		PMD_DRV_LOG(ERR,
			    "Failed to execute command of OP_DISABLE_QUEUES");
@@ -438,12 +438,12 @@ avf_disable_queues(struct avf_adapter *adapter)
}

int
-avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
+iavf_switch_queue(struct iavf_adapter *adapter, uint16_t qid,
		 bool rx, bool on)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_queue_select queue_select;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	memset(&queue_select, 0, sizeof(queue_select));
@@ -460,8 +460,8 @@ avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
	args.in_args = (u8 *)&queue_select;
	args.in_args_size = sizeof(queue_select);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	err = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
@@ -469,11 +469,11 @@ avf_switch_queue(struct avf_adapter *adapter, uint16_t qid,
}

int
-avf_configure_rss_lut(struct avf_adapter *adapter)
+iavf_configure_rss_lut(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_rss_lut *rss_lut;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int len, err = 0;

	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
@@ -489,9 +489,9 @@ avf_configure_rss_lut(struct avf_adapter *adapter)
	args.in_args = (u8 *)rss_lut;
	args.in_args_size = len;
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
+	args.out_size = IAVF_AQ_BUF_SZ;

-	err = avf_execute_vf_cmd(adapter, &args);
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR,
			    "Failed to execute command of OP_CONFIG_RSS_LUT");
@@ -501,11 +501,11 @@ avf_configure_rss_lut(struct avf_adapter *adapter)
}

int
-avf_configure_rss_key(struct avf_adapter *adapter)
+iavf_configure_rss_key(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_rss_key *rss_key;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int len, err = 0;

	len = sizeof(*rss_key) + vf->vf_res->rss_key_size - 1;
@@ -521,9 +521,9 @@ avf_configure_rss_key(struct avf_adapter *adapter)
	args.in_args = (u8 *)rss_key;
	args.in_args_size = len;
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
+	args.out_size = IAVF_AQ_BUF_SZ;

-	err = avf_execute_vf_cmd(adapter, &args);
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR,
			    "Failed to execute command of OP_CONFIG_RSS_KEY");
@@ -533,16 +533,16 @@ avf_configure_rss_key(struct avf_adapter *adapter)
}

int
-avf_configure_queues(struct avf_adapter *adapter)
+iavf_configure_queues(struct iavf_adapter *adapter)
{
-	struct avf_rx_queue **rxq =
-		(struct avf_rx_queue **)adapter->eth_dev->data->rx_queues;
-	struct avf_tx_queue **txq =
-		(struct avf_tx_queue **)adapter->eth_dev->data->tx_queues;
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_rx_queue **rxq =
+		(struct iavf_rx_queue **)adapter->eth_dev->data->rx_queues;
+	struct iavf_tx_queue **txq =
+		(struct iavf_tx_queue **)adapter->eth_dev->data->tx_queues;
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_vsi_queue_config_info *vc_config;
	struct virtchnl_queue_pair_info *vc_qp;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	uint16_t i, size;
	int err;

@@ -581,9 +581,9 @@ avf_configure_queues(struct avf_adapter *adapter)
	args.in_args = (uint8_t *)vc_config;
	args.in_args_size = size;
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
+	args.out_size = IAVF_AQ_BUF_SZ;

-	err = avf_execute_vf_cmd(adapter, &args);
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR, "Failed to execute command of"
			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
@@ -593,12 +593,12 @@ avf_configure_queues(struct avf_adapter *adapter)
}

int
-avf_config_irq_map(struct avf_adapter *adapter)
+iavf_config_irq_map(struct iavf_adapter *adapter)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_irq_map_info *map_info;
	struct virtchnl_vector_map *vecmap;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int len, i, err;

	len = sizeof(struct virtchnl_irq_map_info) +
@@ -612,7 +612,7 @@ avf_config_irq_map(struct avf_adapter *adapter)
	for (i = 0; i < vf->nb_msix; i++) {
		vecmap = &map_info->vecmap[i];
		vecmap->vsi_id = vf->vsi_res->vsi_id;
-		vecmap->rxitr_idx = AVF_ITR_INDEX_DEFAULT;
+		vecmap->rxitr_idx = IAVF_ITR_INDEX_DEFAULT;
		vecmap->vector_id = vf->msix_base + i;
		vecmap->txq_map = 0;
		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
@@ -622,8 +622,8 @@ avf_config_irq_map(struct avf_adapter *adapter)
	args.in_args = (u8 *)map_info;
	args.in_args_size = len;
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	err = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");

@@ -632,12 +632,12 @@ avf_config_irq_map(struct avf_adapter *adapter)
}

void
-avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
+iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
{
	struct virtchnl_ether_addr_list *list;
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct ether_addr *addr;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int len, err, i, j;
	int next_begin = 0;
	int begin = 0;
@@ -645,18 +645,18 @@ avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
	do {
		j = 0;
		len = sizeof(struct virtchnl_ether_addr_list);
-		for (i = begin; i < AVF_NUM_MACADDR_MAX; i++, next_begin++) {
+		for (i = begin; i < IAVF_NUM_MACADDR_MAX; i++, next_begin++) {
			addr = &adapter->eth_dev->data->mac_addrs[i];
			if (is_zero_ether_addr(addr))
				continue;
			len += sizeof(struct virtchnl_ether_addr);
-			if (len >= AVF_AQ_BUF_SZ) {
+			if (len >= IAVF_AQ_BUF_SZ) {
				next_begin = i + 1;
				break;
			}
		}

-		list = rte_zmalloc("avf_del_mac_buffer", len, 0);
+		list = rte_zmalloc("iavf_del_mac_buffer", len, 0);
		if (!list) {
			PMD_DRV_LOG(ERR, "fail to allocate memory");
			return;
@@ -681,24 +681,24 @@ avf_add_del_all_mac_addr(struct avf_adapter *adapter, bool add)
		args.in_args = (uint8_t *)list;
		args.in_args_size = len;
		args.out_buffer = vf->aq_resp;
-		args.out_size = AVF_AQ_BUF_SZ;
-		err = avf_execute_vf_cmd(adapter, &args);
+		args.out_size = IAVF_AQ_BUF_SZ;
+		err = iavf_execute_vf_cmd(adapter, &args);
		if (err)
			PMD_DRV_LOG(ERR, "fail to execute command %s",
				    add ? "OP_ADD_ETHER_ADDRESS" :
				    "OP_DEL_ETHER_ADDRESS");
		rte_free(list);
		begin = next_begin;
-	} while (begin < AVF_NUM_MACADDR_MAX);
+	} while (begin < IAVF_NUM_MACADDR_MAX);
}

int
-avf_query_stats(struct avf_adapter *adapter,
+iavf_query_stats(struct iavf_adapter *adapter,
		struct virtchnl_eth_stats **pstats)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_queue_select q_stats;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	memset(&q_stats, 0, sizeof(q_stats));
@@ -707,9 +707,9 @@ avf_query_stats(struct avf_adapter *adapter,
	args.in_args = (uint8_t *)&q_stats;
	args.in_args_size = sizeof(q_stats);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
+	args.out_size = IAVF_AQ_BUF_SZ;

-	err = avf_execute_vf_cmd(adapter, &args);
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err) {
		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
		*pstats = NULL;
@@ -720,13 +720,13 @@ avf_query_stats(struct avf_adapter *adapter,
}

int
-avf_config_promisc(struct avf_adapter *adapter,
+iavf_config_promisc(struct iavf_adapter *adapter,
		   bool enable_unicast,
		   bool enable_multicast)
{
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_promisc_info promisc;
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	promisc.flags = 0;
@@ -742,9 +742,9 @@ avf_config_promisc(struct avf_adapter *adapter,
	args.in_args = (uint8_t *)&promisc;
	args.in_args_size = sizeof(promisc);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
+	args.out_size = IAVF_AQ_BUF_SZ;

-	err = avf_execute_vf_cmd(adapter, &args);
+	err = iavf_execute_vf_cmd(adapter, &args);

	if (err)
		PMD_DRV_LOG(ERR,
@@ -753,14 +753,14 @@ avf_config_promisc(struct avf_adapter *adapter,
}

int
-avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
+iavf_add_del_eth_addr(struct iavf_adapter *adapter, struct ether_addr *addr,
		     bool add)
{
	struct virtchnl_ether_addr_list *list;
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	uint8_t cmd_buffer[sizeof(struct virtchnl_ether_addr_list) +
			   sizeof(struct virtchnl_ether_addr)];
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	list = (struct virtchnl_ether_addr_list *)cmd_buffer;
@@ -773,8 +773,8 @@ avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
	args.in_args = cmd_buffer;
	args.in_args_size = sizeof(cmd_buffer);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	err = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR, "fail to execute command %s",
			    add ? "OP_ADD_ETH_ADDR" : "OP_DEL_ETH_ADDR");
@@ -782,13 +782,13 @@ avf_add_del_eth_addr(struct avf_adapter *adapter, struct ether_addr *addr,
}

int
-avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
+iavf_add_del_vlan(struct iavf_adapter *adapter, uint16_t vlanid, bool add)
{
	struct virtchnl_vlan_filter_list *vlan_list;
-	struct avf_info *vf = AVF_DEV_PRIVATE_TO_VF(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) +
							sizeof(uint16_t)];
-	struct avf_cmd_info args;
+	struct iavf_cmd_info args;
	int err;

	vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer;
@@ -800,8 +800,8 @@ avf_add_del_vlan(struct avf_adapter *adapter, uint16_t vlanid, bool add)
	args.in_args = cmd_buffer;
	args.in_args_size = sizeof(cmd_buffer);
	args.out_buffer = vf->aq_resp;
-	args.out_size = AVF_AQ_BUF_SZ;
-	err = avf_execute_vf_cmd(adapter, &args);
+	args.out_size = IAVF_AQ_BUF_SZ;
+	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR, "fail to execute command %s",
			    add ? "OP_ADD_VLAN" : "OP_DEL_VLAN");
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index bc22ad179..e5a2f5553 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -15,6 +15,6 @@ sources = files(
 )

 if arch_subdir == 'x86'
-	dpdk_conf.set('RTE_LIBRTE_AVF_INC_VECTOR', 1)
+	dpdk_conf.set('RTE_LIBRTE_IAVF_INC_VECTOR', 1)
	sources += files('iavf_rxtx_vec_sse.c')
 endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f78bd0a22..443c2afab 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -164,7 +164,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += -lrte_pmd_enic
 _LDLIBS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += -lrte_pmd_fm10k
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_FAILSAFE) += -lrte_pmd_failsafe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += -lrte_pmd_i40e
-_LDLIBS-$(CONFIG_RTE_LIBRTE_AVF_PMD) += -lrte_pmd_iavf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IAVF_PMD) += -lrte_pmd_iavf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += -lrte_pmd_ice
 _LDLIBS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += -lrte_pmd_ixgbe
 ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)